Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool for analyzing the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed that extensively incorporates distribution system components. Distribution system state estimation is a mathematical procedure that estimates the operating states of a power distribution system using information collected in real time from available measurement devices. A computationally efficient state estimation algorithm based on the weighted-least-squares (WLS) method has also been developed in this research. Both algorithms are tested on several IEEE test feeders, and the results are validated.
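The WLS estimator at the core of such algorithms can be sketched in miniature. The toy example below (a hypothetical 2-state linear model with invented numbers, not the thesis's three-phase feeder formulation, which requires iterating Gauss-Newton on nonlinear power-flow measurement equations) solves the weighted normal equations directly:

```python
# Toy weighted-least-squares (WLS) state estimation: x_hat = (H^T W H)^-1 H^T W z
# for a hypothetical 2-dimensional state and diagonal weight matrix W.
def wls_estimate(H, z, w):
    # Accumulate the 2x2 gain matrix G = H^T W H and vector b = H^T W z.
    G = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (h1, h2), zi, wi in zip(H, z, w):
        G[0][0] += wi * h1 * h1
        G[0][1] += wi * h1 * h2
        G[1][0] += wi * h2 * h1
        G[1][1] += wi * h2 * h2
        b[0] += wi * h1 * zi
        b[1] += wi * h2 * zi
    # Solve G x = b by Cramer's rule (fine for a 2x2 system).
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    x1 = (b[0] * G[1][1] - b[1] * G[0][1]) / det
    x2 = (G[0][0] * b[1] - G[1][0] * b[0]) / det
    return x1, x2

# Three redundant measurements of the state (1.0, 2.0); the third is biased
# but carries a low weight, so it barely perturbs the estimate.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
z = [1.0, 2.0, 3.3]
w = [100.0, 100.0, 1.0]
x1, x2 = wls_estimate(H, z, w)
```

In a real distribution-system estimator this solve is repeated with a measurement Jacobian relinearized at each Gauss-Newton iteration.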
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power systems into several local areas with non-overlapping states. Unlike the centralized approach where all measurements are sent to a processing center, the proposed method distributes the state estimation task to the local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process such that the influence of the outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous-time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. The problem of state and parameter estimation in the resulting stochastic ordinary differential system is then represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example of a practical problem in vibrations, parameter, delay-upper-bound, and state estimation from noise-corrupted measurements are investigated for a distributed force model widely used to describe machine tool vibrations in the turning operation.
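The state-augmentation idea can be illustrated on a toy scalar system. The sketch below (assumed model x_{k+1} = a·x_k with a discrete-time EKF, not the paper's delayed integro-differential system or its extended Kalman-Bucy filter) appends the unknown parameter a to the state so the filter estimates both jointly:

```python
# Joint state/parameter estimation by state augmentation: the unknown parameter
# a is appended to the state, giving s = [x, a] with dynamics x_{k+1} = a*x_k,
# a_{k+1} = a_k, and scalar measurements y_k = x_k + noise.
def ekf_joint(ys, x0=1.0, a0=0.8, q=1e-6, r=0.01):
    x, a = x0, a0
    P = [[0.1, 0.0], [0.0, 1.0]]  # covariance of the augmented state [x, a]
    for y in ys:
        # Predict: Jacobian of s -> [a*x, a] is F = [[a, x], [0, 1]].
        F = [[a, x], [0.0, 1.0]]
        x = a * x
        FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in (0, 1)]
              for i in (0, 1)]
        P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + (q if i == j else 0.0)
              for j in (0, 1)] for i in (0, 1)]
        # Update with H = [1, 0]: the innovation corrects x and a together
        # through their cross-covariance P[1][0].
        S = P[0][0] + r
        K = (P[0][0] / S, P[1][0] / S)
        resid = y - x
        x += K[0] * resid
        a += K[1] * resid
        P = [[P[0][0] - K[0] * P[0][0], P[0][1] - K[0] * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, a

# Noiseless data generated from the true parameter a = 0.97.
true_a, xt, ys = 0.97, 1.0, []
for _ in range(60):
    xt *= true_a
    ys.append(xt)
x_est, a_est = ekf_joint(ys)
```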
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
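The bounding idea can be illustrated with elementary interval arithmetic. This sketch (toy quadratic head-loss relation h = r·q², invented numbers and units, not the paper's network model) propagates parametric uncertainty in flow and pipe resistance into guaranteed bounds on a downstream head:

```python
# Interval arithmetic sketch: propagate bounded uncertainty instead of point
# values.  Toy head-loss relation h = r * q**2 with q >= 0; numbers invented.
def interval_sub(a, b):
    # [a] - [b] = [a_lo - b_hi, a_hi - b_lo]
    return (a[0] - b[1], a[1] - b[0])

def headloss_interval(r, q):
    # h = r * q**2 is monotone in r and in q (for q >= 0), so the bounds are
    # attained at the interval endpoints.
    return (r[0] * q[0] ** 2, r[1] * q[1] ** 2)

upstream = (48.0, 52.0)                               # upstream head bounds (m)
h = headloss_interval((0.002, 0.003), (10.0, 12.0))   # resistance and flow bounds
downstream = interval_sub(upstream, h)                # guaranteed downstream bounds
```

A measured head falling outside such guaranteed bounds then signals unaccounted-for water or model error.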
Full State Feedback Control for Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Tillay
This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived, and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.
Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.
2013-01-01
This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
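A much-simplified analogue of such multimodal fusion is precision-weighted averaging with down-weighting of implausible sources. The scalar sketch below (a toy stand-in, not nMMF's population-code implementation) inflates the variance of any source that conflicts with a provisional fused mean, reducing its influence:

```python
# Toy precision-weighted fusion with outlier down-weighting: each source is a
# (mean, variance) pair for the same scalar body-state variable.
def fuse(estimates):
    # Provisional fusion by inverse-variance weighting.
    w = [1.0 / v for _, v in estimates]
    m = sum(wi * mu for wi, (mu, _) in zip(w, estimates)) / sum(w)
    # Inflate the variance of sources far from the provisional mean, which
    # reduces the influence of implausible (conflicting) sensors.
    adjusted = [(mu, v + (mu - m) ** 2) for mu, v in estimates]
    w2 = [1.0 / v for _, v in adjusted]
    return sum(wi * mu for wi, (mu, _) in zip(w2, adjusted)) / sum(w2)

# Two consistent sources near 1.0 and one conflicting source at 5.0; a naive
# equal-variance average would land at 2.35.
fused = fuse([(1.0, 0.1), (1.05, 0.1), (5.0, 0.1)])
```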
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. First, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Second, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is demonstrated on a simulated water pipe prototype.
Method and apparatus for detecting cyber attacks on an alternating current power grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEachern, Alexander; Hofmann, Ronald
A method and apparatus for detecting cyber attacks on remotely-operable elements of an alternating current distribution grid. Two state estimates of the distribution grid are prepared, one of which uses micro-synchrophasors. A difference between the two state estimates indicates a possible cyber attack.
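The core detection logic reduces to comparing the two estimates. A minimal sketch (hypothetical threshold and per-bus voltage vectors in per-unit) flags a possible attack when the conventionally derived estimate and the micro-synchrophasor-based estimate diverge:

```python
# Sketch of the two-estimate comparison: an attack that spoofs SCADA-side data
# shifts the SCADA-based estimate away from the independently derived
# micro-synchrophasor-based estimate.  Threshold and vectors are invented.
def detect_attack(scada_est, upmu_est, threshold=0.05):
    # Largest per-bus discrepancy between the two state estimates (p.u.).
    resid = max(abs(a - b) for a, b in zip(scada_est, upmu_est))
    return resid > threshold

normal = detect_attack([1.02, 0.98, 1.00], [1.01, 0.98, 1.00])    # small mismatch
attacked = detect_attack([1.02, 0.83, 1.00], [1.01, 0.98, 1.00])  # spoofed bus 2
```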
Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dexin; Yang, Liuqing; Florita, Anthony
The deregulation of the power system and the incorporation of generation from renewable energy sources necessitate faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral-clustering-based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE, and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
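Spectral approaches to regionalization partition the grid graph using eigenvectors of its Laplacian. The toy sketch below (pure-Python power iteration on a hypothetical 6-bus graph of two triangles joined by one tie line; a stand-in illustration, not the paper's three algorithms) bisects the graph by the sign pattern of the Fiedler vector:

```python
# Toy spectral bisection: partition a small grid graph by the sign pattern of
# the Fiedler vector (Laplacian eigenvector for the smallest nonzero
# eigenvalue), computed with shifted power iteration.
def fiedler_partition(adj, iters=1000):
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2.0 * max(deg) + 1.0   # shift: eigenvalues of L lie in [0, 2*max_deg]
    v = [((-1.0) ** i) * (i + 1) for i in range(n)]   # arbitrary start vector
    for _ in range(iters):
        # Deflate the all-ones eigenvector (eigenvalue 0 of L).
        mean = sum(v) / n
        v = [x - mean for x in v]
        # Multiply by (c*I - L): w_i = (c - deg_i)*v_i + sum_j adj[i][j]*v_j.
        w = [(c - deg[i]) * v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [1 if x > 0 else 0 for x in v]

# 6-bus toy system: two triangles (0-1-2 and 3-4-5) joined by a tie line 2-3.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0] * 6 for _ in range(6)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1
region = fiedler_partition(adj)
```

The sign pattern assigns each triangle to its own region, cutting only the single tie line.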
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian
2014-07-01
This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized for improving performance in distributed parameter systems. By the Lyapunov stability theorem, each moving actuator/sensor agent velocity is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms, which penalize the disagreement of the estimates, are considered. Both filters result in well-posedness of the collective dynamics of state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
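The consensus-term idea can be sketched for scalar estimates. In this toy example (three fully connected agents with invented gains alpha and gamma, not the paper's distributed-parameter filters), each agent blends its own measurement with a penalty on disagreement with its neighbors, so the estimates converge close to a common value:

```python
# Toy consensus filtering: each agent i holds a scalar estimate x_i, driven by
# its own measurement y_i plus a consensus term that penalizes disagreement
# with its neighbors' estimates.  Gains alpha and gamma are invented.
def consensus_filter(y, neighbors, alpha=0.05, gamma=0.3, iters=200):
    x = list(y)                         # initialize at the local measurements
    for _ in range(iters):
        x = [x[i]
             + alpha * (y[i] - x[i])                           # local innovation
             + gamma * sum(x[j] - x[i] for j in neighbors[i])  # consensus term
             for i in range(len(x))]
    return x

# Three fully connected agents measuring the same quantity with different biases.
estimates = consensus_filter([4.8, 5.1, 5.3], [[1, 2], [0, 2], [0, 1]])
```

With the consensus gain dominating, the steady-state spread between agents shrinks far below the spread of the raw measurements.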
Decentralized Observer with a Consensus Filter for Distributed Discrete-Time Linear Systems
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Mandic, Milan
2011-01-01
This paper presents a decentralized observer with a consensus filter for state observation of discrete-time linear distributed systems. In this setup, each agent in the distributed system has an observer with a model of the plant that utilizes the set of locally available measurements, which may not make the full plant state detectable. This lack of detectability is overcome by utilizing a consensus filter that blends the state estimate of each agent with its neighbors' estimates. We assume that both the communication graph and the sensing graph are connected at all times. It is proven that the state estimates of the proposed observer asymptotically converge to the actual plant states under arbitrarily changing, but connected, communication and sensing topologies. As a byproduct of this research, we also obtain a result on the spectrum (the location of the eigenvalues) of the Laplacian for a family of graphs with self-loops.
Distributed and decentralized state estimation in gas networks as distributed parameter systems.
Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry
2015-09-01
In this paper, a framework for distributed and decentralized state estimation in high-pressure, long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant, including mass, momentum, and energy balance equations, is used to simulate the dynamic behavior. Due to several disadvantages of implementing a centralized Kalman filter for large-scale systems, the continuous/discrete form of the extended Kalman filter for distributed and decentralized estimation (DDE) has been extended to these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, the dynamic Riccati equation for each subsystem is constructed based on the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology is shown by comparing its accuracy and computational demands against a centralized Kalman filter on two viable benchmarks. In a real-life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Leveraging AMI data for distribution system model calibration and situational awareness
Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...
2015-01-15
The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as the ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. We demonstrate the approach on a university distribution network with state-of-the-art real-time and historical smart meter data infrastructure.
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage time between the states was estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
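The Viterbi decoding step referenced above can be sketched for a small discrete HMM. The example below (a hypothetical two-state chain with categorical emissions 'l'/'h' rather than the study's Poisson counts) recovers the most likely hidden-state sequence by dynamic programming:

```python
# Viterbi decoding for a toy 2-state HMM with invented probabilities.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ('Low', 'High')
start = {'Low': 0.5, 'High': 0.5}
trans = {'Low': {'Low': 0.8, 'High': 0.2}, 'High': {'Low': 0.2, 'High': 0.8}}
emit = {'Low': {'l': 0.9, 'h': 0.1}, 'High': {'l': 0.2, 'h': 0.8}}
decoded = viterbi(['l', 'l', 'h', 'h'], states, start, trans, emit)
```

Production implementations work in log probabilities to avoid underflow on long sequences.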
Gronberg, Jo Ann M.; Spahr, Norman E.
2012-01-01
The U.S. Geological Survey’s National Water-Quality Assessment program requires nutrient input for analysis of the national and regional assessment of water quality. Detailed information on nutrient inputs to the environment is needed to understand and address the many serious problems that arise from excess nutrients in the streams and groundwater of the Nation. This report updates estimated county-level farm and nonfarm nitrogen and phosphorus input from commercial fertilizer sales for the conterminous United States for 1987 through 2006. Estimates were calculated from the Association of American Plant Food Control Officials fertilizer sales data, Census of Agriculture fertilizer expenditures, and U.S. Census Bureau county population. A previous national approach for deriving farm and nonfarm fertilizer nutrient estimates was evaluated, and a revised method for selecting representative states to calculate national farm and nonfarm proportions was developed. A national approach was used to estimate farm and nonfarm fertilizer inputs because not all states distinguish between farm and nonfarm use, and the quality of fertilizer reporting varies from year to year. For states that distinguish between farm and nonfarm use, the spatial distribution of the ratios of nonfarm-to-total fertilizer estimates for nitrogen and phosphorus calculated using the national-based farm and nonfarm proportions was similar to the spatial distribution of the ratios generated using state-based farm and nonfarm proportions. In addition, the relative highs and lows in the temporal distribution of farm and nonfarm nitrogen and phosphorus input at the state level were maintained: the periods of high and low usage coincide between national- and state-based values. With a few exceptions, nonfarm nitrogen estimates were found to be reasonable when compared to the amounts that would result if the lawn application rates recommended by state and university agricultural agencies were used.
Also, states with higher nonfarm-to-total fertilizer ratios for nitrogen and phosphorus tended to have higher urban land-use percentages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
State estimation for distributed systems with sensing delay
NASA Astrophysics Data System (ADS)
Alexander, Harold L.
1991-08-01
Control of complex systems such as remote robotic vehicles requires combining data from many sensors where the data may often be delayed by sensory processing requirements. The number and variety of sensors make it desirable to distribute the computational burden of sensing and estimation among multiple processors. Classic Kalman filters do not lend themselves to distributed implementations or delayed measurement data. The alternative Kalman filter designs presented in this paper are adapted for delays in sensor data generation and for distribution of computation for sensing and estimation over a set of networked processors.
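One standard way to accommodate delayed sensor data is to buffer past filter states and refilter once a late measurement arrives. The sketch below (a scalar random-walk model with invented noise values; one possible design, not necessarily the paper's) illustrates the roll-back-and-refilter idea:

```python
# Sketch of delay-tolerant Kalman filtering: buffer past (x, P) pairs so that a
# measurement arriving several steps late can be applied at its own timestamp
# and the filter rerun forward.  Scalar random-walk model; values invented.
class DelayedKalman:
    def __init__(self, q=0.01, r=0.1):
        self.q, self.r = q, r
        self.hist = [(0.0, 1.0)]   # buffered (x, P) after each time step
        self.meas = []             # meas[k] is the measurement for step k+1
    def step(self):
        # Time update only; a late measurement may revise this entry later.
        x, P = self.hist[-1]
        self.hist.append((x, P + self.q))
        self.meas.append(None)
    def measure(self, k, y):
        # Record the (possibly late) measurement for step k+1, then refilter
        # forward from the buffered state at step k.
        self.meas[k] = y
        x, P = self.hist[k]
        for t in range(k, len(self.meas)):
            P += self.q
            if self.meas[t] is not None:
                K = P / (P + self.r)
                x += K * (self.meas[t] - x)
                P *= 1.0 - K
            self.hist[t + 1] = (x, P)
    def estimate(self):
        return self.hist[-1][0]

kf = DelayedKalman()
for _ in range(5):
    kf.step()
est_before = kf.estimate()   # no measurements processed yet
kf.measure(1, 3.0)           # measurement from step 2 arrives three steps late
est_after = kf.estimate()
```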
Establishment of a center of excellence for applied mathematical and statistical research
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.
Spatial distribution of water supply in the coterminous United States
Thomas C. Brown; Michael T. Hobbins; Jorge A. Ramirez
2008-01-01
Available water supply across the contiguous 48 states was estimated as precipitation minus evapotranspiration using data for the period 1953-1994. Precipitation estimates were taken from the Parameter- Elevation Regressions on Independent Slopes Model (PRISM). Evapotranspiration was estimated using two models, the Advection-Aridity model and the Zhang model. The...
The Use of Collateral Information in Proficiency Estimation for the Trial State Assessment.
ERIC Educational Resources Information Center
Mazzeo, John; And Others
The adequacy of several approaches to estimation of proficiency distributions for the Trial State Assessment (TSA) in eighth grade mathematics of the National Assessment of Educational Progress was examined. These approaches are more restrictive than the estimation procedures originally used, with the same kind of plausible-values approach that…
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
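A bootstrap particle filter can be sketched in a few lines for a scalar problem. The toy example below (a random-walk state observed in Gaussian noise with invented noise levels, not the paper's orbital dynamics) propagates, weights, and resamples particles to approximate the posterior:

```python
import math
import random

# Bootstrap particle filter for a scalar random-walk state observed in Gaussian
# noise; a toy stand-in for the orbital-state problem.
def particle_filter(ys, n=500, q=0.05, r=0.5, seed=0):
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 2.0) for _ in range(n)]    # diffuse prior
    for y in ys:
        # Propagate each particle through the (random-walk) dynamics.
        parts = [p + rng.gauss(0.0, q) for p in parts]
        # Weight by the Gaussian measurement likelihood and normalize.
        ws = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        total = sum(ws)
        ws = [wi / total for wi in ws]
        # Resample to concentrate particles in high-probability regions.
        parts = rng.choices(parts, weights=ws, k=n)
    return sum(parts) / n                              # posterior mean estimate

rng = random.Random(1)
truth = 2.0
ys = [truth + rng.gauss(0.0, 0.5) for _ in range(30)]
estimate = particle_filter(ys)
```

Because the particles carry the full empirical distribution, the same machinery represents heavy-tailed or multimodal posteriors that a Gaussian filter cannot.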
NASA Astrophysics Data System (ADS)
Bezel, Ilya; Gaffney, Kelly J.; Garrett-Roe, Sean; Liu, Simon H.; Miller, André D.; Szymanski, Paul; Harris, Charles B.
2004-01-01
The ability of time- and angle-resolved two-photon photoemission to estimate the size distribution of electron localization in the plane of a metal-adsorbate interface is discussed. It is shown that the width of the angular distribution of the photoelectric current is inversely proportional to the electron localization size within the most common approximations in the description of image potential states. The localization of the n=1 image potential state for two monolayers of butyronitrile on Ag(111) is used as an example. For the delocalized n=1 state, the shape of the signal amplitude as a function of momentum parallel to the surface changes rapidly with time, indicating efficient intraband relaxation on a 100 fs time scale. For the localized state, little change was observed. The latter is related to the constant size distribution of electron localization, which is estimated to be a Gaussian with a 15±4 Å full width at half maximum in the plane of the interface. A simple model was used to study the effect of a weak localization potential on the overall width of the angular distribution of the photoemitted electrons, which exhibited little sensitivity to the details of the potential. This substantiates the validity of the localization size estimate.
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks
NASA Technical Reports Server (NTRS)
Rahmani, Amirreza; Mesbahi, Mehran; Fathpour, Nanaz; Hadaegh, Fred Y.
2008-01-01
In this work, we develop an approach to formation estimation by explicitly characterizing the formation's system-theoretic attributes in terms of the underlying inter-spacecraft information-exchange network. In particular, we approach the formation observer/estimator design by relaxing the accessibility to the global state information by a centralized observer/estimator and, in turn, providing an analysis and synthesis framework for formation observers/estimators that rely on local measurements. The novelty of our approach hinges upon the explicit examination of the underlying distributed spacecraft network in the realm of guidance, navigation, and control algorithmic analysis and design. The overarching goal of our general research program, some of whose results are reported in this paper, is the development of distributed spacecraft estimation algorithms that are scalable, modular, and robust to variations in the topology and link characteristics of the formation information exchange network. In this work, we consider the observability of a spacecraft formation from a single observation node and utilize the agreement protocol as a mechanism for observing formation states from local measurements. Specifically, we show how the symmetry structure of the network, characterized in terms of its automorphism group, directly relates to the observability of the corresponding multi-agent system. The ramifications of this notion of observability over networks are then explored in the context of distributed formation estimation.
Multi-year Estimates of Methane Fluxes in Alaska from an Atmospheric Inverse Model
NASA Astrophysics Data System (ADS)
Miller, S. M.; Commane, R.; Chang, R. Y. W.; Miller, C. E.; Michalak, A. M.; Dinardo, S. J.; Dlugokencky, E. J.; Hartery, S.; Karion, A.; Lindaas, J.; Sweeney, C.; Wofsy, S. C.
2015-12-01
We estimate methane fluxes across Alaska over a multi-year period using observations from a three-year aircraft campaign, the Carbon Arctic Reservoirs Vulnerability Experiment (CARVE). Existing estimates of methane from Alaska and other Arctic regions disagree in both magnitude and distribution, and before the CARVE campaign, atmospheric observations in the region were sparse. We combine these observations with an atmospheric particle trajectory model and a geostatistical inversion to estimate surface fluxes at the model grid scale. We first use this framework to estimate the spatial distribution of methane fluxes across the state. We find the largest fluxes in the south-east and North Slope regions of Alaska. This distribution is consistent with several estimates of wetland extent but contrasts with the distribution in most existing flux models. These flux models concentrate methane in warmer or more southerly regions of Alaska compared to the estimate presented here. This result suggests a discrepancy in how existing bottom-up models translate wetland area into methane fluxes across the state. We next use the inversion framework to explore inter-annual variability in regional-scale methane fluxes for 2012-2014. We examine the extent to which this variability correlates with weather or other environmental conditions. These results indicate the possible sensitivity of wetland fluxes to near-term variability in climate.
Murguía-Romero, Miguel; Jiménez-Flores, Rafael; Villalobos-Molina, Rafael; Méndez-Cruz, Adolfo René
2012-09-01
The geographical distribution of the metabolic syndrome (MetS) prevalence in young Mexicans (aged 17-24 years) was estimated stepwise starting from its prevalence based on the body mass index (BMI) in a study of 3,176 undergraduate students of this age group from Mexico City. To estimate the number of people with MetS by state, we multiplied its prevalence derived from the BMI range found in the Mexico City sample by the BMI proportions (range and state) obtained from the Mexico 2006 national survey on health and nutrition. Finally, to estimate the total number of young people with MetS in Mexico, its prevalence by state was multiplied by the share of young population in each state according to the National Population and Housing Census 2010. Based on these figures, we estimated the national prevalence of MetS at 15.8%, the average BMI at 24.1 (standard deviation = 4.2), and the prevalence of overweight people (BMI ≥25) of that age group at 39.0%. These results imply that 2,588,414 young Mexicans suffered from MetS in 2010. The Yucatan peninsula in the south and the Sonora state in the north showed the highest rates of MetS prevalence. The calculation of the MetS prevalence by BMI range in a sample of the population, and extrapolating it using the BMI proportions by range of the total population, was found to be a useful approach. We conclude that the BMI is a valuable public health tool to estimate MetS prevalence in the whole country, including its geographical distribution.
GPS FOM Chimney Analysis using Generalized Extreme Value Distribution
NASA Technical Reports Server (NTRS)
Ott, Rick; Frisbee, Joe; Saha, Kanan
2004-01-01
A common objective of a statistical analysis is to estimate a limit value, such as a 3-sigma or 95%-confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many such situations. It is well known that, according to the central limit theorem, the mean of a large data set is normally distributed irrespective of the distribution of the data from which the mean is derived. In a somewhat similar fashion, the extreme value of a data set often has a distribution that can be formulated with a generalized extreme value distribution. In space shuttle entry with 3-string GPS navigation, the figure of merit (FOM) value gives a measure of GPS navigated-state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney, a period of time during which the FOM value stays higher than 5. A longer period of FOM value 6 or higher causes the navigated state to accumulate more error for lack of a state update. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential, so the extreme value statistical technique is applied to analyze them. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
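The workflow this abstract describes — fit an extreme-value distribution to a sample of maxima, then read off a limit value — can be sketched with a small numpy-only example. This sketch uses the Gumbel special case of the GEV (shape ξ = 0) with a method-of-moments fit rather than the paper's maximum-likelihood GEV fit, and the "chimney duration" numbers are purely illustrative.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel distribution (GEV with shape xi = 0)."""
    beta = np.std(maxima) * np.sqrt(6.0) / np.pi   # scale parameter
    mu = np.mean(maxima) - EULER_GAMMA * beta      # location parameter
    return mu, beta

def gumbel_quantile(mu, beta, p=0.95):
    """p-quantile of the fitted extreme-value distribution (e.g. a 95% limit)."""
    return mu - beta * np.log(-np.log(p))

# Simulated "chimney duration" maxima from a known Gumbel(mu=100, beta=15),
# drawn by inverse-CDF sampling.
rng = np.random.default_rng(0)
u = rng.uniform(size=5000)
maxima = 100.0 - 15.0 * np.log(-np.log(u))

mu_hat, beta_hat = fit_gumbel(maxima)
limit95 = gumbel_quantile(mu_hat, beta_hat, 0.95)
```

With enough samples the fitted location and scale recover the generating values, and the 95% limit sits well above the location parameter.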
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss-of-load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
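The central object the abstract describes — a probability distribution over the storage state of charge (SOC) in each period, driven by loss-of-load probabilities (LOLPs) — can be illustrated with a toy forward propagation. The one-level-up/one-level-down dynamics below are a hypothetical simplification for illustration, not the paper's optimized dispatch.

```python
import numpy as np

def soc_distribution(lolps, levels=4):
    """Propagate a probability distribution over a discretized storage SOC.

    Toy model: in each period the storage discharges one level during a
    shortage event (probability = that period's LOLP) and recharges one
    level otherwise. Starts fully charged."""
    dist = np.zeros(levels)
    dist[-1] = 1.0
    for p in lolps:
        nxt = np.zeros(levels)
        for s in range(levels):
            down = max(s - 1, 0)          # discharge during an outage
            up = min(s + 1, levels - 1)   # recharge otherwise
            nxt[down] += p * dist[s]
            nxt[up] += (1.0 - p) * dist[s]
        dist = nxt
    return dist

# SOC distribution after four periods with per-period LOLPs.
dist = soc_distribution([0.1, 0.3, 0.05, 0.2], levels=4)
```

The resulting vector is exactly the kind of per-period SOC distribution that can then feed a reliability-based capacity value calculation as an effective forced outage rate.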
Real-Time Safety Monitoring and Prediction for the National Airspace System
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil
2016-01-01
As new operational paradigms and additional aircraft are being introduced into the National Airspace System (NAS), maintaining safety in such a rapidly growing environment becomes more challenging. It is therefore desirable to have both an overview of the current safety of the airspace at different levels of granularity, as well as an understanding of how the state of the safety will evolve into the future given the anticipated flight plans, weather forecasts, predicted health of assets in the airspace, and so on. To this end, we have developed a Real-Time Safety Monitoring (RTSM) framework that first estimates the state of the NAS using dynamic models. Then, given the state estimate and a probability distribution of future inputs to the NAS, the framework predicts the evolution of the NAS, i.e., the future state, and analyzes these future states to predict the occurrence of unsafe events. The entire probability distribution of airspace safety metrics is computed, not just point estimates, without significant assumptions regarding the distribution type or parameters. We demonstrate our overall approach by predicting the occurrence of some unsafe events and show how these predictions evolve in time as flight operations progress.
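Predicting the entire probability distribution of a safety metric, rather than a point estimate, is commonly done by Monte Carlo propagation of uncertain inputs. The toy model below is purely illustrative (the separation dynamics, speeds, and thresholds are invented for the sketch, not taken from the paper):

```python
import numpy as np

def predict_safety_distribution(n_samples=100_000, seed=1):
    """Monte Carlo sketch: propagate uncertain future inputs through a toy
    two-aircraft model and return the full distribution of a safety metric
    (minimum separation) plus the probability of an unsafe event."""
    rng = np.random.default_rng(seed)
    # Uncertain future inputs, drawn from assumed forecast distributions.
    speed_a = rng.normal(230.0, 10.0, n_samples)       # trailing aircraft, m/s
    speed_b = rng.normal(220.0, 10.0, n_samples)       # leading aircraft, m/s
    initial_gap = rng.normal(9000.0, 500.0, n_samples)  # m, same track
    horizon = 60.0                                      # s
    min_sep = initial_gap - (speed_a - speed_b) * horizon
    p_unsafe = (min_sep < 5000.0).mean()  # unsafe event: separation < 5 km
    return min_sep, p_unsafe

min_sep, p_unsafe = predict_safety_distribution()
```

Because the whole `min_sep` sample is returned, any statistic of the safety metric (quantiles, tail probabilities) is available without assuming a distribution family.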
Aerial Surveys Give New Estimates for Orangutans in Sabah, Malaysia
Gimenez, Olivier; Ambu, Laurentius; Ancrenaz, Karine; Andau, Patrick; Goossens, Benoît; Payne, John; Sawang, Azri; Tuuga, Augustine; Lackman-Ancrenaz, Isabelle
2005-01-01
Great apes are threatened with extinction, but precise information about the distribution and size of most populations is currently lacking. We conducted orangutan nest counts in the Malaysian state of Sabah (North Borneo), using a combination of ground and helicopter surveys, and provided a way to estimate the current distribution and size of the populations living throughout the entire state. We show that the number of nests detected during aerial surveys is directly related to the estimated true animal density and that a helicopter is an efficient tool to provide robust estimates of orangutan numbers. Our results reveal that with a total estimated population size of about 11,000 individuals, Sabah is one of the main strongholds for orangutans in North Borneo. More than 60% of orangutans living in the state occur outside protected areas, in production forests that have been through several rounds of logging extraction and are still exploited for timber. The role of exploited forests clearly merits further investigation for orangutan conservation in Sabah. PMID:15630475
Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.
Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping
2017-01-31
In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we separate the estimated state variables of different agents from each other. Consequently, the designed controllers are independent of the estimated state variables of neighbors, requiring only their output information, and the dynamics of each agent can be greatly different, which gives the design method a wider class of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks on practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
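Chernoff-style finite-data parameter estimation of the kind the abstract mentions turns an observed count into an interval that contains the true expectation except with tiny failure probability. The sketch below is a simplified textbook form of such an interval (not the Letter's two formulas), with the deviation terms obtained by inverting the standard multiplicative Chernoff tail bounds; the constants 2 and 3 come from the lower- and upper-tail exponents, and treating the observed count as the mean is the usual heuristic in finite-key analyses.

```python
import math

def chernoff_interval(k, eps=1e-10):
    """Two-sided Chernoff-style interval for the expected count E given an
    observed count k over many independent trials. With failure probability
    at most eps per side (simplified form):
        E >= k - sqrt(2 k ln(1/eps)),   E <= k + sqrt(3 k ln(1/eps))."""
    delta_lo = math.sqrt(2.0 * k * math.log(1.0 / eps))
    delta_hi = math.sqrt(3.0 * k * math.log(1.0 / eps))
    return max(k - delta_lo, 0.0), k + delta_hi

# Example: 10,000 detections observed; bound the true expected count.
lo, hi = chernoff_interval(10_000)
```

The interval width shrinks relative to k as the sample grows, which is why longer data blocks tighten the secure key rate in the finite-data regime.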
This study presents the integrated volume method for estimating saturation pressure and enthalpy of vaporization of a whole aerosol distribution. We measure the change of total volume of an aerosol distribution between a reference state and several heated states, with the heating...
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and -2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resultant predictors of the field size estimates are discussed.
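The normalization step described above — log-transform a skewed size sample so its moments become interpretable — can be sketched in a few lines. The synthetic "field area" data below are illustrative, not the LANDSAT measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
# Skewed synthetic "field area" sample (hectares), log-normal like the
# highly peaked distributions reported in the abstract.
areas = rng.lognormal(mean=3.0, sigma=0.8, size=20_000)

log_areas = np.log(areas)               # log transform -> approximately Gaussian
mu, sigma = log_areas.mean(), log_areas.std()

median_area = np.exp(mu)                # back-transformed median
mean_area = np.exp(mu + 0.5 * sigma**2) # log-normal mean exceeds the median
```

Working with (mu, sigma) in log space gives the "readily interpretable" moments: the back-transformed median and mean characterize the population of fields despite the heavy right skew.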
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen
This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least squares (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast future system states from historical system states. The proposed approach is shown to be effective and accurate by the testing results.
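What makes an ELM cheap compared with a backprop-trained ANN is that the hidden layer is random and fixed, so training reduces to one linear least-squares solve for the output weights. A minimal numpy sketch of that idea, forecasting a toy "system state" series from its two previous values (the series and sizes are illustrative, not the paper's feeder data):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    least-squares output weights (a sketch, not the authors' code)."""

    def __init__(self, n_in, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # random input weights
        self.b = rng.normal(size=n_hidden)          # random biases
        self.beta = None                            # learned output weights

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Single least-squares solve -- the whole "training" step.
        self.beta, *_ = np.linalg.lstsq(self._h(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# Toy state series: forecast x[t] from the two previous states.
t = np.linspace(0.0, 20.0, 500)
x = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
X = np.column_stack([x[:-2], x[1:-1]])
y = x[2:]
model = ELM(n_in=2).fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
```

The absence of iterative weight updates is exactly the "overcomes these drawbacks" claim: no backprop epochs, and no SVR hyperparameter search.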
Porru, Marcella; Özkan, Leyla
2017-08-30
This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.
Cong, Zhang
2018-03-01
Based on an extended state observer, a novel and practical design method is developed to solve the distributed cooperative tracking problem of higher-order nonlinear multiagent systems with lumped disturbance over a fixed directed communication graph. The proposed method guarantees that all the follower nodes ultimately and uniformly converge to the leader node with bounded residual errors. The leader node, modeled as a higher-order non-autonomous nonlinear system, acts as a command generator giving commands only to a small portion of the networked follower nodes. An extended state observer is used to estimate the local states and lumped disturbance of each follower node. Moreover, each distributed controller can work independently, requiring only the relative states and/or the estimated relative states between itself and its neighbors. Finally, an engineering application to multiple flight simulator systems is demonstrated to test and verify the effectiveness of the proposed algorithm.
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.
Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration afterwards, various aspects of the power system's resiliency must be improved. Two such approaches are breaking the system into smaller microgrid sections, and gaining improved insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected a majority of the time, and implementing and operating a microgrid is much different than when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detects abnormal conditions by leveraging existing measurements.
These transmission-level approaches are extended to use advanced metering infrastructure and other distribution-level measurements to create a three-phase, unbalanced distribution state estimation approach. With distribution-level state estimation, the grid can be operated more efficiently, and outages or equipment failures can be caught faster, improving the overall resilience and reliability of the grid.
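The state estimators discussed in this record (and in the WLS-based work above) share one computational core: a weighted least squares solve. A minimal linear WLS step, with a toy two-state measurement model invented for illustration — real distribution state estimation iterates this with a nonlinear, three-phase measurement model:

```python
import numpy as np

def wls_state_estimate(H, z, sigmas):
    """One linear WLS step: minimize (z - H x)' W (z - H x) with
    W = diag(1 / sigma_i^2), i.e. trust precise meters more."""
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)
    G = H.T @ W @ H                       # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy example: two states observed through three redundant measurements.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.02, 0.98])           # e.g. per-unit voltage magnitudes
sigmas = [0.01, 0.01, 0.02]               # per-meter noise standard deviations
rng = np.random.default_rng(5)
z = H @ x_true + rng.normal(0.0, sigmas)  # noisy measurements
x_hat = wls_state_estimate(H, z, sigmas)
```

Redundancy (more measurements than states) is what lets the estimator also detect outages or failed meters via the measurement residuals.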
Optimally Distributed Kalman Filtering with Data-Driven Communication †
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
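The simplest of the relaxations the article describes — a node transmitting only every second time step while the center propagates its prediction in between — can be sketched with a scalar Kalman filter. This is an illustrative reduced-communication scheme only; the article's data-driven triggering (transmit when the local estimate deviates enough) and its consistency guarantees are not reproduced here.

```python
import numpy as np

def kalman_with_reduced_comm(zs, send_every=2, q=0.01, r=0.25):
    """Scalar random-walk Kalman filter at the fusion center when the node
    forwards a measurement only every `send_every` steps; between
    transmissions the center simply propagates its prediction."""
    x, p = 0.0, 1.0                # state estimate and error variance
    estimates = []
    for k, z in enumerate(zs):
        p += q                     # predict (random-walk process noise)
        if k % send_every == 0:    # a measurement arrived at the center
            g = p / (p + r)        # Kalman gain
            x += g * (z - x)       # update
            p *= (1.0 - g)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))   # random-walk true state
zs = truth + rng.normal(0.0, 0.5, 200)         # noisy measurements
est = kalman_with_reduced_comm(zs)
rmse = np.sqrt(np.mean((est - truth) ** 2))
```

Even at half the communication rate the filtered error stays well below the raw measurement noise, which is the motivation for trading transmission rate against estimation quality.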
Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe
2017-04-21
Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. In some practical applications, however, engineers can only obtain the range of the noises rather than their precise statistical distributions. Hence, within the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation, and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates computed from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results.
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz is given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Of all the SSFP variants considered here, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging.
Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency
Abu Bakr, Muhammad; Lee, Sukhan
2017-01-01
The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with advancements in sensor and communication technologies. These days, distributed state estimation and data fusion have been widely explored in diverse fields of engineering and control due to their superior performance over the centralized approach in terms of flexibility, robustness to failure, and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date, with a specific focus on handling unknown correlation and data inconsistency. We aim at providing readers with a unifying view of these individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted. PMID:29077035
Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procacci, Piero
2015-04-21
In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example of the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks-derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
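The Jarzynski estimator underlying such calculations can be checked against a case with a known answer: for a single Gaussian work distribution N(mu, sigma^2), the exponential average gives exactly Delta F = mu - sigma^2 / (2 kT). The sketch below is that single-Gaussian sanity check, not the paper's two-component Crooks-weighted mixture; units and numbers are illustrative.

```python
import numpy as np

kT = 1.0  # work measured in units of kT

def jarzynski_free_energy(work):
    """Exponential-average (Jarzynski) estimate of Delta F from
    nonequilibrium work samples: Delta F = -kT ln < exp(-W / kT) >."""
    return -kT * np.log(np.mean(np.exp(-work / kT)))

# Gaussian work distribution with modest fluctuations (~1 kT), where the
# estimator converges quickly and the exact answer is known analytically.
rng = np.random.default_rng(7)
mu, sigma = 5.0, 1.0
work = rng.normal(mu, sigma, 200_000)

dF_est = jarzynski_free_energy(work)
dF_exact = mu - sigma**2 / (2.0 * kT)
```

At fast pulling speeds the work fluctuations grow, the exponential average becomes dominated by rare low-work samples, and plain Jarzynski averaging converges poorly — which is the regime the mixture-of-Gaussians treatment is meant to handle.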
Charge state distribution of 86Kr in hydrogen and helium gas charge strippers at 2.7 MeV/nucleon
NASA Astrophysics Data System (ADS)
Kuboki, H.; Okuno, H.; Hasebe, H.; Fukunishi, N.; Ikezawa, E.; Imao, H.; Kamigaito, O.; Kase, M.
2014-12-01
The charge state distributions of krypton (86Kr) with an energy of 2.7 MeV/nucleon were measured using hydrogen (H2) and helium (He) gas charge strippers. A differential pumping system was constructed to confine the H2 and He gases to a thickness sufficient for the charge state distributions to attain equilibrium. The mean charge states of 86Kr in H2 and He gases attained equilibrium at 25.1 and 23.2, respectively, whereas the mean charge state in N2 gas at equilibrium was estimated to be less than 20. The charge distributions are successfully reproduced by the cross sections of the ionization and electron capture processes optimized by a fitting procedure.
[Estimated mammogram coverage in Goiás State, Brazil].
Corrêa, Rosangela da Silveira; Freitas-Júnior, Ruffo; Peixoto, João Emílio; Rodrigues, Danielle Cristina Netto; Lemos, Maria Eugênia da Fonseca; Marins, Lucy Aparecida Parreira; Silveira, Erika Aparecida da
2011-09-01
This cross-sectional study aimed to estimate mammogram coverage in the State of Goiás, Brazil, describing the supply, demand, and variations in different age groups, evaluating 98 mammography services as observational units. We estimated the mammogram rates by age group and type of health service, as well as the number of tests required to cover 70% and 100% of the target population. We assessed the association between mammograms, geographical distribution of mammography machines, type of service, and age group. Full coverage estimates, considering 100% of women in the 40-69 and 50-69-year age brackets, were 61% and 66%, of which the Brazilian Unified National Health System provided 13% and 14%, respectively. To achieve 70% coverage, 43,424 additional mammograms would be needed. All the associations showed statistically significant differences (p < 0.001). We conclude that mammogram coverage is unevenly distributed in the State of Goiás and that fewer tests are performed than required.
Russell, Robin E.; Tinsley, Karl; Erickson, Richard A.; Thogmartin, Wayne E.; Jennifer A. Szymanski,
2014-01-01
Depicting the spatial distribution of wildlife species is an important first step in developing management and conservation programs for particular species. Accurate representation of a species distribution is important for predicting the effects of climate change, land-use change, management activities, disease, and other landscape-level processes on wildlife populations. We developed models to estimate the spatial distribution of little brown bat (Myotis lucifugus) wintering populations in the United States east of the 100th meridian, based on known hibernacula locations. From this data, we developed several scenarios of wintering population counts per county that incorporated uncertainty in the spatial distribution of the hibernacula as well as uncertainty in the size of the current little brown bat population. We assessed the variability in our results resulting from effects of uncertainty. Despite considerable uncertainty in the known locations of overwintering little brown bats in the eastern United States, we believe that models accurately depicting the effects of the uncertainty are useful for making management decisions as these models are a coherent organization of the best available information.
Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.
Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji
2010-02-01
Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution, which corresponds to the sensitivity of the distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate γ for the value functions close to 1, these algorithms do not permit γ to be set exactly to 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution, through a backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting γ = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms on simple benchmark tasks and show that they can improve the performance of existing PG methods.
ERIC Educational Resources Information Center
Bennett, Daniel L.
2011-01-01
This paper estimates historical measures of equality in the distribution of education in the United States by age group and sex. Using educational attainment data for the population, the EduGini measure indicates that educational inequality in the U.S. declined significantly between 1950 and 2009. Reductions in educational inequality were more…
Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tegen, S.
This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs are from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluating, and developing communities.
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
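The variance-stabilization property that this style of estimator relies on can be illustrated with a minimal sketch. This is not the authors' method; the Anscombe transform used here is one standard stabilizer for Poisson data, and the signal levels are arbitrary:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson data: after the
    transform, the noise variance is ~1 regardless of the mean."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
for lam in (10.0, 50.0, 200.0):           # three very different signal levels
    x = rng.poisson(lam, size=200_000)
    v = anscombe(x).var()                 # variance after stabilization
    print(f"mean {lam:6.1f}: raw var {x.var():8.2f} -> stabilized var {v:.3f}")
```

The raw variance tracks the mean (the Poisson signature), while the stabilized variance stays near 1, which is what makes noise parameters identifiable.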
NASA Astrophysics Data System (ADS)
Morales, Roberto; Barriga-Carrasco, Manuel D.; Casas, David
2017-04-01
The instantaneous charge state of uranium ions traveling through a fully ionized hydrogen plasma has been theoretically studied and compared with one of the first energy loss experiments in plasmas, carried out at GSI-Darmstadt by Hoffmann et al. in the 1990s. For this purpose, two different methods to estimate the instantaneous charge state of the projectile have been employed: (1) rate equations using ionization and recombination cross sections and (2) equilibrium charge state formulas for plasmas. Also, the equilibrium charge state has been obtained using these ionization and recombination cross sections and compared with the former equilibrium formulas. The equilibrium charge state of projectiles in plasmas is not always reached, and it depends mainly on the projectile velocity and the plasma density. Therefore, a non-equilibrium or an instantaneous description of the projectile charge is necessary. The charge state of projectile ions cannot be measured, except after exiting the target, and experimental data remain very scarce. Thus, the validity of our charge state model is checked by comparing the theoretical predictions with an energy loss experiment, as the energy loss has a generally quadratic dependence on the projectile charge state. The dielectric formalism has been used to calculate the plasma stopping power including the Brandt-Kitagawa (BK) model to describe the charge distribution of the projectile. In this charge distribution, the instantaneous number of bound electrons instead of the equilibrium number has been taken into account. Comparing our theoretical predictions with experiments, we show the necessity of including the instantaneous charge state and the BK charge distribution for a correct energy loss estimation. The results also show that the initial charge state strongly influences the estimated energy loss of the uranium ions.
Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors
NASA Astrophysics Data System (ADS)
da Silva, Andre F. C.; Colonius, Tim
2017-11-01
The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, each randomized by synthetic noise, is sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
Towards a systematic approach to comparing distributions used in flood frequency analysis
NASA Astrophysics Data System (ADS)
Bobée, B.; Cavadias, G.; Ashkar, F.; Bernier, J.; Rasmussen, P.
1993-02-01
The estimation of flood quantiles from available streamflow records has been a topic of extensive research in this century. However, the large number of distributions and estimation methods proposed in the scientific literature has led to a state of confusion, and a gap prevails between theory and practice. This concerns both at-site and regional flood frequency estimation. To facilitate the work of "hydrologists, designers of hydraulic structures, irrigation engineers and planners of water resources", the World Meteorological Organization recently published a report which surveys and compares current methodologies, and recommends a number of statistical distributions and estimation procedures. This report is an important step towards the clarification of this difficult topic, but we think that it does not effectively satisfy the needs of practitioners as intended, because it contains some statements which are not statistically justified and which require further discussion. In the present paper we review commonly used procedures for flood frequency estimation, point out some of the reasons for the present state of confusion concerning the advantages and disadvantages of the various methods, and propose the broad lines of a possible comparison strategy. We recommend that the results of such comparisons be discussed in an international forum of experts, with the purpose of attaining a more coherent and broadly accepted strategy for estimating floods.
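As a small illustration of the at-site estimation task the review discusses, here is one of the many distribution/estimator pairs it covers: a Gumbel (EV1) distribution fitted by the method of moments, applied to a synthetic annual-maximum record (the distribution choice and the record are assumptions for this sketch):

```python
import numpy as np

def gumbel_fit_moments(x):
    """Method-of-moments fit of the Gumbel (EV1) distribution,
    one classical choice in flood frequency analysis."""
    scale = np.sqrt(6.0) * np.std(x, ddof=1) / np.pi
    loc = np.mean(x) - 0.5772156649 * scale   # Euler-Mascheroni constant
    return loc, scale

def gumbel_quantile(loc, scale, T):
    """Flood quantile for return period T years: F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    return loc - scale * np.log(-np.log(F))

rng = np.random.default_rng(1)
annual_peaks = rng.gumbel(loc=1000.0, scale=300.0, size=80)  # synthetic record
loc, scale = gumbel_fit_moments(annual_peaks)
q100 = gumbel_quantile(loc, scale, 100.0)
print(f"estimated 100-year flood: {q100:.0f}")
```

Repeating this with a different distribution (e.g., log-Pearson III) or estimator (e.g., L-moments) gives different quantiles from the same record, which is exactly the source of confusion the paper addresses.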
A simulation of probabilistic wildfire risk components for the continental United States
Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short
2011-01-01
This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...
Pandiselvi, S; Raja, R; Cao, Jinde; Rajchakit, G; Ahmad, Bashir
2018-01-01
This work addresses the problem of approximating state variables for discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays. Here we design a linear estimator in such a way that the absorption of mRNA and protein can be approximated via known measurement outputs. By utilizing a Lyapunov-Krasovskii functional and some stochastic analysis techniques, we obtain the stability formula of the estimation error systems in the form of linear matrix inequalities (LMIs) under which the estimation error dynamics is robustly exponentially stable. Further, the obtained LMI conditions can be effortlessly solved by available software packages. Moreover, the specific expression of the desired estimator is also given in the main section. Finally, two illustrative mathematical examples are provided to show the advantage of the proposed conceptual results.
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider a degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
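The dependence of the estimator variance on the time split can be checked numerically. This Monte Carlo sketch assumes a shot-noise (Poisson) measurement model and illustrative intensities; it is not the paper's closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(2)
I1, I2 = 1.5, 0.5          # true channel intensities (arbitrary units); DOLP = 0.5
T = 1000.0                 # total integration time, split as t1 + t2 = T

def dolp_var(frac, n=100_000):
    """Monte Carlo variance of the DOLP estimator when a fraction `frac`
    of the total time goes to the first (stronger) channel.
    Photon counts are modeled as Poisson (shot-noise limit)."""
    t1, t2 = frac * T, (1.0 - frac) * T
    n1 = rng.poisson(I1 * t1, n) / t1   # intensity estimates
    n2 = rng.poisson(I2 * t2, n) / t2
    p = (n1 - n2) / (n1 + n2)           # DOLP estimator
    return p.var()

v_equal = dolp_var(0.5)     # naive 50/50 split
v_skew  = dolp_var(0.35)    # more time on the weaker channel
print(v_equal, v_skew)
```

For these intensities a Delta-method calculation puts the optimum near frac ≈ 0.37, so the skewed split beats the equal split, consistent with the paper's claim that the split matters.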
Relaxation of ferroelectric states in 2D distributions of quantum dots: EELS simulation
NASA Astrophysics Data System (ADS)
Cortés, C. M.; Meza-Montes, L.; Moctezuma, R. E.; Carrillo, J. L.
2016-06-01
The relaxation time of collective electronic states in a 2D distribution of quantum dots is investigated theoretically by simulating EELS experiments. From the numerical calculation of the probability of energy loss of an electron beam, traveling parallel to the distribution, it is possible to estimate the damping time of ferroelectric-like states. We generate this collective response of the distribution by introducing a mean field interaction among the quantum dots, and then the model is extended to incorporate effects of long-range correlations through a Bragg-Williams approximation. The behavior of the dielectric function, the energy loss function, and the relaxation time of ferroelectric-like states is then investigated as a function of the temperature of the distribution and the damping constant of the electronic states in the single quantum dots. The robustness of the trends and tendencies of our results indicates that this scheme of analysis can guide experimentalists to develop tailored quantum-dot distributions for specific applications.
Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than with the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, which are geographically distributed throughout Illinois, and 15 county highway bridges.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to the society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements on the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to deal with the challenge of having robust flood forecasting for high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. 
Results show that our method reliably produces adequate forecasts and outperforms those obtained by assimilating the observations with a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
Human variability in mercury toxicokinetics and steady state biomarker ratios.
Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M
2000-10-01
Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
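A probabilistic assessment of this kind can be sketched in a few lines. The lognormal ratio distribution and the measured hair level below are purely illustrative assumptions (centered on the commonly used 250 ppm per mg/L ratio), not fitted values from the paper:

```python
import numpy as np

# Illustrative only: a lognormal for the hair-to-blood mercury ratio
# (ppm per mg/L), loosely spanning the 140-370 range of reported means.
rng = np.random.default_rng(3)
ratio = rng.lognormal(mean=np.log(250.0), sigma=0.3, size=100_000)

hair_hg = 2.0                      # hypothetical measured hair level, ppm
blood_hg = hair_hg / ratio         # implied blood concentration, mg/L

# A probabilistic assessment reports the spread, not a single point value.
lo, mid, hi = np.percentile(blood_hg, [5, 50, 95])
print(f"blood Hg (mg/L): 5th {lo:.4f}, median {mid:.4f}, 95th {hi:.4f}")
```

A point-estimate assessment would report only the median-like value; the percentile spread is what lets a regulator weigh the range and likelihood of exposures.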
Epidemiology of Mental Retardation.
ERIC Educational Resources Information Center
Heber, Rick
Prevalence data on mental retardation is presented including international estimates on general prevalence, age directions, geographical variations within the United States, racial and ethnic variations, economic class distributions, family variations, and population distribution in institutions. Statistics are also provided in areas of specific…
Martin A. Spetich; Zhaofei Fan; Zhen Sui; Michael Crosby; Hong S. He; Stephen R. Shifley; Theodor D. Leininger; W. Keith Moser
2017-01-01
Stresses to trees under a changing climate can lead to changes in forest tree survival, mortality and distribution. For instance, a study examining the effects of human-induced climate change on forest biodiversity by Hansen and others (2001) predicted a 32% reduction in loblollyâshortleaf pine habitat across the eastern United States. However, they also...
Sexual differentiation in the distribution potential of northern jaguars (Panthera onca)
Erin E. Boydston; Carlos A. Lopez Gonzalez
2005-01-01
We estimated the potential geographic distribution of jaguars in the southwestern United States and northwestern Mexico by modeling the jaguar ecological niche from occurrence records. We modeled separately the distributions of males and females, assuming records of females probably represented established home ranges while male records likely included dispersal...
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of the nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems which are rapidly changing. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor by using the received measurements and state estimates from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to estimate the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
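The bias/oscillation split via polynomial fitting can be illustrated as follows. This is our reading of the idea, not the WODKF implementation; the window length, polynomial degree, and unweighted fit are assumptions:

```python
import numpy as np

# Polynomial-fit a window of state estimates: the smooth fit tracks the
# bias (trend), and the residual spread measures the oscillation.
rng = np.random.default_rng(7)
t = np.arange(20, dtype=float)                 # a time window of estimates
states = 0.05 * t**2 + rng.normal(0, 0.3, 20)  # drifting state + noise

coef = np.polyfit(t, states, deg=2)            # covariance weighting omitted
trend = np.polyval(coef, t)
bias_proxy = trend[-1] - trend[0]              # drift of the state over the window
oscillation = np.std(states - trend)           # spread around the smooth fit
print(bias_proxy, oscillation)
```

In the algorithm these two quantities enter a weighted cost whose minimizer (found by exhaustive search over candidate covariances) is taken as the model noise covariance estimate.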
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecast (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance in the differences of estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
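The regionally constant shape parameter assumption translates directly into a constrained fit. A sketch using SciPy's `genextreme` on synthetic 31-year records; the shape value, pixel layout, and record parameters are assumptions of this example:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
shape = -0.1   # scipy's c parameter; held fixed across the region here

# Synthetic 31-year annual-maximum records for a few nearby "pixels"
pixels = [genextreme.rvs(shape, loc=35 + i, scale=2.0, size=31, random_state=rng)
          for i in range(4)]

# Fit location and scale per pixel while holding the shape parameter
# at a single regional value (f0 fixes the first shape argument).
fits = [genextreme.fit(x, f0=shape) for x in pixels]
for c, loc, scale in fits:
    print(f"c={c:.2f}  loc={loc:.1f}  scale={scale:.2f}")
```

Fixing the shape reduces each pixel's problem from a three-parameter to a two-parameter fit, which is why short records yield more stable location and scale estimates.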
An overview of distributed microgrid state estimation and control for smart grids.
Rana, Md Masud; Li, Li
2015-02-12
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to the long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid forms a novel contribution in the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides an accurate SE and control compared with the existing method.
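The coupling of a KF state estimator with a discrete-time LQR can be sketched on a toy linear model. The system matrices are illustrative, not a microgrid model, and for brevity the same Q and R serve as noise covariances and cost weights:

```python
import numpy as np

# Toy two-state linear system: x' = A x + B u, measurement z = C x
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2) * 0.01, np.array([[0.1]])       # noise / cost weights

# LQR gain from iterating the discrete algebraic Riccati equation
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# One KF predict/update step, driven by the LQR control u = -K x_hat
x_hat, Pk = np.zeros((2, 1)), np.eye(2)
z = np.array([[0.5]])                            # a measurement
u = -K @ x_hat
x_hat = A @ x_hat + B @ u                        # predict
Pk = A @ Pk @ A.T + Q
S = C @ Pk @ C.T + R
G = Pk @ C.T @ np.linalg.inv(S)                  # Kalman gain
x_hat = x_hat + G @ (z - C @ x_hat)              # update
Pk = (np.eye(2) - G @ C) @ Pk
print(x_hat.ravel())
```

The separation principle is what makes this pairing work: the estimator and the regulator can be designed independently and still yield a stable closed loop for a linear system.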
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix.
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
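The core of the technique can be sketched in a few lines: scale the theoretical WLS covariance by the average weighted residual variance, so that whatever errors actually appear in the residuals are reflected in the state uncertainty. This is a minimal sketch of that scaling, not the paper's full derivation; the matrix `H`, weight matrix `W`, and residual vector below form a hypothetical toy problem.

```python
import numpy as np

def empirical_state_covariance(H, W, residuals):
    """Scale the theoretical WLS covariance by the average weighted
    residual variance, so that all error sources visible in the actual
    residuals (not just the assumed observation noise) are captured."""
    # Traditional covariance: maps assumed observation errors into state space
    P_theory = np.linalg.inv(H.T @ W @ H)
    # Average (per-observation) form of the weighted residual variance index
    s2 = (residuals @ W @ residuals) / len(residuals)
    return s2 * P_theory

# Toy example: 3 scalar observations of a 2-dimensional state
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.eye(3)
r = np.array([1.0, -1.0, 2.0])
P_emp = empirical_state_covariance(H, W, r)
```

When the residuals are consistent with the assumed noise, the scale factor is near one and the empirical matrix agrees with the theoretical one; mismodeled errors inflate it.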
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuators in sleep mode.
Mao, Chen-Chen; Zhou, Xing-Yu; Zhu, Jian-Rong; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin
2018-05-14
Recently, Zhang et al. [Phys. Rev. A 95, 012333 (2017)] developed a new approach to estimate the failure probability of the decoy-state BB84 QKD system when taking the finite-size key effect into account, which offers security comparable to the Chernoff bound while yielding an improved key rate and transmission distance. Based on Zhang et al.'s work, we now extend this approach to the case of measurement-device-independent quantum key distribution (MDI-QKD) and, for the first time, implement it on the four-intensity decoy-state MDI-QKD system. Moreover, by utilizing joint constraints and collective error-estimation techniques, we can markedly increase the performance of practical MDI-QKD systems compared with either three- or four-intensity decoy-state MDI-QKD using Chernoff bound analysis, and achieve a much higher level of security compared with approaches applying Gaussian approximation analysis.
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
Practical decoy state for quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Xiongfeng; Qi Bing; Zhao Yi
2005-07-15
Decoy states have recently been proposed as a useful method for substantially improving the performance of quantum key distribution (QKD). Here, we present a general theory of the decoy state protocol based on only two decoy states and one signal state. We perform optimization on the choice of intensities of the two decoy states and the signal state. Our result shows that a decoy state protocol with only two types of decoy states - the vacuum and a weak decoy state - asymptotically approaches the theoretical limit of the most general type of decoy state protocol (with an infinite number of decoy states). We also present a one-decoy-state protocol. Moreover, we provide estimations on the effects of statistical fluctuations and suggest that, even for long-distance (larger than 100 km) QKD, our two-decoy-state protocol can be implemented with only a few hours of experimental data. In conclusion, decoy state quantum key distribution is highly practical.
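The vacuum + weak-decoy analysis rests on a lower bound for the single-photon yield Y1 in terms of the measured gains of the signal and decoy states. The sketch below follows the standard form of that bound from this line of work; the toy channel model (overall transmittance `eta`, dark-count yield `Y0`) and all numbers are hypothetical, chosen only to exercise the formula.

```python
import math

def y1_lower_bound(mu, nu, Q_mu, Q_nu, Y0):
    """Vacuum + weak-decoy lower bound on the single-photon yield Y1.
    mu, nu: signal and weak-decoy intensities (mu > nu);
    Q_mu, Q_nu: measured gains; Y0: vacuum (dark-count) yield."""
    return (mu / (mu * nu - nu ** 2)) * (
        Q_nu * math.exp(nu)
        - Q_mu * math.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
    )

# Toy channel model (assumed, for illustration only)
eta, Y0 = 0.1, 1e-5          # transmittance and dark-count yield
mu, nu = 0.5, 0.1            # signal and weak-decoy intensities
Q = lambda m: 1.0 - (1.0 - Y0) * math.exp(-eta * m)   # gain at intensity m
y1_lb = y1_lower_bound(mu, nu, Q(mu), Q(nu), Y0)
y1_true = 1.0 - (1.0 - Y0) * (1.0 - eta)  # true single-photon yield in this model
```

In this toy model the bound sits just below the true yield, which is the behavior that makes the two-decoy protocol nearly as good as the infinite-decoy limit.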
Uncertainty in gridded CO2 emissions estimates
Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...
2016-05-19
We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic state information from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic state. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326
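The completion idea can be illustrated in its simplest two-way (matrix) form: fit a low-rank model to the observed entries and fill the missing ones from that fit. This is a simplified sketch, not the paper's multi-way tensor algorithm; the synthetic rank-1 "links × time" matrix below stands in for real floating-car data.

```python
import numpy as np

def lowrank_impute(M, mask, rank=1, iters=500):
    """Iterative truncated-SVD imputation: a two-way (matrix) analogue of
    tensor completion. Observed entries (mask=True) are kept fixed;
    missing entries are repeatedly refilled from a rank-r fit."""
    X = np.where(mask, M, M[mask].mean())      # initialize missing with mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, X_low)           # re-impose observed data
    return X

# Synthetic rank-1 "traffic state" matrix (20 links x 30 time slots), ~30% missing
rng = np.random.default_rng(0)
M = np.outer(rng.uniform(1, 2, 20), rng.uniform(1, 2, 30))
mask = rng.random(M.shape) > 0.3
X_hat = lowrank_impute(M, mask)
err = np.abs(X_hat - M)[~mask].max()
```

Because the underlying matrix is exactly low-rank and the missing pattern is random, the missing entries are recovered almost exactly; the tensor version exploits the same redundancy across more dimensions.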
Cost of schizophrenia: direct costs and use of resources in the State of São Paulo.
Leitão, Raquel Jales; Ferraz, Marcos Bosi; Chaves, Ana Cristina; Mari, Jair J
2006-04-01
To estimate the direct costs of schizophrenia for the public sector. A study was carried out in the state of São Paulo, Brazil, during 1998. Data from the medical literature and governmental research bodies were gathered for estimating the total number of schizophrenia patients covered by the Brazilian Unified Health System. A decision tree was built based on an estimated distribution of patients under different types of psychiatric care. Medical charts from public hospitals and outpatient services were used to estimate the resources used over a one-year period. Direct costs were calculated by attributing monetary values for each resource used. Of all patients, 81.5% were covered by the public sector and distributed as follows: 6.0% in psychiatric hospital admissions, 23.0% in outpatient care, and 71.0% without regular treatment. The total direct cost of schizophrenia was US $191,781,327 (2.2% of the total health care expenditure in the state). Of this total, 11.0% was spent on outpatient care and 79.2% went to inpatient care. Most schizophrenia patients in the state of São Paulo receive no regular treatment. The study findings point to the importance of investing in research aimed at improving resource allocation for the treatment of mental disorders in Brazil.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems by sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements and parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, and a tumor which is loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
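The joint state-parameter idea behind such filters can be sketched with a bootstrap (SIR) particle filter on a scalar toy model, not the paper's hyperthermia model: each particle carries its own copy of the unknown parameter, which is kept diverse with a small artificial jitter. All model values below are hypothetical.

```python
import numpy as np

def particle_filter_joint(ys, n_particles=5000, seed=1):
    """Bootstrap particle filter for x[t] = a*x[t-1] + w, y[t] = x[t] + v,
    estimating the state x and the fixed parameter a jointly by
    augmenting each particle with its own parameter value."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    a = rng.uniform(0.0, 1.0, n_particles)          # parameter particles
    for y in ys:
        a = a + rng.normal(0.0, 0.01, n_particles)  # small artificial jitter
        x = a * x + rng.normal(0.0, 0.1, n_particles)
        w = np.exp(-0.5 * ((y - x) / 0.1) ** 2)     # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x, a = x[idx], a[idx]
    return x.mean(), a.mean()

# Synthetic data generated with a_true = 0.8
rng = np.random.default_rng(2)
a_true, x_t = 0.8, 1.0
ys = []
for _ in range(100):
    x_t = a_true * x_t + rng.normal(0.0, 0.1)
    ys.append(x_t + rng.normal(0.0, 0.1))
x_est, a_est = particle_filter_joint(ys)
```

After enough observations the parameter particles concentrate near the value that generated the data.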
An algorithm to estimate PBL heights from wind profiler data
NASA Astrophysics Data System (ADS)
Molod, A.; Salmun, H.
2016-12-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States from the period 1992-2012. The long period of record allows an analysis of climatological mean PBL heights as well as some estimates of year-to-year variability. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky MERRA estimates show that the wind profiler (WP) and the Richardson-number based PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum WP PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, but the WP estimates show a smaller clear-cloudy condition difference than either of the other two PBL height estimates. The algorithm presented here is shown to provide a reliable summer, fall and spring climatology of daytime hourly PBL heights throughout the central United States. The reliability of the algorithm has prompted its use to obtain hourly PBL heights from other archived wind profiler data located throughout the world.
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, J.
Based on a compilation of three estimation approaches, the total nationwide population of wild pigs in the United States numbers approximately 6.3 million animals, with that total estimate ranging from 4.4 up to 11.3 million animals. The majority of these numbers (99 percent), encompassed by ten states (i.e., Alabama, Arkansas, California, Florida, Georgia, Louisiana, Mississippi, Oklahoma, South Carolina and Texas), were based on defined estimation methodologies (e.g., density estimates correlated to the total potential suitable wild pig habitat statewide, statewide harvest percentages, statewide agency surveys regarding wild pig distribution and numbers). In contrast to the pre-1990 estimates, none of these more recent efforts, collectively encompassing 99 percent of the total, were based solely on anecdotal information or speculation. To that end, one can defensibly state that the wild pigs found in the United States number in the millions of animals, with the nationwide population estimated to arguably vary from about four million up to about eleven million individuals.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human life in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method which addresses parameter estimation through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
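A minimal random-walk Metropolis-Hastings sampler for the three GEV parameters can be sketched as follows. This is a generic illustration with flat priors and synthetic data, not the study's actual configuration; the start values, step sizes and iteration counts are all assumptions.

```python
import numpy as np

def gev_loglik(x, mu, sigma, xi):
    """GEV log-likelihood (xi != 0 branch); -inf outside the support."""
    if sigma <= 0 or abs(xi) < 1e-3:
        return -np.inf
    t = 1.0 + xi * (x - mu) / sigma
    if np.any(t <= 0):
        return -np.inf
    return np.sum(-np.log(sigma) - (1.0 + 1.0 / xi) * np.log(t) - t ** (-1.0 / xi))

def metropolis_hastings(x, n_iter=5000, seed=3):
    """Random-walk MH over (mu, sigma, xi) with flat priors (assumed)."""
    rng = np.random.default_rng(seed)
    theta = np.array([np.mean(x), np.std(x), 0.1])   # crude starting point
    ll = gev_loglik(x, *theta)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, [0.05, 0.05, 0.02])
        ll_prop = gev_loglik(x, *prop)
        if np.log(rng.random()) < ll_prop - ll:      # accept/reject step
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain

# Synthetic annual-maximum data from GEV(mu=10, sigma=2, xi=0.1) via inverse CDF
rng = np.random.default_rng(4)
u = rng.random(500)
data = 10.0 + 2.0 * ((-np.log(u)) ** -0.1 - 1.0) / 0.1
chain = metropolis_hastings(data)
mu_hat = chain[2000:, 0].mean()   # posterior mean of location, after burn-in
```

The posterior mean of the location parameter recovers the value used to generate the data; return levels for flood prediction would then be computed from the posterior samples.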
Estimating tree cavity distributions from historical FIA data
Mark D. Nelson; Charlotte Roy
2012-01-01
Tree cavities provide important habitat features for a variety of wildlife species. We describe an approach for using historical FIA data to estimate the number of trees containing cavities during the 1990s in seven states of the Upper Midwest. We estimated a total of 280 million cavity-containing trees. Iowa and Missouri had the highest percentages of cavity-...
Some analytical models to estimate maternal age at birth using age-specific fertility rates.
Pandey, A; Suchindran, C M
1995-01-01
"A class of analytical models to study the distribution of maternal age at different births from the data on age-specific fertility rates has been presented. Deriving the distributions and means of maternal age at birth of any specific order, final parity and at next-to-last birth, we have extended the approach to estimate parity progression ratios and the ultimate parity distribution of women in the population.... We illustrate computations of various components of the model expressions with the current fertility experiences of the United States for 1970." excerpt
Public School Finance Programs, 1971-72. (States, District of Columbia, and Outlying Areas).
ERIC Educational Resources Information Center
Johns, Thomas L., Comp.
This publication describes State funds transmitted to local agencies for the support of elementary and secondary education. Each distribution identified as a separate fund by the State is described in terms of (1) title, (2) legal citation, (3) appropriation for the school year or estimate, (4) percentage of total State funds transmitted, (5)…
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distributions of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous work [1] demonstrated that Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon, E., Bertino, L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman, A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
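The spherical change of variables can be sketched directly: squared spherical coordinates on the unit sphere map n-1 unconstrained angles to n non-negative weights that sum to one exactly, so the filter can update the angles freely. This is a generic illustration of the reparameterization idea, not the paper's exact formulation.

```python
import numpy as np

def angles_to_simplex(theta):
    """Map n-1 angles to n non-negative weights summing to one, via
    squared spherical coordinates on the unit sphere:
    p_i = cos^2(theta_i) * prod_{j<i} sin^2(theta_j)."""
    theta = np.asarray(theta, dtype=float)
    p = np.empty(theta.size + 1)
    sin_prod = 1.0
    for i, t in enumerate(theta):
        p[i] = sin_prod * np.cos(t) ** 2
        sin_prod *= np.sin(t) ** 2
    p[-1] = sin_prod          # remaining probability mass
    return p

# e.g. grazing preferences over 4 food sources from 3 unconstrained angles
weights = angles_to_simplex(np.array([0.3, 1.2, 0.7]))
```

Because the sum-to-one constraint holds by construction, an ensemble filter can update the angles without ever producing an invalid diet composition.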
DOT National Transportation Integrated Search
2001-07-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
DOT National Transportation Integrated Search
2000-08-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-12-01
Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. A moving transmitter-receiver concept to estimate spatially distributed hydrological parameters, termed catchment tomography, is presented. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high resolution, radar based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events. Every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show to which degree of spatial heterogeneity and to which degree of uncertainty of subsurface flow parameters the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
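The joint state-parameter update at the heart of such schemes can be sketched with a stochastic (perturbed-observation) Ensemble Kalman Filter analysis step on an augmented vector of states and parameters. This is a generic toy illustration, not the ParFlow setup; the scalar "model" below (state = parameter × forcing) is an assumption for demonstration.

```python
import numpy as np

def enkf_joint_update(ens, obs, H, R, rng):
    """Perturbed-observation EnKF analysis on an augmented ensemble.
    ens: (n_aug, N) array of [states; parameters] columns; only states
    are observed via H, but the cross-covariance updates the parameters."""
    n_aug, N = ens.shape
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (N - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    Y = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, N).T
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(5)
N = 200
a = rng.normal(2.0, 1.0, N)          # uncertain parameter ensemble
x = a * 3.0                          # toy forward model: state = a * forcing
ens0 = np.vstack([x, a])             # augmented ensemble, shape (2, N)
H = np.array([[1.0, 0.0]])           # only the state is observed
R = np.array([[0.01]])
obs = np.array([4.0 * 3.0])          # observation generated with a_true = 4.0
ens1 = enkf_joint_update(ens0, obs, H, R, rng)
```

Because the state and parameter are correlated across the ensemble, observing only the state still pulls the parameter ensemble toward the value consistent with the observation.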
Rana, Md Masud
2017-01-01
This paper proposes an innovative internet of things (IoT) based communication framework for monitoring a microgrid under the condition of packet dropouts in measurements. First, the microgrid incorporating renewable distributed energy resources is represented by a state-space model. An IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
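The least mean square fourth (LMF) algorithm minimizes the mean fourth power of the estimation error with the stochastic-gradient update w ← w + μ e³ x. The sketch below applies it to recovering a static linear map from toy data, a stand-in for the paper's microgrid state model; the dimensions, step size and data are assumptions.

```python
import numpy as np

def lmf_estimate(X, y, mu=0.01, n_epochs=100):
    """Least mean square fourth (LMF) adaptive estimator: stochastic
    gradient descent on E[e^4], i.e. w += mu * e**3 * x per sample."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_k, y_k in zip(X, y):
            e = y_k - w @ x_k          # instantaneous estimation error
            w = w + mu * e ** 3 * x_k  # fourth-moment gradient step
    return w

# Toy problem: recover a static linear map from sensed readings
rng = np.random.default_rng(6)
w_true = np.array([0.5, -0.3])
X = rng.normal(0.0, 1.0, (200, 2))
y = X @ w_true
w_est = lmf_estimate(X, y)
```

Compared with plain LMS, the cubic error term weights large errors more heavily, which is why LMF-type updates are attractive when measurement noise is light-tailed.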
Estimating release of carbon from 1990 and 1991 forest fires in Alaska
NASA Technical Reports Server (NTRS)
Kaisischke, Eric S.; French, Nancy H. F.; Bourgeau-Chavez, Laura L.; Christensen, N. L., Jr.
1995-01-01
An improved method to estimate the amounts of carbon released during fires in the boreal forest zone of Alaska in 1990 and 1991 is described. This method divides the state into 64 distinct physiographic regions and estimates the areal extent of five different land covers: two forest types, peat land, tundra, and nonvegetated. The areal extent of each cover type was estimated from a review of topographic maps of each region and observations on the distribution of forest types within the state. Using previous observations and theoretical models for the two forest types found in interior Alaska, models of biomass accumulation as a function of stand age were developed. Stand age distributions for each region were determined using a statistical distribution based on fire frequency, which was derived from available long-term historical records. Estimates of the degree of biomass combusted were based on recent field observations as well as research reported in the literature. The location and areal extent of fires in this region for 1990 and 1991 were based on both field observations and analysis of satellite (advanced very high resolution radiometer (AVHRR)) data sets. Estimates of average carbon release for the two study years ranged between 2.54 and 3.00 kg/sq m, which are 2.2 to 2.6 times greater than estimates used in other studies of carbon release through biomass burning in boreal forests. Total average annual carbon release for the two years ranged between 0.012 and 0.018 Pg C/yr, with the lower value resulting from the AVHRR estimates of fire location and area.
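The bookkeeping behind such estimates reduces to summing, over cover types, area × biomass density × fraction combusted × carbon fraction. The numbers below are entirely hypothetical (including the assumed 50% carbon content of biomass) and only illustrate the arithmetic, not the paper's values.

```python
# Hypothetical cover types within one burned region (all values assumed)
covers = [
    {"area_m2": 2.0e9, "biomass_kg_m2": 8.0, "frac_combusted": 0.25},
    {"area_m2": 1.0e9, "biomass_kg_m2": 15.0, "frac_combusted": 0.15},
]
CARBON_FRACTION = 0.5   # assumed carbon share of dry biomass

total_area = sum(c["area_m2"] for c in covers)
total_carbon_kg = sum(
    c["area_m2"] * c["biomass_kg_m2"] * c["frac_combusted"] * CARBON_FRACTION
    for c in covers
)
mean_release_kg_m2 = total_carbon_kg / total_area   # average release per m2
```

Dividing total release by the burned area yields the per-square-meter figures (2.54-3.00 kg/m² in the study) that can be compared across methods.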
Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data
NASA Technical Reports Server (NTRS)
Molod, Andrea M.; Salmun, H.; Dempsey, M
2015-01-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to of advantages such as helping in process control, reducing process costs and product quality improvement. This work proposes a solid-state fermentation distributed parameter model composed by seven differential equations with seventeen parameters to represent the process. Also, parameters estimation with a parameters identifyability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight from the seventeen original model parameters were nonidentifiable and better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to estimation procedure, since it may reduce the number of parameters that can be evaluated. Further, PIA improved the model results, showing to be an important procedure to be taken. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
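As a concrete illustration of the simplest case mentioned above, the telegraph model with exponential dwell times can be simulated directly. This is a generic Gillespie sketch with illustrative rate values, not code from the paper:

```python
import random

def simulate_telegraph(k_on=1.0, k_off=1.0, k_tx=10.0, k_deg=1.0,
                       t_end=20000.0, seed=0):
    """Gillespie simulation of the telegraph model: the gene toggles between
    inactive and active with exponential dwell times; mRNA is transcribed
    only while active and degraded at all times.  Returns the time-averaged
    mRNA copy number."""
    rng = random.Random(seed)
    t, active, m = 0.0, 0, 0
    weighted_m = 0.0
    while t < t_end:
        rates = (k_off if active else k_on,   # gene state switch
                 k_tx if active else 0.0,     # transcription event
                 k_deg * m)                   # degradation of one mRNA
        total = sum(rates)
        dt = rng.expovariate(total)
        weighted_m += m * dt
        t += dt
        u = rng.random() * total
        if u < rates[0]:
            active = 1 - active
        elif u < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
    return weighted_m / t_end

# The steady-state mean is k_tx/k_deg * k_on/(k_on+k_off) = 5.0 here.
mean_m = simulate_telegraph()
```

The generalization in the paper replaces the exponential dwell times with arbitrary distributions, which this sketch does not attempt.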
Population ecology of the mallard: VII. Distribution and derivation of the harvest
Munro, Robert E.; Kimball, Charles F.
1982-01-01
This is the seventh in a series of comprehensive reports on population ecology of the mallard (Anas platyrhynchos) in North America. Banding records for 1961-1975 were used, together with information from previous reports in this series, to estimate annual and average preseason age and sex structure of the mallard population and patterns of harvest distribution and derivation. Age ratios in the preseason population averaged 0.98 immatures per adult and ranged from 0.75 to 1.44. The adult male per female ratio averaged 1.42. The young male per female ratio averaged 1.01. Geographic and annual differences in recovery distributions were associated with age, sex, and years after banding. Such variation might indicate that survival or band recovery rates, or both, change as a function of the number of years after banding, and that estimates of these rates might thus be affected. Distribution of the mallard harvest from 16 major breeding ground reference areas to States, Provinces, and flyways is tabulated and illustrated. Seasonal (weekly) breeding ground derivation of the harvest within States and Provinces from the 16 reference areas is also tabulated. Harvest distributions, derivation, and similarity of derivation between harvest areas are summarily illustrated with maps. Derivation of the harvest appears to be consistent throughout the hunting season in the middle and south central United States, encompassing States in both the Central and Mississippi flyways. However, weekly derivation patterns for most northern States suggest that early hunting dates result in relatively greater harvest of locally derived mallards, in contrast to birds from more northern breeding areas.
Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.
2018-01-01
The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts that included wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We, then, fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. 
The results demonstrate the effectiveness of expert‐opinion data at modeling occupancy for rare species and provide tools for planning marten recovery in the northeastern United States.
Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.
Zhao, Wei; Wang, Han
2016-06-28
Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is only selected at the one with maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over the existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages.
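The abstract does not spell out the individual strategies; the following sketch merely contrasts the maximum-description-degree baseline it criticizes with one plausible alternative (a degree-weighted mean). All names and numbers are illustrative:

```python
def argmax_age(ages, degrees):
    """Baseline rule criticized above: take the single age whose description
    degree is maximal."""
    return ages[max(range(len(degrees)), key=degrees.__getitem__)]

def expected_age(ages, degrees):
    """One plausible alternative strategy: a description-degree-weighted mean,
    which lets near-peak labels contribute to the prediction."""
    return round(sum(a * d for a, d in zip(ages, degrees)))

# A distribution with two near-equal peaks, where the two rules disagree.
ages = [20, 21, 22, 23, 24]
degrees = [0.28, 0.27, 0.29, 0.01, 0.15]
```

With these numbers the baseline predicts 22 while the weighted mean predicts 21, illustrating how near-peak labels can shift the decision.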
Public School Finance Programs, 1975-76.
ERIC Educational Resources Information Center
Tron, Esther O., Comp.
This publication describes state funds transmitted to local agencies for the support of elementary and secondary education. Each distribution identified as a separate fund by the state is described in terms of (1) title, (2) legal citation, (3) appropriation for the school year or estimate, (4) percentage of total state funds transmitted, (5)…
The κ-generalized distribution: A new descriptive model for the size distribution of incomes
NASA Astrophysics Data System (ADS)
Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.
2008-05-01
This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement-such as the Lorenz curve and the Gini coefficient-are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.
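The abstract does not reproduce the model's formulas; as a reader aid, the following recalls the standard κ-generalized forms from the related literature (the symbols α, β, κ are the conventional ones and the paper's own parameterization may differ):

```latex
% kappa-exponential (Kaniadakis):
\exp_\kappa(x) = \left(\sqrt{1+\kappa^2 x^2}+\kappa x\right)^{1/\kappa},
\qquad 0 < \kappa < 1
% complementary CDF and density of income x > 0:
S(x) = \exp_\kappa\!\left(-\beta x^\alpha\right), \qquad
f(x) = \frac{\alpha\beta\, x^{\alpha-1}}{\sqrt{1+\beta^2\kappa^2 x^{2\alpha}}}\,
       \exp_\kappa\!\left(-\beta x^\alpha\right)
```

The density follows from differentiating S(x), since d exp_κ(u)/du = exp_κ(u)/√(1+κ²u²); for κ → 0 the model reduces to the Weibull distribution.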
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated.
It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms, plus a standard exponential-type atmosphere with simple diurnal and random walk components, is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
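The covariance-scaling idea described above can be sketched for a single scalar state; the function name and the scalar setup are illustrative, not the paper's implementation:

```python
def wls_with_empirical_covariance(z, h, w):
    """Weighted least squares for one scalar state x, with measurements
    z[i] = h[i]*x + noise and weights w[i] (inverse measurement variances).
    Returns the estimate, the theoretical covariance (H^T W H)^-1, and an
    empirical covariance scaled by the average weighted residual variance."""
    m = len(z)
    info = sum(wi * hi * hi for wi, hi in zip(w, h))          # H^T W H
    x_hat = sum(wi * hi * zi for wi, hi, zi in zip(w, h, z)) / info
    p_theo = 1.0 / info
    r = [zi - hi * x_hat for zi, hi in zip(z, h)]             # residuals
    j_avg = sum(wi * ri * ri for wi, ri in zip(w, r)) / m     # average weighted
    p_emp = j_avg * p_theo                                    # residual variance
    return x_hat, p_theo, p_emp

x_hat, p_theo, p_emp = wls_with_empirical_covariance(
    z=[2.1, 1.9, 2.2], h=[1.0, 1.0, 1.0], w=[1.0, 1.0, 1.0])
```

The point of the construction is that p_emp inflates (or deflates) the theoretical covariance according to how large the actual residuals are relative to the assumed measurement noise.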
Secure detection in quantum key distribution by real-time calibration of receiver
NASA Astrophysics Data System (ADS)
Marøy, Øystein; Makarov, Vadim; Skaar, Johannes
2017-12-01
The single-photon detection efficiency of the detector unit is crucial for the security of common quantum key distribution protocols like Bennett-Brassard 1984 (BB84). A low value for the efficiency indicates a possible eavesdropping attack that exploits the photon receiver’s imperfections. We present a method for estimating the detection efficiency, and calculate the corresponding secure key generation rate. The estimation is done by testing gated detectors using a randomly activated photon source inside the receiver unit. This estimate gives a secure rate for any detector with non-unity single-photon detection efficiency, whether inherent or caused by blinding. By adding extra optical components to the receiver, we make sure that the key is extracted from photon states for which our estimate is valid. The result is a quantum key distribution scheme that is secure against any attack that exploits detector imperfections.
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
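The abstract's key fact, that the first hitting time of linear Brownian motion with constant drift is inverse Gaussian, can be stated concretely (a standard result; the symbols here are generic, not the paper's notation):

```latex
% Degradation X(t) = \lambda t + \sigma B(t), drift \lambda > 0,
% failure threshold w > 0.  The first hitting time T has density
f_T(t) = \frac{w}{\sigma\sqrt{2\pi t^{3}}}
         \exp\!\left(-\frac{(w-\lambda t)^{2}}{2\sigma^{2} t}\right),
\qquad
\mathbb{E}[T] = \frac{w}{\lambda}, \qquad
\operatorname{Var}[T] = \frac{w\,\sigma^{2}}{\lambda^{3}}
```

Making the drift λ adaptive, as in the state space models discussed above, replaces the fixed λ with a posterior estimate updated by Kalman filtering at each new measurement.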
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
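For readers unfamiliar with the ensemble Kalman filter discussed above, a minimal stochastic-EnKF analysis step for a directly observed scalar state might look like this (a generic textbook sketch, not the dissertation's scheme):

```python
import random

def enkf_analysis(ensemble, y, obs_var, seed=0):
    """Stochastic EnKF analysis step for a scalar state observed directly
    (H = 1): each member is pulled toward a perturbed observation with a
    Kalman gain built from the ensemble sample variance."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [x + gain * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

prior = [1.0, 2.0, 3.0, 4.0]              # forecast (background) ensemble
posterior = enkf_analysis(prior, y=10.0, obs_var=1.0)
post_mean = sum(posterior) / len(posterior)
```

Localization, the subject of the first part of the dissertation, amounts to computing such gains from nearby state variables and observations only, which is why a small ensemble can suffice.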
An Automated Technique for Estimating Daily Precipitation over the State of Virginia
NASA Technical Reports Server (NTRS)
Follansbee, W. A.; Chamberlain, L. W., III
1981-01-01
Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24-hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. To present reliable precipitation estimates for every cell in Virginia in near real time on a daily, ongoing basis, the techniques require on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
A comparison of foetal and infant mortality in the United States and Canada.
Ananth, Cande V; Liu, Shiliang; Joseph, K S; Kramer, Michael S
2009-04-01
Infant mortality rates are higher in the United States than in Canada. We explored this difference by comparing gestational age distributions and gestational age-specific mortality rates in the two countries. Stillbirth and infant mortality rates were compared for singleton births at ≥22 weeks and newborns weighing ≥500 g in the United States and Canada (1996-2000). Since menstrual-based gestational age appears to misclassify gestational duration and overestimate both preterm and postterm birth rates, and because a clinical estimate of gestation is the only available measure of gestational age in Canada, all comparisons were based on the clinical estimate. Data for California were excluded because they lacked a clinical estimate. Gestational age-specific comparisons were based on the foetuses-at-risk approach. The overall stillbirth rate in the United States (37.9 per 10,000 births) was similar to that in Canada (38.2 per 10,000 births), while the overall infant mortality rate was 23% (95% CI 19-26%) higher (50.8 vs 41.4 per 10,000 births, respectively). The gestational age distribution was left-shifted in the United States relative to Canada; consequently, preterm birth rates were 8.0 and 6.0%, respectively. Stillbirth and early neonatal mortality rates in the United States were lower at term gestation only. However, gestational age-specific late neonatal, post-neonatal and infant mortality rates were higher in the United States at virtually every gestation. The overall stillbirth rates (per 10,000 foetuses at risk) among Blacks and Whites in the United States, and in Canada were 59.6, 35.0 and 38.3, respectively, whereas the corresponding infant mortality rates were 85.6, 49.7 and 42.2, respectively. 
Differences in gestational age distributions and in gestational age-specific stillbirth and infant mortality in the United States and Canada underscore substantial differences in healthcare services, population health status and health policy between the two neighbouring countries.
Optimal estimation of entanglement in optical qubit systems
NASA Astrophysics Data System (ADS)
Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander P.
2011-05-01
We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound on precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of a Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.
Population size and winter distribution of eastern American oystercatchers
Brown, S.C.; Schulte, Shiloh A.; Harrington, B.; Winn, Brad; Bart, J.; Howe, M.
2005-01-01
Conservation of the eastern subspecies of the American oystercatcher (Haematopus palliatus palliatus) is a high priority in the U.S. Shorebird Conservation Plan, but previous population estimates were unreliable, information on distribution and habitat associations during winter was incomplete, and methods for long-term monitoring had not been developed prior to this survey. We completed the aerial survey proposed in the U.S. Shorebird Conservation Plan to determine population size, winter distribution, and habitat associations. We conducted coastal aerial surveys from New Jersey to Texas during November 2002 to February 2003. This area comprised the entire wintering range of the eastern American oystercatcher within the United States. Surveys covered all suitable habitat in the United States for the subspecies, partitioned into 3 survey strata: known roost sites, high-use habitat, and inter-coastal tidal habitat. We determined known roost sites from extensive consultation with biologists and local experts in each state. High-use habitat included sand islands, sand spits at inlets, shell rakes, and oyster reefs. Partner organizations conducted ground counts in most states. We used high resolution still photography to determine detection rates for estimates of the number of birds in particular flocks, and we used ground counts to determine detection rates of flocks. Using a combination of ground and aerial counts, we estimated the population of eastern American oystercatchers to be 10,971 +/- 298. Aerial surveys can serve an important management function for shorebirds and possibly other coastal waterbirds by providing population status and trend information across a wide geographic scale.
The Impact of Immigration on Congressional Representation.
ERIC Educational Resources Information Center
Bouvier, Leon
Explanation of shifts in U.S. Congressional representation among states have often overlooked the effects of international migration on the size and distribution of the U.S. population. Seventy percent of recent U.S. immigrants have settled in California, New York, Texas, Florida, New Jersey, and Illinois. Estimates of the distribution of…
Defining conservation priorities using fragmentation forecasts
David Wear; John Pye; Kurt H. Riitters
2004-01-01
Methods are developed for forecasting the effects of population and economic growth on the distribution of interior forest habitat. An application to the southeastern United States shows that models provide significant explanatory power with regard to the observed distribution of interior forest. Estimates for economic and biophysical variables are significant and...
An Overview of Distributed Microgrid State Estimation and Control for Smart Grids
Rana, Md Masud; Li, Li
2015-01-01
Given the significant concerns regarding carbon emission from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. This article then proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides more accurate SE and control than the existing method. PMID:25686316
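A minimal scalar predict/update cycle illustrates the KF machinery the abstract builds on (generic textbook form; the paper's accuracy-dependent variant and its microgrid model are not reproduced here):

```python
def kf_step(x, p, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of a scalar Kalman filter for the model
    x_k = a*x_{k-1} + w (var q), z_k = h*x_k + v (var r)."""
    x_pred = a * x                            # predict state
    p_pred = a * p * a + q                    # predict covariance
    k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)     # correct with the innovation
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                     # poor initial guess, high uncertainty
for z in [1.0, 1.0, 1.0]:           # repeated measurements near 1.0
    x, p = kf_step(x, p, z)
```

After three updates the estimate converges toward the measured value while the covariance p shrinks, which is the behavior any microgrid SE built on KF relies on.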
2017-01-01
This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations. PMID:28459848
NASA Astrophysics Data System (ADS)
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery remaining useful life. Main challenges for the state estimation for LiFePO4 batteries are the flat characteristic of open-circuit-voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations to handle nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against the estimation with an Extended Kalman filter.
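A bootstrap particle filter of the kind referred to above as a Sequential Monte Carlo filter can be sketched as follows; the direct-SOC measurement model, noise levels and function names are simplifying assumptions, not the paper's cell model:

```python
import math
import random

def particle_filter_soc(measurements, n=500, proc_std=0.01,
                        meas_std=0.05, drain=0.01, seed=1):
    """Bootstrap particle filter (Sequential Monte Carlo) for SOC tracking.
    Model: soc_k = soc_{k-1} - drain + noise; the measurement is treated as
    a direct noisy SOC reading, a stand-in for a real OCV(SOC) curve."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in measurements:
        # Predict: propagate each particle through the discharge model.
        particles = [p - drain + rng.gauss(0.0, proc_std) for p in particles]
        # Weight by the Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2)
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample (simple multinomial) to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

truth = [0.8 - 0.01 * k for k in range(20)]     # true SOC trace
noise = random.Random(7)
est = particle_filter_soc([s + noise.gauss(0.0, 0.05) for s in truth])
```

Because particles carry an arbitrary empirical distribution, this approach tolerates the non-Gaussian errors and flat OCV-SOC characteristic that the abstract identifies as problematic for classical Kalman filtering.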
Isonymy structure of Sucre and Táchira, two Venezuelan states.
Rodríguez-Larralde, A; Barrai, I
1997-10-01
The isonymy structure of two Venezuelan states, Sucre and Táchira, is described using the surnames of the Register of Electors updated in 1991. The frequency distribution of surnames pooled together by sex was obtained for the 57 counties of Sucre and the 52 counties of Táchira, based on total population sizes of 158,705 and 160,690 individuals, respectively. The coefficient of consanguinity resulting from random isonymy (phi ii), Karlin and McGregor's ni (identical to v), and the proportion of the population included in surnames represented only once (estimator A) and in the seven most frequent surnames (estimator B) were calculated for each county. RST, a measure of microdifferentiation, was estimated for each state. The Euclidean distance between pairs of counties within states was calculated together with the corresponding geographic distances. The correlations between their logarithmic transformations were significant in both cases, indicating differentiation of surnames by distance. Dendrograms based on the Euclidean distance matrix were constructed. From them a first approximation of the effect of internal migration within states was obtained. Ninety-six percent of the coefficient of consanguinity resulting from random isonymy is determined by the proportion of the population included in the seven most frequent surnames, whereas between 72% and 88% of Karlin and McGregor's ni for Sucre and Táchira, respectively, is determined by the proportion of population included in surnames represented only once. Surnames with generalized and with focal distribution were identified for both states, to be used as possible indicators of the geographic origin of their carriers. Our results indicate that Táchira's counties, on average, tend to be more isolated than Sucre's counties, as measured by RST, estimator B, and phi ii. Comparisons with the results obtained for other Venezuelan states and other non-Venezuelan populations are also given.
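The random-isonymy quantities named above can be computed directly from surname counts; this sketch uses the standard unbiased estimator and the conventional phi = I/4 relation, which may differ in detail from the paper's estimators:

```python
from collections import Counter

def random_isonymy(surnames):
    """Unbiased random isonymy I = sum n_i(n_i - 1) / (N(N - 1)) from a list
    of surnames, and the random consanguinity estimate phi = I/4."""
    counts = Counter(surnames)
    n_total = sum(counts.values())
    i_random = (sum(n * (n - 1) for n in counts.values())
                / (n_total * (n_total - 1)))
    return i_random, i_random / 4.0

# Toy register: 4 Perez, 3 Gomez, three singleton surnames (N = 10).
names = ["Perez"] * 4 + ["Gomez"] * 3 + ["Rodriguez", "Blanco", "Silva"]
i_r, phi = random_isonymy(names)
```

The example also makes the paper's decomposition tangible: frequent surnames dominate I (and hence phi), while singletons drive diversity measures such as Karlin and McGregor's ni.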
Design of a two-level power system linear state estimator
NASA Astrophysics Data System (ADS)
Yang, Tao
The availability of synchro-phasor data has raised the possibility of a linear state estimator if the inputs are only complex currents and voltages and if there are enough such measurements to meet observability and redundancy requirements. Moreover, the new digital substations can perform some of the computation at the substation itself resulting in a more accurate two-level state estimator. The objective of this research is to develop a two-level linear state estimator processing synchro-phasor data and estimating the states at both the substation level and the control center level. Both the mathematical algorithms that are different from those in the present state estimation procedure and the layered architecture of databases, communications and application programs that are required to support this two-level linear state estimator are described in this dissertation. Besides, as the availability of phasor measurements at substations will increase gradually, this research also describes how the state estimator can be enhanced to handle both the traditional state estimator and the proposed linear state estimator simultaneously. This provides a way to immediately utilize the benefits in those parts of the system where such phasor measurements become available and provides a pathway to transition to the smart grid of the future. The design procedure of the two-level state estimator is applied to two study systems. The first study system is the IEEE-14 bus system. The second one is the 179 bus Western Electricity Coordinating Council (WECC) system. The static database for the substations is constructed from the power flow data of these systems and the real-time measurement database is produced by a power system dynamic simulation tool (TSAT). Time-skew problems that may be caused by communication delays are also considered and simulated. We used the Network Simulator (NS) tool to simulate a simple communication system and analyse its time delay performance. 
These time delays were too small to affect the results especially since the measurement data is time-stamped and the state estimator for these small systems could be run with subseconf frequency. Keywords: State Estimation, Synchro-Phasor Measurement, Distributed System, Energy Control Center, Substation, Time-skew
Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal
2015-05-01
For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques.
The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
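The event-triggering concept in the abstract above can be sketched as a simple gate in front of the estimator: the expensive SE solve runs only when the new measurements show enough novelty relative to what the last estimate predicts. The norm-based novelty measure, threshold, and function names below are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def event_triggered_estimate(z_new, z_pred, threshold, run_estimator):
    """Invoke the (expensive) state estimator only when the new measurement
    vector departs sufficiently from the value predicted by the last estimate.

    Returns (estimate_or_None, triggered_flag)."""
    # Relative novelty of the measurements; guard against a zero prediction.
    novelty = np.linalg.norm(z_new - z_pred) / max(np.linalg.norm(z_pred), 1e-12)
    if novelty > threshold:
        return run_estimator(z_new), True   # estimator invoked
    return None, False                      # last estimate still considered valid
```

Under quiet operation the estimator is skipped entirely, which is where the reported computational savings come from; during a wind spike or ramp event the gate fires and the full solve runs.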
NASA Astrophysics Data System (ADS)
Ohta, Akio; Truyen, Nguyen Xuan; Fujimura, Nobuyuki; Ikeda, Mitsuhisa; Makihara, Katsunori; Miyazaki, Seiichi
2018-06-01
The energy distribution of the electronic state density of wet-cleaned epitaxial GaN surfaces and SiO2/GaN structures has been studied by total photoelectron yield spectroscopy (PYS). By X-ray photoelectron spectroscopy (XPS) analysis, the energy band diagram of a wet-cleaned epitaxial GaN surface, including the energy level of the valence band top and the electron affinity, has been determined to obtain a better understanding of the measured PYS signals. The electronic state density of GaN surfaces with different carrier concentrations has been evaluated in the energy region corresponding to the GaN bandgap. The interface defect state density of SiO2/GaN structures was also estimated by both PYS analysis and capacitance–voltage (C–V) characteristics. We have demonstrated that PYS analysis enables the evaluation of the defect state density filled with electrons at the SiO2/GaN interface in the energy region corresponding to the GaN midgap, which is difficult to estimate by C–V measurement of MOS capacitors.
DOT National Transportation Integrated Search
2013-11-01
TEA 21 (Transportation Equity Act 21) of 1998 allows heavy sugarcane truck loads on Louisiana interstate highways. These heavier loads are currently being applied to state and parish roads through trucks traveling from and to the processing plants....
AGRICULTURAL AMMONIA EMISSIONS AND AMMONIUM DEPOSITION IN THE SOUTHEAST UNITED STATES
The paper gives an estimate of county-scale annual ammonia (NH3) emissions in eight Southeastern States for the year 1997, using emission factors and activity data for all domestic livestock and fertilizer sources. A geographical distribution of the data yields local areas (1000...
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
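The nonlinear weighted least-squares step that several of these abstracts rely on (here, and in the WLS state estimator described at the top of this collection) is typically solved by Gauss-Newton iteration. The following is a generic sketch of that solver, not the flight algorithm itself; the measurement function, Jacobian, and starting point are placeholders supplied by the caller.

```python
import numpy as np

def wls_gauss_newton(h, jac, z, W, x0, iters=20, tol=1e-9):
    """Gauss-Newton solution of min_x (z - h(x))^T W (z - h(x)).

    h   : measurement function, h(x) -> ndarray
    jac : Jacobian of h, jac(x) -> 2-D ndarray
    z   : measurement vector
    W   : weight matrix (inverse measurement covariance)
    x0  : initial state guess"""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = z - h(x)                                   # residual
        H = jac(x)
        # Normal equations of the weighted linearized problem.
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

With identity weights this reduces to ordinary nonlinear least squares; in a state estimator W encodes the relative trust placed in each measurement.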
Occupancy estimation and modeling with multiple states and state uncertainty
Nichols, J.D.; Hines, J.E.; MacKenzie, D.I.; Seamans, M.E.; Gutierrez, R.J.
2007-01-01
The distribution of a species over space is of central interest in ecology, but species occurrence does not provide all of the information needed to characterize either the well-being of a population or the suitability of occupied habitat. Recent methodological development has focused on drawing inferences about species occurrence in the face of imperfect detection. Here we extend those methods by characterizing occupied locations by some additional state variable (e.g., as producing young or not). Our modeling approach deals with both detection probabilities less than 1 and uncertainty in state classification. We then use the approach with occupancy and reproductive rate data from California Spotted Owls (Strix occidentalis occidentalis) collected in the central Sierra Nevada during the breeding season of 2004 to illustrate the utility of the modeling approach. Estimates of owl reproductive rate were larger than naive estimates, indicating the importance of appropriately accounting for uncertainty in detection and state classification.
Assessment of Moderate- and High-Temperature Geothermal Resources of the United States
Williams, Colin F.; Reed, Marshall J.; Mariner, Robert H.; DeAngelo, Jacob; Galanis, S. Peter
2008-01-01
Scientists with the U.S. Geological Survey (USGS) recently completed an assessment of our Nation's geothermal resources. Geothermal power plants are currently operating in six states: Alaska, California, Hawaii, Idaho, Nevada, and Utah. The assessment indicates that the electric power generation potential from identified geothermal systems is 9,057 Megawatts-electric (MWe), distributed over 13 states. The mean estimated power production potential from undiscovered geothermal resources is 30,033 MWe. Additionally, another estimated 517,800 MWe could be generated through implementation of technology for creating geothermal reservoirs in regions characterized by high temperature, but low permeability, rock formations.
Supply, distribution, and capacity of optometrists in Indiana.
Marshall, E C
2000-05-01
The Indiana Optometric Association and the Indiana Health Care Professional Development Commission identified a need to collect and analyze data on the health professions workforce for formulating goals and strategies to accommodate demands for health care services in Indiana. This study looks at the supply, distribution, and services of optometrists practicing in Indiana. Data compiled by the Indiana State Department of Health, Indiana Health Care Development Commission, and the Project HOPE Center for Health Affairs were analyzed with the results of a survey of practitioner members of the Indiana Optometric Association. Supply, distribution, services, provider-to-population ratios, per capita demand, and optometric productivity were used to evaluate the current and future capacity of Indiana optometrists to the year 2010. An estimated 893 optometrists practiced in 86 of 92 counties and comprised 77% of the state's licensed eye and vision care workforce in 1995. Optometric workforce capacity appeared to be related to county population, but unrelated to the urban/rural classification or the per-capita income of Indiana counties. Contact lenses, disease, geriatrics, and pediatrics were the most prevalent areas of practice specialty. Optometrist capacity in Indiana is sufficient at both the state and county levels, and optometric services are appropriately distributed such that patient access to optometric care is geographically unburdened. Estimates regarding supply are elastic, depending on the assumptions applied.
Biomarker measurements are used in three ways: 1) evaluating the time course and distribution of a chemical in the body, 2) estimating previous exposure or dose, and 3) assessing disease state. Blood and urine measurements are the primary methods employed. Of late, it has been ...
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2017-01-01
Student Growth Percentiles (SGPs) increasingly are being used in the United States for inferences about student achievement growth and educator effectiveness. Emerging research has indicated that SGPs estimated from observed test scores have large measurement errors. As such, little is known about "true" SGPs, which are defined in terms…
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
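The maximum likelihood estimation mentioned in the abstract above can be sketched with SciPy's beta-binomial distribution. This is an illustrative fit of the two shape parameters to dichotomous count data, not the authors' procedure; the log-parameterization is an assumption made here to keep the parameters positive.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def fit_betabinom(k, n):
    """Maximum-likelihood estimates of the beta-binomial shape parameters
    (alpha, beta) from counts k (infected trees, say) out of n trials each."""
    def nll(p):
        a, b = np.exp(p)                       # optimize on the log scale
        return -betabinom.logpmf(k, n, a, b).sum()
    res = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
    return np.exp(res.x)                       # back-transform to (alpha, beta)
```

The fitted (alpha, beta) then describe both the mean infection rate alpha/(alpha+beta) and the extra-binomial variation that a plain binomial model would miss.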
Perspectives / The Uphill Climb
ERIC Educational Resources Information Center
Scherer, Marge
2013-01-01
The video "Wealth Inequality in America" sets forth a compelling animated chart depicting three ideas: (1) how Americans think wealth is distributed in the United States; (2) how they believe wealth ideally should be distributed; and (3) how the estimated $54 trillion of U.S. wealth is really divided. The hard-to-miss conclusion of the…
Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.
Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun
2014-01-01
In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures are operated in a distributed manner. A frame format is designed considering the characteristic of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol with respect to the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it can easily be placed on top of other resource allocation methods focusing on the physical layer issues of the resource management problem and interworked with them.
Distributed finite-time containment control for double-integrator multiagent systems.
Wang, Xiangyu; Li, Shihua; Shi, Peng
2014-09-01
In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.
Load distribution and fatigue cost estimates of heavy truck loads on Louisiana state bridges.
DOT National Transportation Integrated Search
2013-11-01
The bridge in this study was evaluated and a monitoring system was installed to investigate the effects of heavy loads and the cost of fatigue for bridges on state highways in Louisiana. Also, this study is used to respond to Louisiana Senate Con...
Mallard harvest distributions in the Mississippi and Central Flyways
Green, A.W.; Krementz, D.G.
2008-01-01
The mallard (Anas platyrhynchos) is the most harvested duck in North America. A topic of debate among hunters, especially those in Arkansas, USA, is whether wintering distributions of mallards have changed in recent years. We examined distributions of mallards in the Mississippi (MF) and Central Flyways during hunting seasons 1980-2003 to determine if and why harvest distributions changed. We used Geographic Information Systems to analyze spatial distributions of band recoveries and harvest estimated using data from the United States Fish and Wildlife Service Parts Collection Survey. Mean latitudes of band recoveries and harvest estimates showed no significant trends across the study period. Despite slight increases in band recoveries and harvest on the peripheries of kernel density estimates, most harvest occurred in eastern Arkansas and northwestern Mississippi, USA, in all years. We found no evidence for changes in the harvest distributions of mallards. We believe that the late 1990s were years of exceptionally high harvest in the lower MF and that slight shifts northward since 2000 reflect a return to harvest distributions similar to those of the early 1980s. Our results provide biologists with possible explanations to hunter concerns of fewer mallards available for harvest.
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. 
We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
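The Welch-Satterthwaite formula that the abstract above proposes to replace computes the effective degrees of freedom nu_eff = u_c^4 / sum_i(u_i^4 / nu_i), where u_c^2 = sum_i(u_i^2) is the combined standard uncertainty. A direct sketch of that calculation:

```python
def welch_satterthwaite_dof(u, nu):
    """Effective degrees of freedom per the Welch-Satterthwaite formula
    (ISO-GUM, G.4.1): nu_eff = u_c^4 / sum_i(u_i^4 / nu_i),
    where u_c^2 = sum_i(u_i^2) is the combined standard uncertainty.

    u  : sequence of component standard uncertainties u_i
    nu : sequence of corresponding degrees of freedom nu_i"""
    uc2 = sum(ui ** 2 for ui in u)
    return uc2 ** 2 / sum(ui ** 4 / vi for ui, vi in zip(u, nu))
```

The Bayesian alternative discussed in the abstract removes this calculation entirely by using an approximate normal distribution; the sketch here only makes explicit the bookkeeping it eliminates.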
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.
Han, Fang; Liu, Han
2017-02-01
The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both the spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
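The transformed Kendall's tau estimator referenced above maps each pairwise tau to sin(pi/2 * tau), which under an elliptical copula recovers the latent Pearson correlation regardless of the marginal transformations. A minimal sketch (function name and loop structure are this editor's, not the paper's):

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation(X):
    """Rank-based estimate of the latent Pearson correlation matrix under an
    elliptical copula: R[j, k] = sin(pi/2 * tau_jk), following Han and Liu.

    X : (n, p) data matrix; marginals may be arbitrarily monotone-transformed."""
    p = X.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(X[:, j], X[:, k])   # pairwise Kendall's tau
            R[j, k] = R[k, j] = np.sin(np.pi / 2 * tau)
    return R
```

Because Kendall's tau depends only on ranks, this estimate is unchanged by monotone marginal transformations and is far less sensitive to heavy tails than Pearson's sample correlation.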
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
Attanasi, E.D.; Freeman, P.A.
2016-03-02
The retention factor is the percentage of injected CO2 that is naturally retained in the reservoir. Retention factors were also estimated in this study. For clastic reservoirs, 90 percent of the estimated retention factors were between 21.7 and 32.1 percent, and for carbonate reservoirs, 90 percent were between 23.7 and 38.2 percent. The respective median values were 22.9 for clastic reservoirs and 26.1 for carbonate reservoirs. Both distributions were right skewed. The recovery and retention factors that were calculated are consistent with the corresponding factors reported in the literature.
HIGH-TEMPERATURE GEOTHERMAL RESOURCES IN HYDROTHERMAL CONVECTION SYSTEMS IN THE UNITED STATES.
Nathenson, Manuel
1983-01-01
The calculation of high-temperature geothermal resources (greater than 150 °C) in the United States has been done by estimating the temperature, area, and thickness of each identified system. These data, along with a general model for recoverability of geothermal energy and a calculation that takes account of the conversion of thermal energy to electricity, yielded an estimate of 23,000 MWe for 30 years. The undiscovered component was estimated based on multipliers of the identified resource as either 72,000 or 127,000 MWe for 30 years, depending on the model chosen for the distribution of undiscovered energy as a function of temperature.
Extended analysis of the Trojan-horse attack in quantum key distribution
NASA Astrophysics Data System (ADS)
Vinay, Scott E.; Kok, Pieter
2018-04-01
The discrete-variable quantum key distribution protocols based on the 1984 protocol of Bennett and Brassard (BB84) are known to be secure against an eavesdropper, Eve, intercepting the flying qubits and performing any quantum operation on them. However, these protocols may still be vulnerable to side-channel attacks. We investigate the Trojan-horse side-channel attack where Eve sends her own state into Alice's apparatus and measures the reflected state to estimate the key. We prove that the separable coherent state is optimal for Eve among the class of multimode Gaussian attack states, even in the presence of thermal noise. We then provide a bound on the secret key rate in the case where Eve may use any separable state.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
Plotnick, Robert D.; Romich, Jennifer; Thacker, Jennifer; Dunbar, Matthew
2011-01-01
This study contributes to the debate about tolls’ equity impacts by examining the potential economic costs of tolling for low-income and non-low-income households. Using data from the Puget Sound metropolitan region in Washington State and GIS methods to map driving routes from home to work, we examine car ownership and transportation patterns among low-income and non-low-income households. We follow standard practice of estimating tolls’ potential impact only on households with workers who would drive on tolled and non-tolled facilities. We then redo the analysis including broader groups of households. We find that the degree of regressivity is quite sensitive to the set of households included in the analysis. The results suggest that distributional analyses of tolls should estimate impacts on all households in the relevant region in addition to impacts on just users of roads that are currently tolled or likely to be tolled. PMID:21818172
Hadronic production of the P-wave excited B_c states (B*_{cJ,L=1})
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.-H.; Wang, J.-X.
2004-12-01
Adopting the complete α_s^4 approach of perturbative QCD and the updated parton distribution functions, we have estimated the hadronic production of the P-wave excited B_c states (B*_{cJ,L=1}). In the estimate, special care has been paid to the dependence of the production amplitude on the derivative of the wave function at the origin, which is obtained from the potential model. For experimental reference, the main theoretical uncertainties are discussed, and the total cross section as well as the distributions of the production with reasonable cuts at the energies of the Tevatron and the CERN LHC are computed and presented. The results show that the P-wave production may contribute to the B_c-meson production indirectly by a factor of about 0.5 of the direct production, and according to the estimated cross section, it is further worthwhile to study the possibility of observing the P-wave production itself experimentally.
Extracting volatility signal using maximum a posteriori estimation
NASA Astrophysics Data System (ADS)
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose, a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations, and hence log-return marginal distributions with heavy tails. We consider two routes for choosing the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
NASA Technical Reports Server (NTRS)
Halpern, D.; Fu, L.; Knauss, W.; Pihos, G.; Brown, O.; Freilich, M.; Wentz, F.
1995-01-01
The following monthly mean global distributions for 1993 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States (U.S.) Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the Advanced Very High Resolution Radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) satellite; 10-m height wind speed and direction estimated from the Active Microwave Instrument (AMI) on the European Space Agency (ESA) European Remote Sensing (ERS-1) satellite; sea surface height estimated from the joint U.S.-France Topography Experiment (TOPEX)/POSEIDON spacecraft; and 10-m height wind speed and direction produced by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of annual mean, monthly mean, and sampling distributions are displayed.
A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.
Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing
2018-03-07
The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimates of the current noise covariances of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with the estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
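The innovation-based covariance update mentioned above is, in its standard textbook form, a windowed sample covariance of the innovations with the predicted measurement covariance subtracted out. The sketch below shows that one step only, under assumed names and a linear measurement model; it is not the RAUKF algorithm itself.

```python
import numpy as np

def adapt_measurement_noise(innovations, H, P_prior):
    """Innovation-based estimate of the measurement-noise covariance:
    R_hat = mean(d d^T over a sliding window) - H P^- H^T,
    where d are the filter innovations and P^- the prior state covariance.

    innovations : (N, m) array of recent innovation vectors
    H           : (m, n) measurement matrix (linear model assumed here)
    P_prior     : (n, n) prior state covariance"""
    d = np.atleast_2d(innovations)
    C = (d.T @ d) / d.shape[0]        # sample innovation covariance
    return C - H @ P_prior @ H.T      # remove the state-uncertainty share
```

A filter would then blend this estimate with the previous R via a weighting factor rather than adopt it outright, which is the role the abstract assigns to the weighting factor.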
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In the particle filter, the state transition model for predicting the next location of the tracked object assumes that the object's motion is invariant, which cannot well approximate the varying dynamics of motion changes. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both of these factors can degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. Experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
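A minimal 1D sketch in the spirit of the velocity-updating transition model above: each particle carries its own velocity, so the assumed motion adapts rather than staying fixed. The noise levels, particle count, and weighted-mean estimate are illustrative assumptions, not the paper's exact design.

```python
import math
import random

random.seed(0)

def particle_filter_track(observations, n=500, q_pos=0.2, q_vel=0.05, r=0.5):
    """Track a 1D position from noisy observations.

    Each particle carries [position, velocity]; the transition model moves a
    particle by its own velocity, loosely mirroring the VTM idea above.
    """
    parts = [[observations[0] + random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
             for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: propagate each particle with its velocity plus noise
        for p in parts:
            p[1] += random.gauss(0.0, q_vel)
            p[0] += p[1] + random.gauss(0.0, q_pos)
        # update: Gaussian measurement likelihood
        w = [math.exp(-0.5 * ((z - p[0]) / r) ** 2) for p in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        # posterior-weighted mean as the state estimate
        estimates.append(sum(wi * p[0] for wi, p in zip(w, parts)))
        # resample (copy lists so particles stay independent)
        parts = [list(p) for p in random.choices(parts, weights=w, k=n)]
    return estimates

# constant-velocity target observed with noise
obs = [k * 1.0 + random.gauss(0.0, 0.5) for k in range(21)]
est = particle_filter_track(obs)
```

Because the per-particle velocities are refined by resampling, the filter keeps up with the moving target even though the motion model is never told the true velocity.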
Estimation for the Linear Model With Uncertain Covariance Matrices
NASA Astrophysics Data System (ADS)
Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat
2014-03-01
We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of the estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
Neural correlates of the divergence of instrumental probability distributions.
Liljeholm, Mimi; Wang, Shuo; Zhang, June; O'Doherty, John P
2013-07-24
Flexible action selection requires knowledge about how alternative actions impact the environment: a "cognitive map" of instrumental contingencies. Reinforcement learning theories formalize this map as a set of stochastic relationships between actions and states, such that for any given action considered in a current state, a probability distribution is specified over possible outcome states. Here, we show that activity in the human inferior parietal lobule correlates with the divergence of such outcome distributions (a measure that reflects whether discrimination between alternative actions increases the controllability of the future) and, further, that this effect is dissociable from those of other information theoretic and motivational variables, such as outcome entropy, action values, and outcome utilities. Our results suggest that, although ultimately combined with reward estimates to generate action values, outcome probability distributions associated with alternative actions may be contrasted independently of valence computations, to narrow the scope of the action selection problem.
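The divergence between two actions' outcome distributions can be illustrated with the Jensen-Shannon divergence, a common bounded choice; the abstract does not specify the exact measure used in the study, so this is only an analogous sketch.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits (assumes matching supports)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: how discriminable two outcome
    distributions are; 0 when identical, 1 bit when disjoint."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# two actions with identical outcome distributions: choosing between
# them adds no control over the future
same = js_divergence([0.5, 0.5], [0.5, 0.5])
# two actions with disjoint outcomes: the choice fully determines
# the outcome state
disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])
```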
NASA Astrophysics Data System (ADS)
Gaidash, A. A.; Egorov, V. I.; Gleim, A. V.
2016-08-01
Quantum cryptography allows secure keys to be distributed between two users such that any eavesdropping attempt is immediately discovered. However, in practice an eavesdropper can obtain key information from multi-photon states when attenuated laser radiation is used as the source of quantum states. To counter such an eavesdropper, it is generally suggested to implement special cryptographic protocols, like decoy states or SARG04. In this paper, we describe an alternative method based on monitoring photon number statistics after detection. We provide a useful rule of thumb to estimate the approximate order of the difference between the expected photon-number distribution and the distribution observed under attack. A formula for calculating the minimum number of pulses or time gaps needed to resolve an attack is given. Formulas for the actual fraction of the raw key known to Eve are also derived. This method can therefore be used with any system, even in combination with the special protocols mentioned above.
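The "minimum number of pulses to resolve an attack" idea can be given some flavor with a standard two-proportion detection bound; this textbook bound is an illustrative stand-in, not the formula derived in the paper.

```python
import math

def min_pulses_to_resolve(p0, p1, z=3.0):
    """Rough number of pulses needed to distinguish an expected event
    fraction p0 from an attack-shifted fraction p1 at z sigma.

    Illustrative statistics-textbook bound, not the paper's formula:
    N ~ z^2 * p0 * (1 - p0) / (p1 - p0)^2.
    """
    return math.ceil(z ** 2 * p0 * (1.0 - p0) / (p1 - p0) ** 2)

# e.g. a coherent state with mean photon number 0.1 has a multi-photon
# fraction of roughly 0.5 * 0.1**2 = 0.005; if an attack doubled it:
n = min_pulses_to_resolve(0.005, 0.010, z=3.0)
```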
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
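The BMA variance decomposition described above (weighted within-parameterization variance plus between-parameterization variance of the means) can be sketched as follows; the softmax-style weighting of misfits is an illustrative stand-in for the paper's NLSE posterior-probability computation.

```python
import math

def bma_combine(means, variances, misfits):
    """Combine estimates from several parameterization methods.

    Posterior model weights are taken proportional to exp(-misfit)
    (illustrative); the total variance is the weighted within-model
    variance plus the between-model variance of the means.
    """
    w = [math.exp(-m) for m in misfits]
    s = sum(w)
    w = [x / s for x in w]
    mean = sum(wi * mu for wi, mu in zip(w, means))
    within = sum(wi * v for wi, v in zip(w, variances))
    between = sum(wi * (mu - mean) ** 2 for wi, mu in zip(w, means))
    return mean, within + between

# two parameterizations with equal misfit get weights 0.5 / 0.5
m, var = bma_combine([1.0, 3.0], [1.0, 1.0], [2.0, 2.0])
```

Note how the total variance (here 2.0) exceeds either model's own variance (1.0): disagreement between parameterizations shows up as extra uncertainty.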
Upper-Bound Estimates Of SEU in CMOS
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1990-01-01
Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.
On sequential data assimilation for scalar macroscopic traffic flow models
NASA Astrophysics Data System (ADS)
Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel
2012-09-01
We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models are investigated, derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performance of classical filters and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available on-line at http://traffic.berkeley.edu for facilitating further efforts on this topic and fostering reproducible research.
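For concreteness, a minimal Godunov step for the scalar LWR model with a Greenshields flux f(r) = r(1 - r) is sketched below; the grid size, flux law, and periodic boundaries are illustrative choices, while the article treats the scheme in its general filtering context.

```python
def godunov_step(rho, lam):
    """One Godunov update for the scalar LWR model with f(r) = r*(1 - r).

    The interface flux is min(demand(left), supply(right)), the sending/
    receiving form used by the Cell-Transmission Model. lam = dt/dx must
    satisfy the CFL condition lam * max|f'| <= 1 (here max|f'| = 1).
    """
    def f(r):
        return r * (1.0 - r)

    def demand(r):                 # what the upstream cell can send
        return f(r) if r < 0.5 else 0.25

    def supply(r):                 # what the downstream cell can receive
        return f(r) if r > 0.5 else 0.25

    n = len(rho)
    # flux[i]: flow across the interface between cell i and cell i+1
    # (periodic boundaries, so the last cell wraps to the first)
    flux = [min(demand(rho[i]), supply(rho[(i + 1) % n])) for i in range(n)]
    return [rho[i] - lam * (flux[i] - flux[i - 1]) for i in range(n)]

# a shock profile: dense upstream, light downstream
rho = [0.8] * 5 + [0.2] * 5
new = godunov_step(rho, 0.5)
```

The shock in this example is exactly the kind of discontinuity the article identifies as producing mixture-distributed uncertainty for classical filters.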
Conservation assessment of the Yazoo Darter (Etheostoma raneyi)
Ken A. Sterling; Warren L. Jr. Warren; Henderson L. Gayle
2013-01-01
We summarized all known historical and contemporary data on the geographic distribution of Etheostoma raneyi (Yazoo Darter), a range-restricted endemic in the Little Tallahatchie and Yocona rivers (upper Yazoo River basin), MS. We identified federal and state land ownership in relation to the darter's distribution and provided quantitative estimates of abundance of the...
Physician Dispensing of Oxycodone and Other Commonly Used Opioids, 2000-2015, United States.
Mack, Karin Ann; Jones, Christopher McCall; McClure, Roderick John
2018-05-01
An average of 91 people in the United States die every day from an opioid-related overdose (including prescription opioids and heroin). The direct dispensing of opioids from health care practitioner offices has been linked to opioid-related harms. The objective of this study is to describe the changing nature of the volume of this type of prescribing at the state level. This descriptive study examines the distribution of opioids by practitioners using 1999-2015 Automation of Reports and Consolidated Orders System data. Analyses were restricted to opioids distributed to practitioners. Amount distributed (morphine milligram equivalents [MMEs]) and number of practitioners are presented. Patterns of distribution to practitioners and the number of practitioners varied markedly by state and changed dramatically over time. Comparing 1999 with 2015, the MME distributed to dispensing practitioners decreased in 16 states and increased in 35. Most notable was the change in Florida, which saw a peak of 8.94 MMEs per 100,000 persons in 2010 (the highest distribution in all states in all years) and a low of 0.08 in 2013. This study presents the first state estimates of office-based dispensing of opioids. Increases in direct dispensing in recent years may indicate a need to monitor this practice and consider whether changes are needed. Using controlled substances data to identify high prescribers and dispensers of opioids, as well as examining overall state trends, is a foundational activity to informing the response to potentially high-risk clinical practices.
Ehrenfeld, Stephan; Butz, Martin V
2013-02-01
Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated, nine degree of freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.
Anomalous diameter distribution shifts estimated from FIA inventories through time
Francis A. Roesch; Paul C. Van Deusen
2010-01-01
In the past decade, the United States Department of Agriculture Forest Service's Forest Inventory and Analysis Program (FIA) has replaced regionally autonomous, periodic, state-wide forest inventories using various probability proportional to tree size sampling designs with a nationally consistent annual forest inventory design utilizing systematically spaced clusters...
Gravity Field Characterization around Small Bodies
NASA Astrophysics Data System (ADS)
Takahashi, Yu
A small body rendezvous mission requires accurate gravity field characterization for safe, accurate navigation purposes. However, current techniques of gravity field modeling around small bodies have not reached a satisfactory level. This thesis addresses how the process of gravity field characterization can be made more robust for future small body missions. First, we perform covariance analysis around small bodies via multiple slow flybys. Flyby characterization requires less laborious scheduling than its orbit counterpart, simultaneously reducing the risk of impact into the asteroid's surface. It will be shown that the level of initial characterization that can occur with this approach is no less than with the orbit approach. Next, we apply the same technique of gravity field characterization to estimate the spin state of 4179 Toutatis, a near-Earth asteroid close to a 4:1 resonance with the Earth. The data accumulated from 1992-2008 are processed in a least-squares filter to predict Toutatis' orientation during the 2012 apparition. The center-of-mass offset and the moments of inertia estimated thereby can be used to constrain the internal density distribution within the body. The spin state estimation is then generalized into a method for estimating the internal density distribution within a small body. The density distribution is estimated from the orbit determination solution of the gravitational coefficients. It will be shown that the surface gravity field reconstructed from the estimated density distribution yields higher accuracy than conventional gravity field models. Finally, we investigate two types of relatively unknown gravity fields, namely the interior gravity field and the interior spherical Bessel gravity field, in order to investigate how accurately the surface gravity field can be mapped out for proximity operations purposes.
It will be shown that these formulations compute the surface gravity field with unprecedented accuracy for a well-chosen set of parametric settings, both regionally and globally.
Sovada, Marsha A.; Woodward, Robert O.; Igl, Lawrence D.
2009-01-01
The Swift Fox (Vulpes velox) was once common in the shortgrass and mixed-grass prairies of the Great Plains of North America. The species' abundance declined and its distribution retracted following European settlement of the plains. By the late 1800s, the species had been largely extirpated from the northern portion of its historical range, and its populations were acutely depleted elsewhere. Swift Fox populations have naturally recovered somewhat since the 1950s, but overall abundance and distribution remain below historical levels. In a 1995 assessment of the species' status under the US Endangered Species Act, the US Fish and Wildlife Service concluded that a designation of threatened or endangered was warranted, but the species was "precluded from listing by higher listing priorities." A major revelation of the 1995 assessment was the recognition that information useful for determining population status was limited. Fundamental information was missing, including an accurate estimate of the species' distribution before European settlement and an estimate of the species' current distribution and trends. The objectives of this paper are to fill those gaps in knowledge. Historical records were compiled and, in combination with knowledge of the habitat requirements of the species, the historical range of the Swift Fox is estimated to be approximately 1.5 million km2. Using data collected between 2001 and 2006, the species' current distribution is estimated to be about 44% of its historical range in the United States and 3% in Canada. Under current land use, approximately 39% of the species' historical range contains grassland habitats with very good potential for Swift Fox occupation and another 10% supports grasslands with characteristics that are less preferred (e.g., a sparse shrub component or taller stature) but still suitable. 
Additionally, land use on at least 25% of the historical range supports dryland farming, which can be suitable for Swift Fox occupation. In the United States, approximately 52% of highest quality habitats currently available are occupied by Swift Foxes.
Climate change, humidity, and mortality in the United States
Barreca, Alan I.
2014-01-01
This paper estimates the effects of humidity and temperature on mortality rates in the United States (c. 1973–2002) in order to provide an insight into the potential health impacts of climate change. I find that humidity, like temperature, is an important determinant of mortality. Coupled with Hadley CM3 climate-change predictions, I project that mortality rates are likely to change little on the aggregate for the United States. However, distributional impacts matter: mortality rates are likely to decline in cold and dry areas, but increase in hot and humid areas. Further, accounting for humidity has important implications for evaluating these distributional effects. PMID:25328254
The Burden of Pulmonary Nontuberculous Mycobacterial Disease in the United States
Strollo, Sara E.; Adjemian, Jennifer; Adjemian, Michael K.
2015-01-01
Rationale: State-specific case numbers and costs are critical for quantifying the burden of pulmonary nontuberculous mycobacterial disease in the United States. Objectives: To estimate and project national and state annual cases of nontuberculous mycobacterial disease and associated direct medical costs. Methods: Available direct cost estimates of nontuberculous mycobacterial disease medical encounters were applied to nontuberculous mycobacterial disease prevalence estimates derived from Medicare beneficiary data (2003–2007). Prevalence was adjusted for International Classification of Diseases, 9th Revision, undercoding and the inclusion of persons younger than 65 years of age. U.S. Census Bureau data identified 2010 and 2014 population counts and 2012 primary insurance-type distribution. Medical costs were reported in constant 2014 dollars. Projected 2014 estimates were adjusted for population growth and assumed a previously published 8% annual growth rate of nontuberculous mycobacterial disease prevalence. Measurements and Main Results: In 2010, we estimated 86,244 national cases, totaling $815 million, of which 87% were inpatient related ($709 million) and 13% were outpatient related ($106 million). Annual state estimates varied from 48 to 12,544 cases ($503,000–$111 million), with a median of 1,208 cases ($11.5 million). Oceanic coastline states and Gulf States comprised 70% of nontuberculous mycobacterial disease cases but 60% of the U.S. population. Medical encounters among individuals aged 65 years and older ($562 million) were twofold higher than those younger than 65 years of age ($253 million). Of all costs incurred, medications comprised 76% of nontuberculous mycobacterial disease expenditures. Projected 2014 estimates resulted in 181,037 national annual cases ($1.7 billion). Conclusions: For a relatively rare disease, the financial cost of nontuberculous mycobacterial disease is substantial, particularly among older adults. 
Better data on disease dynamics and more recent prevalence estimates will generate more robust estimates. PMID:26214350
Moore, H.J.; Boyce, J.M.; Hahn, D.A.
1980-01-01
Apparently, there are two types of size-frequency distributions of small lunar craters (~1-100 m across): (1) crater production distributions, for which the cumulative frequency of craters is an inverse function of diameter to a power near 2.8, and (2) steady-state distributions, for which the cumulative frequency of craters is inversely proportional to the square of their diameters. According to theory, cumulative frequencies of craters in each morphologic category within the steady state should also be an inverse function of the square of their diameters. Some data on frequency distribution of craters by morphologic types are approximately consistent with theory, whereas other data are inconsistent with theory. A flux of crater-producing objects can be inferred from size-frequency distributions of small craters on the flanks and ejecta of craters of known age. Crater frequency distributions and data on the craters Tycho, North Ray, Cone, and South Ray, when compared with the flux of objects measured by the Apollo Passive Seismometer, suggest that the flux of objects has been relatively constant over the last 100 m.y. (within 1/3 to 3 times the flux estimated for Tycho). Steady-state frequency distributions for craters in several morphologic categories formed the basis for estimating the relative ages of craters and surfaces in a system used during the Apollo landing site mapping program of the U.S. Geological Survey. The relative ages in this system are converted to model absolute ages that have a rather broad range of values: between about 1/3 and 3 times the assigned model absolute age. © 1980 D. Reidel Publishing Co.
Small area variation in diabetes prevalence in Puerto Rico.
Tierney, Edward F; Burrows, Nilka R; Barker, Lawrence E; Beckles, Gloria L; Boyle, James P; Cadwell, Betsy L; Kirtland, Karen A; Thompson, Theodore J
2013-06-01
To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. A Bayesian multilevel model was fitted to the combined 2008-2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%-18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. These 78 county estimates are higher on average and showed less variability (i.e., had a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%-18.2%).
Consensus-based distributed estimation in multi-agent systems with time delay
NASA Astrophysics Data System (ADS)
Abdelmawgoud, Ahmed
In recent years, research in the field of cooperative control of robot swarms, especially unmanned aerial vehicles (UAVs), has advanced due to the increase in UAV applications. The ability to track targets using UAVs has a wide range of applications, both civilian and military. For civilian applications, UAVs can perform tasks including, but not limited to: mapping an unknown area, weather forecasting, land surveys, and search and rescue missions. For military personnel, UAVs can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent involved in the network forms a weighted average of its own estimated value of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm that reaches consensus under time-delay constraints. The proposed algorithm's performance was observed in a scenario where a swarm of UAVs measures the location of a ground maneuvering target. We assume that each UAV computes its state prediction and shares it with its neighbors only. However, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios were also simulated to examine the effectiveness and performance in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach a consensus for each parameter contributing to the proposed algorithm.
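A toy version of the delayed consensus iteration described above: each agent averages its own state with neighbor states that are several steps old. The step size, ring topology, and uniform delay are illustrative assumptions, not the dissertation's algorithm.

```python
def delayed_consensus(x0, neighbors, delay=2, eps=0.05, steps=1000):
    """Iterate x_i <- x_i + eps * sum_j (x_j(t - delay) - x_i(t)).

    neighbors[i] lists the agents whose estimates agent i receives; each
    agent only ever uses (possibly delayed) information from its neighbors.
    """
    hist = [list(x0)]
    for _ in range(steps):
        cur = hist[-1]
        old = hist[max(0, len(hist) - 1 - delay)]  # delayed neighbor values
        hist.append([xi + eps * sum(old[j] - xi for j in neighbors[i])
                     for i, xi in enumerate(cur)])
    return hist[-1]

# four UAVs on a ring exchanging target-position estimates
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = delayed_consensus([0.0, 4.0, 8.0, 12.0], ring, delay=2)
```

Note that with delayed information the agreed value need not equal the exact initial average, and too large a step size can destabilize the iteration, which is why delay-aware designs like the one proposed above matter.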
NASA Technical Reports Server (NTRS)
1978-01-01
The economic benefits attributable to a variety of potential technological improvements in agricultural aviation are discussed. Topics covered include: the ag-air industry, the data base used to estimate the potential benefits and a summary of the potential benefits from technological improvements; ag-air activities in the United States; foreign ag-air activities; major ag-air aircraft in use and manufacturers' sales and distribution networks; and estimates of the benefits to the United States of proposed technological improvements to the aircraft and dispersal equipment. A bibliography of references is appended.
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of abruptly changing resistivity distributions inside an object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem, with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm, multiple models with different process noise covariances are incorporated to reduce modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
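The model-mixing idea can be sketched via the IMM model-probability update alone (the full filter also mixes state estimates and covariances); the transition probabilities and likelihoods below are illustrative values.

```python
def imm_model_probs(mu, trans, likelihoods):
    """One IMM model-probability update.

    mu: prior model probabilities; trans[i][j]: probability of switching
    from model i to model j; likelihoods[j]: measurement likelihood under
    model j (e.g., a filter tuned to a different process-noise covariance).
    """
    k = len(mu)
    # predicted model probabilities after the Markov switch
    c = [sum(trans[i][j] * mu[i] for i in range(k)) for j in range(k)]
    post = [likelihoods[j] * c[j] for j in range(k)]
    s = sum(post)
    return [p / s for p in post]

# two models (small vs. large process noise); the measurement fits
# the first model twice as well:
mu = imm_model_probs([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [2.0, 1.0])
```

Abrupt resistivity changes shift the likelihoods toward the large-process-noise model, so the blended estimate responds quickly without permanently inflating the noise assumption.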
Qualitative human body composition analysis assessed with bioelectrical impedance.
Talluri, T
1998-12-01
Body composition analysis generally aims at quantitative estimates of fat mass, which are inadequate for assessing nutritional states; these are, on the other hand, well defined by the intra/extracellular mass proportion (ECM/BCM). Direct measures performed with phase-sensitive bioelectrical impedance analyzers can be used to define the current distribution in normal and abnormal populations. The phase angle and reactance nomogram directly reflects the ECM/BCM pathway proportions, and body impedance analysis (BIA) is also validated to estimate the individual content of body cell mass (BCM). A new body cell mass index (BCMI), obtained by dividing the weight of BCM in kilograms by the body surface in square meters, is compared with the scatterplot distribution of phase angle and reactance values obtained from controls and patients, and proposed as a qualitative approach to identify abnormal ECM/BCM ratios and nutritional states.
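The index defined above is simple arithmetic; a sketch follows, with a Du Bois-type body-surface-area formula as one common choice (the abstract does not specify which BSA estimate was used).

```python
def du_bois_bsa(height_cm, weight_kg):
    """Du Bois body-surface-area estimate in m^2 (a common choice; the
    abstract does not specify which BSA formula was used)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def body_cell_mass_index(bcm_kg, bsa_m2):
    """BCMI = body cell mass (kg) / body surface area (m^2)."""
    return bcm_kg / bsa_m2

bcmi = body_cell_mass_index(30.0, 2.0)
```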
Computer Vision Tracking Using Particle Filters for 3D Position Estimation
2014-03-27
In the particle filter, the target distribution p(x) is represented by N weighted samples x_n with importance weights ω_n, where π denotes the proposal distribution and δ the Dirac delta function [2, p. 178]:

p(x) ≈ Σ_{n=1}^{N} ω_n δ(x − x_n)    (2.14)

ω_n ∝ p(x_n) / π(x_n)    (2.15)
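Equations (2.14)-(2.15) can be demonstrated with a self-normalized importance sampler: draw samples from the proposal π and reweight them by p/π. The particular target and proposal below are illustrative choices.

```python
import math
import random

random.seed(1)

def importance_mean(log_p, sample_proposal, proposal_pdf, n=20000):
    """Estimate E_p[x] with samples from a proposal distribution pi.

    Implements the weighted-particle representation of (2.14)-(2.15):
    p(x) is approximated by delta functions at the samples x_n with
    self-normalized weights proportional to p(x_n) / pi(x_n).
    """
    xs = [sample_proposal() for _ in range(n)]
    w = [math.exp(log_p(x)) / proposal_pdf(x) for x in xs]
    s = sum(w)
    return sum(wi * x for wi, x in zip(w, xs)) / s

# target: unnormalized Gaussian with mean 2; proposal: uniform on [-5, 9]
est = importance_mean(lambda x: -0.5 * (x - 2.0) ** 2,
                      lambda: random.uniform(-5.0, 9.0),
                      lambda x: 1.0 / 14.0)
```

Self-normalization means the target only needs to be known up to a constant, which is exactly why the proportionality in (2.15) suffices.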
Precipitation and Diabatic Heating Distributions from TRMM/GPM
NASA Astrophysics Data System (ADS)
Olson, W. S.; Grecu, M.; Wu, D.; Tao, W. K.; L'Ecuyer, T.; Jiang, X.
2016-12-01
The initial focus of our research effort was the development of a physically-based methodology for estimating 3D precipitation distributions from a combination of spaceborne radar and passive microwave radiometer observations. This estimation methodology was originally developed for applications to Global Precipitation Measurement (GPM) mission sensor data, but it has recently been adapted to Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and Microwave Imager observations. Precipitation distributions derived from the TRMM sensors are interpreted using cloud-system resolving model simulations to infer atmospheric latent+eddy heating (Q1-QR) distributions in the tropics and subtropics. Further, the estimates of Q1-QR are combined with estimates of radiative heating (QR), derived from TRMM Microwave Imager and Visible and Infrared Scanner data as well as environmental properties from NCEP reanalyses, to yield estimates of the large-scale total diabatic heating (Q1). A thirteen-year database of precipitation and diabatic heating is constructed using TRMM observations from 1998-2010 as part of NASA's Energy and Water cycle Study program. State-dependent errors in precipitation and heating products are evaluated by propagating the potential errors of a priori modeling assumptions through the estimation method framework. Knowledge of these errors is critical for determining the "closure" of global water and energy budgets. Applications of the precipitation/heating products to climate studies will be presented at the conference.
Anomaly Detection in Test Equipment via Sliding Mode Observers
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Drakunov, Sergey V.
2012-01-01
Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. 
The unique properties of sliding mode control allow not only steering of the model's internal states toward the states of the real-life system, but also identification of the disturbance or anomaly that may occur.
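The equivalent-value idea described above (once sliding starts, low-pass filtering the discontinuous injection term recovers the disturbance) can be illustrated with a minimal scalar sketch. The system, gains, and disturbance below are hypothetical and much simpler than the authors' distributed pipe-flow model; the full state is assumed measured here, whereas the paper uses only boundary measurements.

```python
import numpy as np

# Illustrative sliding-mode observer for a scalar system
#   x_dot = a*x + d(t),  y = x
# The observer injects a discontinuous term k*sign(y - x_hat);
# once sliding starts, low-pass filtering that term recovers d(t).
a, k, dt = -1.0, 5.0, 1e-3
T = int(10 / dt)
x, x_hat, d_filt = 0.0, 0.5, 0.0
tau = 0.05                          # filter time constant for the equivalent value
d_est_hist = []
for i in range(T):
    t = i * dt
    d = 2.0 * np.sin(t)             # unknown disturbance (the "anomaly")
    v = k * np.sign(x - x_hat)      # discontinuous injection (y = x here)
    x += dt * (a * x + d)
    x_hat += dt * (a * x_hat + v)
    d_filt += dt / tau * (v - d_filt)   # filtered equivalent value ~ d(t)
    d_est_hist.append((d, d_filt))
```

Because k exceeds the disturbance bound, sliding is reached quickly; thereafter the filtered injection tracks the disturbance despite the model knowing nothing about it.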
Entanglement-Enhanced Phase Estimation without Prior Phase Information
NASA Astrophysics Data System (ADS)
Colangelo, G.; Martin Ciurana, F.; Puentes, G.; Mitchell, M. W.; Sewell, R. J.
2017-06-01
We study the generation of planar quantum squeezed (PQS) states by quantum nondemolition (QND) measurement of an ensemble of
NASA Astrophysics Data System (ADS)
Koshinchanov, Georgy; Dimitrov, Dobri
2008-11-01
The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with a given return period (e.g. 20, 100 years, etc.). The traditional approach to evaluating these precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions change, so the precipitation intensity estimates need regular updates; 2. Since the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; the same method, but with separate modeling of the probability distribution for the middle and high probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; a method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; and a method considering only the intensive rainfalls (if any) during the day with the maximal annual daily precipitation total for a given year. Conclusions are drawn on the relevance and adequacy of the applied methods.
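The distribution-fitting step described above can be sketched with a common (though not necessarily the Bulgarian methodology's) choice: fit a Gumbel distribution to annual-maximum intensities and read off return-period quantiles. The data, duration, and parameters below are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# Synthetic annual-maximum 10-min rainfall intensities (mm/min);
# the true distribution and parameters here are invented.
annual_max = gumbel_r.rvs(loc=1.0, scale=0.3, size=60, random_state=rng)

loc, scale = gumbel_r.fit(annual_max)
# T-year return level: the intensity exceeded on average once in T years
for T_ret in (20, 100):
    level = gumbel_r.ppf(1 - 1 / T_ret, loc=loc, scale=scale)
    print(f"{T_ret}-year 10-min intensity: {level:.2f} mm/min")
```

The paper's point about the tails applies directly: the 100-year quantile sits far out in the fitted distribution, so small changes in how the upper tail is modeled move the regulatory estimate substantially.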
A Global Carbon Assimilation System using a modified EnKF assimilation method
NASA Astrophysics Data System (ADS)
Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.
2014-10-01
A Global Carbon Assimilation System based on the ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate surface carbon fluxes and the atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including the use of analysis states to iteratively estimate ensemble forecast errors, and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results show that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.
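The core update that GCAS-EK builds on, a stochastic EnKF analysis step, can be sketched minimally as follows. The state dimension, ensemble size, and observation operator are illustrative, and the paper's iterative forecast-error estimation and inflation-factor estimation are omitted.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step (an illustrative sketch, not GCAS-EK).
    X: (n, N) forecast ensemble; y: (m,) observations;
    H: (m, n) observation operator; R: (m, m) obs-error covariance."""
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                          # ensemble anomalies
    Pf = A @ A.T / (N - 1)              # sample forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturb observations so the analysis spread is statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(1)
# Toy 3-state ensemble (e.g., fluxes); only the first state is observed
X = rng.normal(0.0, 1.0, size=(3, 50)) + np.array([[5.0], [2.0], [-1.0]])
H = np.array([[1.0, 0.0, 0.0]])
Xa = enkf_analysis(X, np.array([4.0]), H, np.array([[0.1]]), rng)
```

The analysis ensemble mean moves toward the observation and its spread shrinks, which is exactly the bias-and-uncertainty reduction the abstract reports at much larger scale.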
NASA Technical Reports Server (NTRS)
Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.
1993-01-01
The following monthly mean global distributions for 1990 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States (US) Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean, sampling distribution, and standard deviation values are displayed. Annual mean distributions are displayed.
NASA Technical Reports Server (NTRS)
Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.
1993-01-01
The following monthly mean global distributions for 1991 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean value, sampling distribution, and standard deviation value are displayed. Annual mean distributions are displayed.
Han, Fang; Liu, Han
2016-01-01
The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix for estimating the high-dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition. PMID:28337068
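The transformed Kendall's tau estimator referenced above applies the sine transform sin(pi*tau/2) entrywise to recover the latent Pearson correlation; a small sketch follows, with an illustrative simulation (the marginal transforms and parameters are invented).

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation(X):
    """Rank-based estimate of the latent Pearson correlation matrix via
    the sine transform of Kendall's tau (in the spirit of Han & Liu)."""
    d = X.shape[1]
    R = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(np.pi / 2 * tau)
    return R

rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
# Unspecified monotone marginal transforms, as in the transelliptical family
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])
print(latent_correlation(X)[0, 1])
```

Because Kendall's tau is invariant under monotone marginal transformations, the estimate recovers the latent correlation (0.6 here) even though Pearson's sample correlation of X would be badly distorted.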
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
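Setting aside the panel and mixture structure, the zero-truncation itself amounts to conditioning the Poisson likelihood on a positive count. A minimal single-parameter sketch (the trip counts and names below are hypothetical, and the paper's inverse-Gaussian heterogeneity is omitted):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def ztp_negloglik(lam, y):
    """Negative log-likelihood of the zero-truncated Poisson:
    P(Y = y | Y > 0) = lam**y * exp(-lam) / (y! * (1 - exp(-lam)))."""
    y = np.asarray(y)
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1)
                   - np.log1p(-np.exp(-lam)))

# Illustrative trip counts from site visitors only (all counts >= 1)
y = [1, 1, 2, 1, 3, 2, 1, 4, 1, 2]
res = minimize_scalar(ztp_negloglik, bounds=(1e-6, 20), args=(y,),
                      method="bounded")
lam_hat = res.x
```

The MLE satisfies lam / (1 - exp(-lam)) = mean(y), so the fitted rate lies below the sample mean, correcting for the missing zeros that a naive Poisson fit would ignore.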
NASA Astrophysics Data System (ADS)
Ryzhkov, V.; Morozov, I.
2018-01-01
The paper presents calculated parameters of the combustion products in the tract of a low-thrust rocket engine with thrust P ∼ 100 N. The article contains the following data: streamlines, the distribution of total temperature in the longitudinal section of the engine chamber, the static temperature distribution in the cross section of the engine chamber, the velocity distribution of the combustion products in the outlet section of the engine nozzle, and the static temperature near the inner wall of the engine. The presented parameters allow estimation of the efficiency of the mixture-formation processes and the flow of combustion products in the engine chamber, as well as the thermal state of the structure.
NASA Astrophysics Data System (ADS)
Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.
2016-12-01
In groundwater studies, hydraulic tomography (HT) based on field pumping tests is widely used to estimate the heterogeneous spatial distribution of aquifer hydraulic properties, and many studies have confirmed that most field aquifers exhibit heterogeneous parameter fields. Huang et al. [2011] used non-redundant verification analysis, in which the pumping wells are relocated while the observation wells remain fixed, in both inverse and forward analyses to demonstrate the feasibility of estimating the heterogeneous hydraulic-property distribution of a field-site aquifer with a steady-state model. The existing literature, however, covers only this steady-state, non-redundant case: the various combinations of fixed or relocated pumping wells with fixed observation wells (redundant verification) or relocated observation wells (non-redundant verification) have not yet been explored for their influence on the hydraulic tomography method. In this study, both redundant and non-redundant verification methods are carried out for forward analysis under transient conditions. The methods are applied to an actual case at the NYUST campus site; the analysis results confirm the effectiveness of hydraulic tomography and the feasibility of both inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward
Morphological study on the prediction of the site of surface slides
Hiromasa Hiura
1991-01-01
The annual continual occurrence of surface slides in the basin was estimated by modifying the estimation formula of Yoshimatsu. The Weibull distribution function proved useful for representing the state and the transition of surface slides in the basin. The three parameters of the Weibull function are recognized to be linear functions of the area ratio a/A. The...
Regional distribution of forest height and biomass from multisensor data fusion
Yifan Yu; Sassan Saatch; Linda S. Heath; Elizabeth LaPoint; Ranga Myneni; Yuri Knyazikhin
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM...
Ergon, T.; Yoccoz, N.G.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In many species, age or time of maturation and survival costs of reproduction may vary substantially within and among populations. We present a capture-mark-recapture model to estimate the latent individual trait distribution of time of maturation (or other irreversible transitions) as well as survival differences associated with the two states (representing costs of reproduction). Maturation can take place at any point in continuous time, and mortality hazard rates for each reproductive state may vary according to continuous functions over time. Although we explicitly model individual heterogeneity in age/time of maturation, we make the simplifying assumption that death hazard rates do not vary among individuals within groups of animals. However, the estimates of the maturation distribution are fairly robust against individual heterogeneity in survival as long as there is no individual level correlation between mortality hazards and latent time of maturation. We apply the model to biweekly capture-recapture data of overwintering field voles (Microtus agrestis) in cyclically fluctuating populations to estimate time of maturation and survival costs of reproduction. Results show that onset of seasonal reproduction is particularly late and survival costs of reproduction are particularly large in declining populations.
Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz; Mark W. Schwartz
2005-01-01
We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 x CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...
Nathenson, Manuel
1984-01-01
The amount of thermal energy in high-temperature geothermal systems (>150 °C) in the United States has been calculated by estimating the temperature, area, and thickness of each identified system. These data, along with a general model for recoverability of geothermal energy and a calculation that takes account of the conversion of thermal energy to electricity, yield a resource estimate of 23,000 MWe for 30 years. The undiscovered component was estimated based on multipliers of the identified resource as either 72,000 or 127,000 MWe for 30 years depending on the model chosen for the distribution of undiscovered energy as a function of temperature.
Heavy ion charge-state distribution effects on energy loss in plasmas.
Barriga-Carrasco, Manuel D
2013-10-01
According to dielectric formalism, the energy loss of the heavy ion depends on its velocity and its charge density. Also, it depends on the target through its dielectric function; here the random phase approximation is used because it correctly describes fully ionized plasmas at any degeneracy. On the other hand, the Brandt-Kitagawa (BK) model is employed to depict the projectile charge space distribution, and the stripping criterion of Kreussler et al. is used to determine its mean charge state [Q]. This latter criterion implies that the mean charge state depends on the electron density and temperature of the plasma. Also, the initial charge state of the heavy ion is crucial for calculating [Q] inside the plasma. Comparing our models and estimations with experimental data, a very good agreement is found. It is noticed that the energy loss in plasmas is higher than that in the same cold gas cases, confirming the well-known enhanced plasma stopping (EPS). In this case, EPS is only due to the increase in projectile effective charge Q(eff), which is obtained as the ratio between the energy loss of each heavy ion and that of the proton in the same plasma conditions. The ratio between the effective charges in plasmas and in cold gases is higher than 1, but it is not as high as thought in the past. Finally, another significant issue is that the calculated effective charge in plasmas Q(eff) is greater than the mean charge state [Q], which is due to the incorporation of the BK charge distribution. When estimations are performed without this distribution, they do not fit well with experimental data.
NASA Astrophysics Data System (ADS)
Brown-Steiner, Benjamin; Hess, Peter; Chen, Jialie; Donaghy, Kieran
2016-03-01
We have developed a framework to estimate BC emissions from heavy-duty diesel trucks and trains engaged in transporting freight in the Midwestern and Northeastern United States (MNUS) from 1977 to 2007. We first expand on a previous development of a regional econometric input-output model (REIM) that has been used to estimate commodity flows between 13 states in the MNUS (plus the rest of the US) and 13 industrial sectors. These commodity flow data are then distributed over the MNUS using a stylized link-and-node network, which creates great-circle transportation links between nodes located, in each state, at the county with the largest population. Freight flows are converted to BC transportation emissions and the resulting BC emissions are compared to the MACCity BC emissions inventory. We find that from 1977 to 2007 potential emission growth from the continued increase in freight tonnage in the MNUS is counteracted by decreases in the BC emission factor of heavy-duty diesel trucks, which results in an overall decrease of BC emissions by 2007. One sector (fabricated metal product manufacturing) has dominated the BC transportation emissions throughout 1977 to 2007, with transportation emissions remaining relatively unchanged from 1977 to 1997 and then decreasing out to 2007. The BC transportation emissions are concentrated in and around the urban centers, which serve as transportation and production nodes for industrial manufacturing. Our BC emissions are distributed along stylized transportation corridors that are not well represented in emissions inventories that largely distribute emissions via a population proxy. The framework established in this study can be used to estimate future BC transportation emissions under a set of stylized economic, technological, and regulatory scenarios.
Ludington, S.D.; Cox, D.P.; McCammon, R.B.
1996-01-01
For this assessment, the conterminous United States was divided into 12 regions: Adirondack Mountains, Central and Southern Rocky Mountains, Colorado Plateau, East Central, Great Basin, Great Plains, Lake Superior, Northern Appalachians, Northern Rocky Mountains, Pacific Coast, Southern Appalachians, and Southern Basin and Range. The assessment, which was conducted by regional assessment teams of scientists from the USGS, was based on the concepts of permissive tracts and deposit models. Permissive tracts are discrete areas of the United States for which estimates of numbers of undiscovered deposits of a particular deposit type were made. A permissive tract is defined by its geographic boundaries such that the probability of deposits of the delineated type occurring outside the boundary is negligible. Deposit models, which are based on a compilation of worldwide literature and on observation, are sets of data in a convenient form that describe a group of deposits having similar characteristics and that contain information on the common geologic attributes of the deposits and the environments in which they are found. Within each region, the assessment teams delineated permissive tracts for those deposit models that were judged to be appropriate and, when the amount of information warranted, estimated the number of undiscovered deposits. A total of 46 deposit models were used to assess 236 separate permissive tracts. Estimates of undiscovered deposits were limited to a depth of 1 km beneath the surface of the Earth. The estimates of the number of undiscovered deposits of gold, silver, copper, lead, and zinc were expressed in the form of a probability distribution. Commonly, the number of undiscovered deposits was estimated at the 90th, 50th, and 10th percentiles.
A Monte Carlo simulation computer program was used to combine the probability distribution of the number of undiscovered deposits with the grade and tonnage data sets associated with each deposit model to obtain the probability distribution for undiscovered metal.
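The Monte Carlo combination described above can be sketched as follows. The deposit-count probabilities, grade-tonnage parameters, and draw counts below are invented for illustration; they are not the USGS program's elicited values, and the percentile elicitation is reduced to a simple discrete proxy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 20_000
# Hypothetical discrete proxy for the elicited 90th/50th/10th-percentile
# estimates of undiscovered deposit counts for one permissive tract
counts = rng.choice([1, 3, 7], p=[0.45, 0.35, 0.2], size=n_draws)
totals = np.empty(n_draws)
for i, n in enumerate(counts):
    tonnage = rng.lognormal(mean=15.0, sigma=1.0, size=n)   # tonnes of ore
    grade = rng.lognormal(mean=-5.0, sigma=0.5, size=n)     # metal fraction
    totals[i] = np.sum(tonnage * grade)                     # tonnes of metal
print(np.percentile(totals, [10, 50, 90]))
```

Each draw samples a deposit count, then samples a grade and tonnage for every deposit from the grade-tonnage model, yielding the probability distribution of undiscovered metal for the tract.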
Federated States of Micronesia's forest resources, 2006
Joseph A. Donnegan; Sarah L. Butler; Olaf Kuegler; Bruce A. Hiserote
2011-01-01
The Forest Inventory and Analysis program collected, analyzed, and summarized field data on 73 forested field plots on the islands of Kosrae, Chuuk, Pohnpei, and Yap in the Federated States of Micronesia (FSM). Estimates of forest area, tree stem volume and biomass, the numbers of trees, tree damages, and the distribution of tree sizes were summarized for this...
Introduction: Effect estimates of city-specific PM2.5-mortality associations across the United States (US) exhibit a substantial amount of spatial heterogeneity. Some of this heterogeneity may be due to the mass distribution of PM; areas where PM2.5 is likely to be dominated by ...
Method for estimating pesticide use for county areas of the conterminous United States
Thelin, Gail P.; Gianessi, Leonard P.
2000-01-01
Information on the amount and distribution of pesticide compounds used throughout the United States is essential to evaluate the relation between water quality and pesticide use. This information is the basis of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program studies of the effects of pesticides on water quality in 57 major hydrologic systems, or study units, located throughout the conterminous United States. To support these studies, a method was devised to estimate county pesticide use for the conterminous United States by combining (1) state-level information on pesticide use rates available from the National Center for Food and Agricultural Policy, and (2) county-level information on harvested crop acreage from the Census of Agriculture. The average annual pesticide use, the total amount of pesticides applied (in pounds), and the corresponding area treated (in acres) were compiled for the 208 pesticide compounds that are applied to crops in the conterminous United States. Pesticide use was ranked by compound and crop on the basis of the amount of each compound applied to 86 selected crops. Tabular summaries of pesticide use for NAWQA study units and for the Nation were prepared, along with maps that show the distribution of selected pesticides to agricultural land.
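The core of the method, multiplying a state-level application rate per crop by county-level harvested acreage and summing over crops, can be sketched as below. The compound, rates, counties, and acreages are hypothetical, not values from the actual data sets.

```python
# Sketch of the county-use estimate: state-level application rate per crop
# multiplied by county harvested acreage (all names/values hypothetical).
state_rate = {  # lb of active ingredient applied per harvested acre
    ("atrazine", "corn"): 1.1,
    ("atrazine", "sorghum"): 0.9,
}
county_acres = {  # harvested acres by county and crop
    ("County A", "corn"): 50_000,
    ("County A", "sorghum"): 8_000,
    ("County B", "corn"): 20_000,
}

county_use = {}
for (county, crop), acres in county_acres.items():
    rate = state_rate.get(("atrazine", crop), 0.0)
    county_use[county] = county_use.get(county, 0.0) + rate * acres

print(county_use)  # pounds of atrazine applied per county
```

This proportional-allocation design assumes that within a state, a crop is treated at the same rate everywhere; the county estimates therefore inherit any within-state variation in actual practice as error.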
Davies, Holly; Delistraty, Damon
2016-02-01
Polychlorinated biphenyls (PCBs) are ubiquitously distributed in the environment and produce multiple adverse effects in humans and wildlife. As a result, the purpose of our study was to characterize PCB sources in anthropogenic materials and releases to the environment in Washington State (USA) in order to formulate recommendations to reduce PCB exposures. Methods included review of relevant publications (e.g., open literature, industry studies and reports, federal and state government databases), scaling of PCB sources from national or county estimates to state estimates, and communication with industry associations and private and public utilities. Recognizing high associated uncertainty due to incomplete data, we strived to provide central tendency estimates for PCB sources. In terms of mass (high to low), PCB sources include lamp ballasts, caulk, small capacitors, large capacitors, and transformers. For perspective, these sources (200,000-500,000 kg) overwhelm PCBs estimated to reside in the Puget Sound ecosystem (1500 kg). Annual releases of PCBs to the environment (high to low) are attributed to lamp ballasts (400-1500 kg), inadvertent generation by industrial processes (900 kg), caulk (160 kg), small capacitors (3-150 kg), large capacitors (10-80 kg), pigments and dyes (0.02-31 kg), and transformers (<2 kg). Recommendations to characterize the extent of PCB distribution and decrease exposures include assessment of PCBs in buildings (e.g., schools) and replacement of these materials, development of Best Management Practices (BMPs) to contain PCBs, reduction of inadvertent generation of PCBs in consumer products, expansion of environmental monitoring and public education, and research to identify specific PCB congener profiles in human tissues.
Per capita alcohol consumption and suicide mortality in a panel of US states from 1950 to 2002
Kerr, William C.; Subbaraman, Meenakshi; Ye, Yu
2011-01-01
Introduction and Aims The relationship between per capita alcohol consumption and suicide rates has been found to vary in significance and magnitude across countries. This study utilizes a panel of time-series measures from the US states to estimate the effects of changes in current and lagged alcohol sales on suicide mortality risk. Design and Methods Generalized least squares estimation utilized 53 years of data from 48 US states or state groups to estimate relationships between total and beverage-specific alcohol consumption measures and age-standardized suicide mortality rates in first-differenced semi-logged models. Results An additional liter of ethanol from total alcohol sales was estimated to increase suicide rates by 2.3% in models utilizing a distributed lag specification while no effect was found in models including only current alcohol consumption. A similar result is found for men, while for women both current and distributed lag measures were found to be significantly related to suicide rates with an effect of about 3.2% per liter from current and 5.8% per liter from the lagged measure. Beverage-specific models indicate that spirits is most closely linked with suicide risk for women while beer and wine are for men. Unemployment rates are consistently positively related to suicide rates. Discussion and Conclusions Results suggest that chronic effects, potentially related to alcohol abuse and dependence, are the main source of alcohol’s impact on suicide rates in the US for men and are responsible for about half of the effect for women. PMID:21896069
Isonymy structure of four Venezuelan states.
Rodríguez-Larralde, A; Barrai, I; Alfonzo, J C
1993-01-01
The isonymy structure of four Venezuelan states (Falcón, Mérida, Nueva Esparta, and Yaracuy) was studied using the surnames of the Venezuelan register of electors updated in 1984. The surname distributions of 155 counties were obtained and, for each county, estimates of consanguinity due to random isonymy and Fisher's alpha were calculated. It was shown that for large sample sizes the inverse of Fisher's alpha is identical to the unbiased estimate of within-population random isonymy. A three-dimensional isometric surface plot was obtained for each state, based on the counties' random isonymy estimates. The highest estimates of random consanguinity were found in the states of Nueva Esparta and Mérida, while the lowest were found in Yaracuy. Other microdifferentiation indicators from the same data gave similar results, and an interpretation was attempted, based on the particular economic and geographic characteristics of each state. Four different genetic distances between all possible pairs of counties were calculated within states; geographic distance shows the highest correlations with random isonymy and Euclidean distance, with the exception of the state of Nueva Esparta, where there is no correlation between geographic distance and random isonymy. It was possible to group counties in clusters, from dendrograms based on Euclidean distance. Isonymy clustering was also consistent with socioeconomic and geographic characteristics of the counties.
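The unbiased within-population random isonymy estimate referenced in the abstract, and its large-sample identity with the inverse of Fisher's alpha, can be sketched as follows; the surname sample is invented for illustration.

```python
from collections import Counter

def random_isonymy(surnames):
    """Unbiased within-population random isonymy:
    I = sum_i n_i * (n_i - 1) / (N * (N - 1)),
    where n_i is the count of surname i and N the sample size.
    For large samples, Fisher's alpha is approximately 1 / I."""
    counts = Counter(surnames)
    N = sum(counts.values())
    return sum(n * (n - 1) for n in counts.values()) / (N * (N - 1))

names = ["Perez"] * 4 + ["Gomez"] * 3 + ["Rojas"] * 2 + ["Mata"]
I = random_isonymy(names)
print(I, 1 / I)  # random isonymy and the corresponding Fisher's alpha
```

The estimate is the probability that two individuals drawn without replacement share a surname, which is why more concentrated surname distributions (fewer, larger families) yield higher random consanguinity.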
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
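The regime the authors flag, measurement error larger than biological stochasticity, is easy to reproduce in a few lines. The model and variances below are illustrative, and the Kalman filter is run with the true parameters; the hard problem the paper studies, estimating q and r themselves from the observations, is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simple linear Gaussian SSM: x_t = x_{t-1} + w_t,  y_t = x_t + v_t,
# with measurement error (r) larger than process noise (q), the regime
# the abstract identifies as problematic for parameter estimation.
q, r, T = 0.1, 1.0, 200
x = np.cumsum(rng.normal(0, np.sqrt(q), T))   # latent biological process
y = x + rng.normal(0, np.sqrt(r), T)          # noisy observations

# Kalman filter with the true variances
xhat, P, out = 0.0, 10.0, []
for obs in y:
    P += q                        # predict step: add process noise
    K = P / (P + r)               # Kalman gain
    xhat += K * (obs - xhat)      # update with the observation
    P *= (1 - K)
    out.append(xhat)
out = np.asarray(out)
print(np.mean((out - x) ** 2), np.mean((y - x) ** 2))
```

With known parameters the filter recovers the states well; the paper's warning is that when q and r must be estimated jointly, their likelihood surface in this regime is nearly flat, so the same machinery can silently return biased parameters and misleading states.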
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.
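The agreement-type step at the heart of the architecture can be illustrated in its simplest form: iterative neighbor averaging of local estimates over a local exchange network. The ring topology, weights, and values below are hypothetical and stand in for the paper's communicating Kalman filters.

```python
import numpy as np

# Minimal consensus sketch: each spacecraft holds a local estimate of a
# shared scalar state; repeated averaging with ring neighbors drives all
# estimates to agreement using only local communication.
W = np.array([  # symmetric, row-stochastic averaging weights, 4-node ring
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
est = np.array([9.8, 10.4, 10.1, 9.9])   # local estimates of the state
for _ in range(50):
    est = W @ est                         # exchange with neighbors only
print(est)  # all entries converge to the network-wide average
```

Because W is doubly stochastic, the iteration preserves the average while damping disagreement, so every node ends up with the value a centralized averager would compute, without any all-to-all communication.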
Lee, Joyce M; Davis, Matthew M; Menon, Ram K; Freed, Gary L
2008-03-01
To determine the geographic distribution of childhood diabetes and obesity relative to the supply of US pediatric endocrinologists. Estimation of observed and "index" ratios of children with diabetes (by region and division) and obesity (body mass index >/=95th % for age and sex) (by region and state) to board-certified pediatric endocrinologists. At the national level, the ratio of children with diabetes to pediatric endocrinologists is 290:1, and the ratio of obese children to pediatric endocrinologists is 17,741:1. Ratios of children with diabetes to pediatric endocrinologists in the Midwest (370:1), South (335:1), and West (367:1) are twice as high as in the Northeast (144:1). Across states, there is up to a 19-fold difference in the observed ratios of obese children to pediatric endocrinologists. Under conditions of equitably distributed endocrinologist supply, variation across states would be mitigated considerably. The distribution of children with diabetes and obesity does not parallel the distribution of pediatric endocrinologists in the United States, due largely to geographic disparities in endocrinologist supply. Given the large burden of obese children to endocrinologists, multidisciplinary models of care delivery are essential for the US health care system to address the needs of children with diabetes and obesity.
Steady state whistler turbulence and stability of thermal barriers in tandem mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litwin, C.; Sudan, R.N.
The effect of the whistler turbulence on anisotropic electrons in a thermal barrier is examined. The electron distribution function is derived self-consistently by solving the steady state quasilinear diffusion equation. Saturated amplitudes are computed using the resonance broadening theory or convective stabilization. Estimated power levels necessary for sustaining the steady state of a strongly anisotropic electron population are found to exceed by orders of magnitude the estimates based on Fokker-Planck calculations for the range of parameters of tandem mirror (TMX-U and MFTF-B) experiments (Nucl. Fusion 25, 1205 (1985)). Upper limits on the allowed degree of anisotropy for existing power densities are calculated.
NASA Astrophysics Data System (ADS)
Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.
2018-03-01
A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.
Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina
2016-10-21
In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
NASA Astrophysics Data System (ADS)
Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.
2017-03-01
We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
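The likelihood-based interval for a T-year flood can be sketched with a profile likelihood. The sketch below is a simplified stand-in for the paper's method: it uses the Gumbel distribution, synthetic data in place of gauge records, and brute-force grids instead of a constrained Nelder-Mead search; the parameter values and grid ranges are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, beta, T = 100.0, 20.0, 50              # hypothetical "true" parameters
x = mu - beta * np.log(-np.log(rng.uniform(size=60)))   # synthetic annual maxima

def loglik(mu_, beta_):
    z = (x - mu_) / beta_
    return np.sum(-np.log(beta_) - z - np.exp(-z))      # Gumbel log-likelihood

y_T = -np.log(-np.log(1.0 - 1.0 / T))      # reduced variate for return period T

# Profile likelihood: for a candidate quantile q, the location parameter is
# determined by (q, beta), so only the scale is maximized over.
betas = np.linspace(5.0, 60.0, 200)
def profile(q):
    return max(loglik(q - b * y_T, b) for b in betas)

qs = np.linspace(120.0, 300.0, 200)
prof = np.array([profile(q) for q in qs])
cut = prof.max() - 0.5 * 3.841             # chi-square(1) threshold, 95% level
inside = qs[prof >= cut]
print(f"95% interval for the {T}-year flood: [{inside.min():.1f}, {inside.max():.1f}]")
```

The interval is the set of candidate quantiles whose profile log-likelihood stays within half the chi-square critical value of the maximum, which is the same construction the confidence limits in the paper rely on.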
Mcdonald, P. Sean; Essington, Timothy E.; Davis, Jonathan P.; Galloway, Aaron W.E.; Stevick, Bethany C.; Jensen, Gregory C.; VanBlaricom, Glenn R.; Armstrong, David A.
2015-01-01
Marine bivalves are important ecosystem constituents and frequently support valuable fisheries. In many nearshore areas, human disturbance—including declining habitat and water quality—can affect the distribution and abundance of bivalve populations, and complicate ecosystem and fishery management assessments. Infaunal bivalves, in particular, are frequently cryptic and difficult to detect; thus, assessing potential impacts on their populations requires suitable, scalable methods for estimating abundance and distribution. In this study, population size of a common benthic bivalve (the geoduck Panopea generosa) is estimated with a Bayesian habitat-based model fit to scuba and tethered camera data in Hood Canal, a fjord basin in Washington state. Densities declined more than two orders of magnitude along a north—south gradient, concomitant with patterns of deepwater dissolved oxygen, and intensity and duration of seasonal hypoxia. Across the basin, geoducks were most abundant in loose, unconsolidated, sand substrate. The current study demonstrates the utility of using scuba, tethered video, and habitat models to estimate the abundance and distribution of a large infaunal bivalve at a regional (385-km²) scale.
Modeling habitat dynamics accounting for possible misclassification
Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.
2012-01-01
Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more difficult and more challenging, with error in classification being confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
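The bias described above can be demonstrated with a toy simulation. The sketch below is not the paper's model: the two-state transition matrix and confusion (misclassification) matrix are invented, and it only shows that naive estimation from observed states inflates turnover, the problem the state-uncertainty model is built to correct.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # true habitat transition probabilities
C = np.array([[0.85, 0.15],
              [0.15, 0.85]])               # P(observed class | true class)

n = 100000
s0 = rng.integers(0, 2, n)                 # true states at time 1
s1 = (rng.random(n) > P[s0, 0]).astype(int)   # true states at time 2
o0 = (rng.random(n) > C[s0, 0]).astype(int)   # observed (misclassified) states
o1 = (rng.random(n) > C[s1, 0]).astype(int)

naive = np.array([[np.mean(o1[o0 == i] == j) for j in (0, 1)]
                  for i in (0, 1)])        # transitions estimated from observations
print(np.round(naive, 3))                  # off-diagonals (turnover) are inflated
```

Even with a large sample, the naive off-diagonal entries sit well above the true transition probabilities, matching the paper's observation that misclassification artificially increases apparent habitat turnover.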
Revised spatially distributed global livestock emissions
NASA Astrophysics Data System (ADS)
Asrar, G.; Wolf, J.; West, T. O.
2015-12-01
Livestock play an important role in agricultural carbon cycling through consumption of biomass and emissions of methane. Quantification and spatial distribution of methane and carbon dioxide produced by livestock is needed to develop bottom-up estimates for carbon monitoring. These estimates serve as stand-alone international emissions estimates, as input to global emissions modeling, and as comparisons or constraints to flux estimates from atmospheric inversion models. Recent results for the US suggest that the 2006 IPCC default coefficients may underestimate livestock methane emissions. In this project, revised coefficients were calculated for cattle and swine in all global regions, based on reported changes in body mass, quality and quantity of feed, milk production, and management of living animals and manure for these regions. New estimates of livestock methane and carbon dioxide emissions were calculated using the revised coefficients and global livestock population data. Spatial distribution of population data and associated fluxes was conducted using the MODIS Land Cover Type 5, version 5.1 (i.e. MCD12Q1 data product), and a previously published downscaling algorithm for reconciling inventory and satellite-based land cover data at 0.05 degree resolution. Preliminary results for 2013 indicate greater emissions than those calculated using the IPCC 2006 coefficients. Global total enteric fermentation methane increased by 6%, while manure management methane increased by 38%, with variation among species and regions resulting in improved spatial distributions of livestock emissions. These new estimates of total livestock methane are comparable to other recently reported studies for the entire US and the State of California. These new regional/global estimates will improve the ability to reconcile top-down and bottom-up estimates of methane production as well as provide updated global estimates for use in development and evaluation of Earth system models.
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...
2017-08-17
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control-action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system to support the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
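The core numerical step of any linear state estimator is a weighted least-squares solve. The sketch below shows only that step on a made-up three-state linear model; it is not the paper's DLSE architecture, and the measurement matrix and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([1.0, 0.98, 1.02])       # hypothetical bus states
H = np.array([[1.0, 0.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])            # linear measurement model
sigma = np.array([0.01, 0.02, 0.02, 0.01]) # per-meter noise std
z = H @ x_true + rng.normal(0.0, sigma)    # noisy measurements
W = np.diag(1.0 / sigma**2)

# Weighted least squares: x_hat = (H^T W H)^-1 H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(np.round(x_hat, 3))
```

Because the model is linear, the estimate comes from a single linear solve with no iteration, which is what makes linear state estimation fast enough to feed a remedial action scheme.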
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
NASA Technical Reports Server (NTRS)
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
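The key idea of retrospectively updating weights without adapting the ensemble members can be illustrated with a toy linear system. The sketch below is not the paper's algorithm (which targets the double-well and Lorenz systems): the dynamics, noise levels, and ensemble size are arbitrary stand-ins chosen to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x0 = rng.normal(0.0, 2.0, n)               # ensemble of initial states (prior)
truth0, R = 1.0, 0.3**2                    # true initial state, obs noise var

w = np.full(n, 1.0 / n)                    # one weight per ensemble member
x, truth = x0.copy(), truth0
for _ in range(10):
    x, truth = 0.9 * x, 0.9 * truth        # propagate members and truth
    y = truth + rng.normal(0.0, 0.3)       # noisy observation
    w *= np.exp(-0.5 * (y - x) ** 2 / R)   # retrospective weight update
    w /= w.sum()

prior_mean = x0.mean()                     # estimate of the initial state, no data
smoothed0 = np.sum(w * x0)                 # smoothed initial state, all data
print(prior_mean, smoothed0)
```

The members themselves are never rewritten; only the weights change as observations arrive, so the smoothed estimate of any past state is just a reweighted average, which is what makes the scheme suitable for a streaming operational mode.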
Electric Power Consumption Coefficients for U.S. Industries: Regional Estimation and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo
Economic activity relies on electric power provided by electrical generation, transmission, and distribution systems. This paper presents a method developed at Los Alamos National Laboratory to estimate electric power consumption by different industries in the United States. Results are validated through comparisons with existing literature and benchmarking data sources. We also discuss the limitations and applications of the presented method, such as estimating indirect electric power consumption and assessing the economic impact of power outages based on input-output economic models.
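The indirect-consumption idea maps naturally onto a Leontief input-output calculation. The two-industry matrix, demand vector, and consumption coefficients below are made up for illustration; they are not the paper's data or method, only a minimal sketch of how indirect electric power use can be derived from an input-output model.

```python
import numpy as np

A = np.array([[0.1, 0.3],
              [0.2, 0.1]])                 # inter-industry technical coefficients
demand = np.array([100.0, 50.0])           # final demand by industry ($)
kwh_per_dollar = np.array([0.5, 1.2])      # electric consumption coefficients

total_output = np.linalg.solve(np.eye(2) - A, demand)  # Leontief inverse applied
direct = kwh_per_dollar * demand           # power used to meet demand directly
total = kwh_per_dollar * total_output      # direct + indirect power
print(total - direct)                      # indirect electric power consumption
```

The gap between `total` and `direct` is the electricity embodied in intermediate inputs, which is the quantity needed when assessing the economy-wide impact of a power outage.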
2013-09-01
…model and the BRDF in the SRP model are not consistent with each other, then the resulting estimated albedo-areas and mass are inaccurate and biased… This work studies the use of physically consistent BRDF-SRP models for mass estimation. Simulation studies are used to provide an indication of the benefits of using these new models. An unscented Kalman filter approach that includes BRDF and mass parameters in the state vector is used…
Moore, Latetia V; Dodd, Kevin W; Thompson, Frances E; Grimm, Kirsten A; Kim, Sonia A; Scanlon, Kelley S
2015-01-01
Most Americans do not eat enough fruits and vegetables with significant variation by state. State-level self-reported frequency of fruit and vegetable consumption is available from the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System (BRFSS). However, BRFSS cannot be used to directly compare states’ progress towards national goals because of incongruence in units used to measure intake and because distributions from frequency data are not reflective of usual intake. To help states track progress, we developed scoring algorithms from external data and applied them to 2011 BRFSS data to estimate the percent of each state’s adult population meeting United States Department of Agriculture Food Patterns fruit and vegetable intake recommendations. We used 24 hour dietary recall data from the 2007–2010 National Health and Nutrition Examination Survey to fit sex- and age-specific models that estimate probabilities of meeting recommendations as functions of reported consumption frequency, race/ethnicity, and poverty-income ratio adjusting for intra-individual variation. Regression parameters derived from these models were applied to BRFSS to estimate percent meeting recommendations. We estimate that 7–18% of state populations met fruit recommendations and 5–12% met vegetable recommendations. Our method provides a new tool for states to track progress towards meeting dietary recommendations. PMID:25935424
The temporal and spatial distributions of primary and secondary organic carbon aerosols (OC) over the continental US during June 15 to August 31, 1999, were estimated by using observational OC and elemental carbon (EC) data from Interagency Monitoring of Protected Visual Environm...
Development and Testing of a Coupled Ocean-atmosphere Mesoscale Ensemble Prediction System
2011-06-28
…wind, temperature, and moisture variables, while the oceanographic ET is derived from ocean current, temperature, and salinity variables. Estimates of… uncertainty in the model. Rigorously accurate ensemble methods for describing the distribution of future states given past information include particle…
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed.
This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
Woskie, S R; Smith, T J; Hammond, S K; Schenker, M B; Garshick, E; Speizer, F E
1988-01-01
The diesel exhaust exposures of railroad workers in thirteen job groups from four railroads in the United States were used to estimate U.S. national average exposures with a linear statistical model which accounts for the significant variability in exposure caused by climate, the differences among railroads and the uneven distribution of railroad workers across climatic regions. Personal measurements of respirable particulate matter, adjusted to remove the contribution of cigarette smoke particles, were used as a marker for diesel exhaust. The estimated national means of adjusted respirable particulate matter (ARP) averaged 10 micrograms/m3 lower than the simple means for each job group, reflecting the climatic differences between the northern railroads studied and the distribution of railroad workers nationally. Limited historical records, including some industrial hygiene data, were used to evaluate past diesel exhaust exposures, which were estimated to be approximately constant from the 1950s to 1983.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses a Monte Carlo approach to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicates an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
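The accept/reject chain described above is a Metropolis sampler over the road-load model. The sketch below is a hedged reconstruction, not the paper's implementation: the road-load form, the truck parameters, noise levels, and proposal widths are all assumptions chosen for illustration, and only two parameters (mass and CdA) are estimated.

```python
import numpy as np

rng = np.random.default_rng(7)
g, rho, crr = 9.81, 1.2, 0.007             # gravity, air density, rolling coeff.
m_true, cda_true = 15000.0, 5.5            # hypothetical truck mass (kg), CdA (m^2)

v = rng.uniform(5.0, 25.0, 200)            # logged speed (m/s)
a = rng.normal(0.0, 0.3, 200)              # logged acceleration (m/s^2)

def road_load(m, cda):                     # simple road-load equation variant
    return m * a + crr * m * g + 0.5 * rho * cda * v**2

F = road_load(m_true, cda_true) + rng.normal(0.0, 200.0, 200)  # "measured" load

def logp(m, cda):                          # Gaussian measurement model
    return -0.5 * np.sum((F - road_load(m, cda)) ** 2) / 200.0**2

m, cda, lp = 10000.0, 3.0, logp(10000.0, 3.0)   # deliberately poor start
chain = []
for _ in range(20000):
    m2, cda2 = m + rng.normal(0.0, 100.0), cda + rng.normal(0.0, 0.1)
    lp2 = logp(m2, cda2)
    if np.log(rng.uniform()) < lp2 - lp:   # Metropolis acceptance via probability ratio
        m, cda, lp = m2, cda2, lp2
    chain.append((m, cda))
post = np.array(chain)[5000:]              # discard burn-in
print(post.mean(axis=0), post.std(axis=0)) # point estimates plus their spread
```

The chain's standard deviation is the payoff the abstract highlights: it quantifies how well each parameter is constrained by the drive cycle, rather than returning a single unqualified value.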
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
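A stripped-down, single-qubit version of the scheme conveys the idea: sample the state (here a Bloch vector) with a Metropolis chain given measurement counts, then read off marginals and derived quantities such as purity from the chain. This is a toy with simulated Pauli-measurement counts, not the paper's continuous-variable Wigner-function reconstruction.

```python
import numpy as np

rng = np.random.default_rng(11)
r_true = np.array([0.4, 0.1, 0.7])         # Bloch vector of the "true" state
N = 2000                                   # shots per Pauli axis
ups = rng.binomial(N, (1 + r_true) / 2)    # simulated "+1" outcome counts

def loglik(r):
    p = (1 + r) / 2                        # Born-rule outcome probabilities
    return np.sum(ups * np.log(p) + (N - ups) * np.log(1 - p))

r, lp = np.zeros(3), loglik(np.zeros(3))
chain = []
for _ in range(30000):
    r2 = r + rng.normal(0.0, 0.02, 3)      # random-walk proposal
    if np.linalg.norm(r2) < 1.0:           # keep the state physical
        lp2 = loglik(r2)
        if np.log(rng.uniform()) < lp2 - lp:
            r, lp = r2, lp2
    chain.append(r)
chain = np.array(chain)[10000:]            # discard burn-in
purity = 0.5 * (1 + (chain**2).sum(axis=1))   # Tr(rho^2) for each sample
print(chain.mean(axis=0), purity.mean())
```

As in the paper, the chain carries the full statistical information: the purity estimate and its uncertainty fall out of the same samples as the state parameters, with no separate error propagation.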
Solid-solution thermodynamics in Al-Li alloys
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Lukina, E. A.
2016-05-01
The relative equilibrium concentrations of lithium atoms distributed over different electron-structural states have been estimated. The possibility of the existence of various nonequilibrium electron-structural states of Li atoms in the solid solution in Al has been substantiated thermodynamically. Upon the decomposition of the supersaturated solid solution, the supersaturation on three electron-structural states of Li atoms that arises upon the quenching of the alloy can lead to the formation of lithium-containing phases in which the lithium atoms enter in one electron-structural state.
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering," that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
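The central mechanism, running many noisy copies of an observer in parallel and computing the update gain from the particles' own variance, can be sketched simply. The constants below (leak time constant, noise levels, particle count) are arbitrary placeholders, not the paper's fitted values, and the dynamics are a bare leaky integrator rather than the full vestibular model.

```python
import numpy as np

rng = np.random.default_rng(9)
n, dt, tau = 1000, 0.01, 1.0               # particles, time step, leak time constant
R = 0.5**2                                 # afferent noise variance (made up)
omega = 1.0                                # true, constant angular velocity

particles = rng.normal(0.0, 1.0, n)        # noisy parallel observer states
est = []
for _ in range(500):
    particles += dt * (-particles / tau) + rng.normal(0.0, 0.05, n)  # leaky dynamics
    y = omega + rng.normal(0.0, 0.5)       # one noisy afferent measurement
    K = particles.var() / (particles.var() + R)   # gain from particle spread
    particles += K * (y - particles)       # observer-style correction
    est.append(particles.mean())
print(est[-1])
```

Because the gain is derived from the particle spread rather than a hand-tuned constant, the filter automatically trusts the afferent signal less as its own uncertainty shrinks, the same self-regulating behavior the paper exploits.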
Estimating rate constants from single ion channel currents when the initial distribution is known.
The, Yu-Kai; Fernandez, Jacqueline; Popa, M Oana; Lerche, Holger; Timmer, Jens
2005-06-01
Single ion channel currents can be analysed by hidden or aggregated Markov models. A classical result from Fredkin et al. (Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer, vol I, pp 269-289, 1985) states that the maximum number of identifiable parameters is bounded by 2n_o n_c, where n_o and n_c denote the number of open and closed states, respectively. We show that this bound can be overcome when the probabilities of the initial distribution are known and the data consist of several sweeps.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Combes, Joshua; Cory, D. G.
2016-03-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Flood Frequency Analysis With Historical and Paleoflood Information
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Cohn, Timothy A.
1986-05-01
An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
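A sketch of how such an MLE combines gauged records with binomial threshold-exceedance information, here for the two-parameter lognormal; the function and argument names are invented for illustration:

```python
import math
from statistics import NormalDist

def loglik(mu, sigma, gauged, n_hist, k_exceed, threshold):
    """Lognormal log-likelihood combining systematic ("gauged") peaks
    with binomial historical information: in n_hist historical years,
    k_exceed floods exceeded `threshold`.  Illustrative sketch only."""
    nd = NormalDist(mu, sigma)
    # Exact observations: lognormal density = normal density of log(q), / q
    ll = sum(math.log(nd.pdf(math.log(q))) - math.log(q) for q in gauged)
    # Binomial term for threshold exceedances in the historical period
    p = 1.0 - nd.cdf(math.log(threshold))   # P(annual peak > threshold)
    ll += k_exceed * math.log(p) + (n_hist - k_exceed) * math.log(1.0 - p)
    return ll
```

Maximizing this combined log-likelihood over (mu, sigma), by grid search or a numerical optimizer, yields estimators that exploit both data types, which is the source of the "extra 10-30 years of record" reported above.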
NASA Technical Reports Server (NTRS)
Andrews, J.
1977-01-01
An optimal decision model of crop production, trade, and storage was developed for use in estimating the economic consequences of improved forecasts and estimates of worldwide crop production. The model extends earlier distribution benefits models to include production effects as well. Application to improved information systems meeting the goals set in the large area crop inventory experiment (LACIE) indicates annual benefits to the United States of $200 to $250 million for wheat, $50 to $100 million for corn, and $6 to $11 million for soybeans, using conservative assumptions on expected LANDSAT system performance.
Illman, W.A.; Zhu, J.; Craig, A.J.; Yin, D.
2010-01-01
Groundwater modeling has become a vital component of water supply and contaminant transport investigations. An important component of groundwater modeling under steady state conditions is selecting a representative hydraulic conductivity (K) estimate or set of estimates which defines the K field of the studied region. Currently, there are a number of characterization approaches to obtain K at various scales and in varying degrees of detail, but there is a paucity of information in terms of which characterization approach best predicts flow through aquifers or drawdowns caused by drawdown-inducing events such as pumping. The main objective of this paper is to assess K estimates obtained by various approaches by predicting drawdowns from independent cross-hole pumping tests and total flow rates through a synthetic heterogeneous aquifer from flow-through tests. Specifically, we (1) characterize a synthetic heterogeneous aquifer built in the sandbox through various techniques (permeameter analyses of core samples, single-hole, cross-hole, and flow-through testing), (2) obtain mean K fields through traditional analysis of test data by treating the medium as homogeneous, (3) obtain heterogeneous K fields through kriging and steady state hydraulic tomography, and (4) conduct forward simulations of 16 independent pumping tests and six flow-through tests using these homogeneous and heterogeneous K fields and compare them to actual data. Results show that the mean K and heterogeneous K fields estimated through kriging of small-scale K data (core and single-hole tests) yield biased predictions of drawdowns and flow rates in this synthetic heterogeneous aquifer. In contrast, the heterogeneous K distribution or "K tomogram" estimated via steady state hydraulic tomography yields excellent predictions of drawdowns of pumping tests not used in the construction of the tomogram and very good estimates of total flow rates from the flow-through tests.
These results suggest that steady state groundwater model validation is possible in this laboratory sandbox aquifer if the heterogeneous K distribution and forcing functions (boundary conditions and source/sink terms) are characterized sufficiently. © 2010 by the American Geophysical Union.
Self-Gravitating Fundamental Strings and Black Holes
NASA Technical Reports Server (NTRS)
Damour, T.; Veneziano, G.
1999-01-01
The configuration of typical highly excited (M ≫ M_s ≈ α′^(−1/2)) string states is considered as the string coupling g is adiabatically increased. The size distribution of very massive single string states is studied, and the mass shift, due to a long-range gravitational, dilatonic, and axionic attraction, is estimated.
Is the footprint of longleaf pine in the Southeastern United States still shrinking?
Christopher M. Oswalt; Christopher W. Woodall; Horace W. Brooks
2015-01-01
Longleaf pine (Pinus palustris Mill.) was once one of the most ecologically important tree species in the southern United States. Longleaf pine and the accompanying longleaf forest ecosystems covered vast swaths of the South. Longleaf forests covered an estimated 92 million acres at their peak distribution and represented one of the most extensive forest ecosystems in...
ERIC Educational Resources Information Center
Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D.
2017-01-01
There is no comprehensive database of U.S. district-level test scores that is comparable across states. We describe and evaluate a method for constructing such a database. First, we estimate linear, reliability-adjusted linking transformations from state test score scales to the scale of the National Assessment of Educational Progress (NAEP). We…
Enhancing Data Assimilation by Evolutionary Particle Filter and Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Moradkhani, H.; Abbaszadeh, P.; Yan, H.
2016-12-01
Particle filters (PFs) have received increasing attention from researchers in different disciplines of the hydro-geosciences as an effective method to improve model predictions in nonlinear and non-Gaussian dynamical systems. Dual state and parameter estimation by means of data assimilation in hydrology and geoscience has evolved since 2005 from SIR-PF to PF-MCMC, and now to an evolutionary PF approach based on the Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC. In this framework, the posterior distribution undergoes an evolutionary process to update an ensemble of prior states so that it more closely resembles the realistic posterior probability distribution. The premise of this approach is that the particles move to better positions through GA optimization coupled with MCMC, increasing the number of effective particles; particle degeneracy is thus avoided while particle diversity is improved. The proposed algorithm is applied to a conceptual and highly nonlinear hydrologic model, and the effectiveness, robustness, and reliability of the method in jointly estimating states and parameters and in reducing uncertainty is demonstrated for several river basins across the United States.
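The MCMC ingredient can be sketched as a Metropolis "move" pass applied after resampling: each particle is jittered and the jitter is accepted with the likelihood ratio under the current observation, restoring particle diversity. This shows only the MCMC half (not the GA component), and every parameter here is an illustrative assumption:

```python
import math
import random

def resample_move(particles, z, meas_sd=1.0, step=0.3, seed=1):
    """One Metropolis move pass after resampling: propose a Gaussian
    jitter for each particle and accept it with the likelihood ratio
    of the current measurement z.  Sketch of the diversity-restoring
    MCMC step; all parameter values are illustrative."""
    rng = random.Random(seed)

    def lik(p):
        return math.exp(-0.5 * ((z - p) / meas_sd) ** 2)

    out = []
    for p in particles:
        q = p + rng.gauss(0.0, step)
        if rng.random() < min(1.0, lik(q) / lik(p)):
            p = q                     # accept the jittered particle
        out.append(p)
    return out
```

Applied to a degenerate cloud of identical resampled particles, the pass spreads them out while leaving the targeted posterior invariant.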
Estimating earnings losses due to mental illness: a quantile regression approach.
Marcotte, Dave E; Wilcox-Gök, Virginia
2003-09-01
The ability of workers to remain productive and sustain earnings when afflicted with mental illness depends importantly on access to appropriate treatment and on flexibility and support from employers. In the United States there is substantial variation in access to health care and sick leave and other employment flexibilities across the earnings distribution. Consequently, a worker's ability to work and how much his/her earnings are impeded likely depend upon his/her position in the earnings distribution. Because of this, focusing on average earnings losses may provide insufficient information on the impact of mental illness in the labor market. In this paper, we examine the effects of mental illness on earnings by recognizing that effects could vary across the distribution of earnings. Using data from the National Comorbidity Survey, we employ a quantile regression estimator to identify the effects at key points in the earnings distribution. We find that earnings effects vary importantly across the distribution. While average effects are often not large, mental illness more commonly imposes earnings losses at the lower tail of the distribution, especially for women. In only one case do we find an illness to have negative effects across the distribution. Mental illness can have larger negative impacts on economic outcomes than previously estimated, even if those effects are not uniform. Consequently, researchers and policy makers alike should not be placated by findings that mean earnings effects are relatively small. Such estimates miss important features of how and where mental illness is associated with real economic losses for the ill.
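Quantile regression rests on minimizing the check (pinball) loss rather than squared error, which is what lets it target specific points of the earnings distribution. An intercept-only sketch with a brute-force minimizer over sample points (illustrative, not the estimator used in the paper):

```python
def pinball(u, tau):
    """Check (pinball) loss underlying quantile regression:
    residuals above the fit cost tau per unit, those below cost
    (1 - tau) per unit."""
    return tau * u if u >= 0 else (tau - 1.0) * u

def fit_quantile(y, tau):
    """The tau-th sample quantile minimizes total pinball loss;
    brute-force search over the sample points for illustration."""
    return min(y, key=lambda q: sum(pinball(yi - q, tau) for yi in y))
```

In the full regression setting, q is replaced by a linear function of covariates and the same loss is minimized over its coefficients, one fit per quantile of interest.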
Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.
Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li
2018-06-01
State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived under which the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
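Linear-minimum-variance fusion of independent local estimates reduces, in the scalar uncorrelated-error case, to inverse-variance weighting. A sketch of that principle (a deliberate simplification of the paper's DKFE, which additionally handles compression, compensation, and attack conditions):

```python
def fuse(estimates, variances):
    """Inverse-variance (linear minimum-variance) fusion of
    independent local estimates, the principle behind combining
    sink-node estimates at a fusion center.  Assumes uncorrelated
    local errors, a simplification for illustration."""
    w = [1.0 / v for v in variances]
    s = sum(w)
    fused = sum(wi * xi for wi, xi in zip(w, estimates)) / s
    return fused, 1.0 / s            # fused estimate and its variance
```

The fused variance 1/s is never larger than the smallest local variance, which is the sense in which fusion "can potentially improve estimation accuracy."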
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity, 59% (chance sensitivity: 17%), was found for ARMA(2,1) models, while the Higuchi fractal dimension achieved 52%; however, no statistically significant difference was observed. For the same ARMA case, there was no statistical difference if medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term-average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
NASA Astrophysics Data System (ADS)
Liu, Jie; Wang, Wilson; Ma, Fai
2011-07-01
System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict beforehand, and the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
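The KDE core of such an estimator is standard; KDESOINN replaces the raw data points with learned SOINN prototypes, but the density evaluation looks like this fixed-bandwidth sketch:

```python
import math

def gaussian_kde(data, h):
    """Fixed-bandwidth Gaussian kernel density estimator.  In the
    KDESOINN setting `data` would be the SOINN prototypes rather
    than the raw stream; this sketch uses raw points."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))

    def pdf(x):
        # Sum of Gaussian kernels centered at the data points
        return norm * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

    return pdf
```

Replacing the raw stream with a bounded set of prototypes is what makes the online version fast: evaluation cost scales with the number of prototypes, not with the number of samples seen.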
Pepin, Kim M; Eisen, Rebecca J; Mead, Paul S; Piesman, Joseph; Fish, Durland; Hoen, Anne G; Barbour, Alan G; Hamer, Sarah; Diuk-Wasser, Maria A
2012-06-01
Prevention and control of Lyme disease is difficult because of the complex biology of the pathogen's (Borrelia burgdorferi) vector (Ixodes scapularis) and multiple reservoir hosts with varying degrees of competence. Cost-effective implementation of tick- and host-targeted control methods requires an understanding of the relationship between pathogen prevalence in nymphs, nymph abundance, and incidence of human cases of Lyme disease. We quantified the relationship between estimated acarological risk and human incidence using county-level human case data and nymphal prevalence data from field-derived estimates in 36 eastern states. The estimated density of infected nymphs (DIN) was significantly correlated with human incidence (r = 0.69). The relationship was strongest in high-prevalence areas, but it varied by region and state, partly because of the distribution of B. burgdorferi genotypes. More information is needed in several high-prevalence states before DIN can be used for cost-effectiveness analyses.
A maximum entropy thermodynamics of small systems.
Dixit, Purushottam D
2013-05-14
We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.
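For a discrete toy state space, maximizing entropy subject to a fixed mean energy yields the Boltzmann form p_i ∝ exp(−βE_i), with β fixed by the constraint. A sketch under these toy assumptions (the function name, energies, and bisection bounds are all illustrative, not the paper's superstatistical construction):

```python
import math

def maxent_boltzmann(energies, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over discrete states with a
    fixed mean energy: p_i ∝ exp(-beta * E_i), with beta solved by
    bisection on the mean-energy constraint.  Toy illustration."""
    def mean_e(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_e(mid) > target_mean:
            lo = mid          # mean energy decreases as beta grows
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return beta, [wi / z for wi in w]
```

When the target mean equals the unweighted average energy, the solution degenerates to β = 0 (the uniform distribution), a quick sanity check on the Lagrange-multiplier machinery.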
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow, and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments (OSSEs) are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
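The joint state-parameter estimation idea can be sketched with a single stochastic (perturbed-observation) EnKF analysis step on a scalar augmented state. Note the paper uses the square-root variant (EnSRF), which avoids perturbing observations; this simpler stochastic form, and all values below, are illustrative only:

```python
import random

def enkf_update(ensemble, obs, obs_sd, h=lambda x: x, seed=0):
    """One stochastic (perturbed-observation) EnKF analysis step for
    a scalar state.  In joint state-parameter estimation the state
    vector is augmented with the uncertain parameters, so the same
    update pulls parameters toward values consistent with the radar
    observations.  Illustrative sketch, not the EnSRF of the paper."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    mx = sum(ensemble) / n
    mh = sum(hx) / n
    # Sample covariances from the ensemble
    cov_xh = sum((x - mx) * (y - mh) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - mh) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_sd ** 2)      # Kalman gain
    # Update each member against a perturbed copy of the observation
    return [x + gain * (obs + rng.gauss(0.0, obs_sd) - y)
            for x, y in zip(ensemble, hx)]
```

Because the gain is built from ensemble covariances, a parameter only gets corrected insofar as it covaries with the observed quantities, which is exactly the identifiability issue the abstract discusses.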
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array undersamples the incoming wavefield. Consequently, state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused result of each radar's estimation is fed to an extended Kalman filter (EKF) to complete the first filtering. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a high-accuracy and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is promoted dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
Neural Correlates of the Divergence of Instrumental Probability Distributions
Wang, Shuo; Zhang, June; O'Doherty, John P.
2013-01-01
Flexible action selection requires knowledge about how alternative actions impact the environment: a “cognitive map” of instrumental contingencies. Reinforcement learning theories formalize this map as a set of stochastic relationships between actions and states, such that for any given action considered in a current state, a probability distribution is specified over possible outcome states. Here, we show that activity in the human inferior parietal lobule correlates with the divergence of such outcome distributions, a measure that reflects whether discrimination between alternative actions increases the controllability of the future, and, further, that this effect is dissociable from those of other information theoretic and motivational variables, such as outcome entropy, action values, and outcome utilities. Our results suggest that, although ultimately combined with reward estimates to generate action values, outcome probability distributions associated with alternative actions may be contrasted independently of valence computations, to narrow the scope of the action selection problem. PMID:23884955
Small area variation in diabetes prevalence in Puerto Rico
Tierney, Edward F.; Burrows, Nilka R.; Barker, Lawrence E.; Beckles, Gloria L.; Boyle, James P.; Cadwell, Betsy L.; Kirtland, Karen A.; Thompson, Theodore J.
2015-01-01
Objective To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. Methods A Bayesian multilevel model was fitted to the combined 2008–2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. Results The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%–18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. Conclusions These 78 county estimates are higher on average and showed less variability (i.e., had a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%–18.2%). PMID:23939364
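The multilevel model's county estimates behave like precision-weighted compromises between a county's direct survey estimate and the overall mean. A toy normal-approximation sketch of that shrinkage (not the fitted Bayesian model; the numbers passed in below are illustrative):

```python
def partial_pool(p_hat, n, mu, tau2):
    """Shrink a county's raw prevalence p_hat (from n respondents)
    toward the overall mean mu, weighting by the between-county
    variance tau2 versus the sampling variance of p_hat.  A toy
    normal-approximation stand-in for a full Bayesian multilevel
    model; all inputs are illustrative."""
    sampling_var = p_hat * (1.0 - p_hat) / n
    w = tau2 / (tau2 + sampling_var)   # weight on the direct estimate
    return w * p_hat + (1.0 - w) * mu
```

Counties with many respondents keep estimates close to their raw rates, while sparsely sampled counties are pulled toward the overall mean, which is one reason small-area model estimates show less spread than raw county rates.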
Crop yield response to climate change varies with crop spatial distribution pattern
Leng, Guoyong; Huang, Maoyi
2017-05-03
The linkage between crop yield and climate variability has been confirmed in numerous studies using statistical approaches. A crucial assumption in these studies is that crop spatial distribution pattern is constant over time. Here, we explore how changes in county-level corn spatial distribution pattern modulate the response of its yields to climate change at the state level over the contiguous United States. Our results show that corn yield response to climate change varies with crop spatial distribution pattern, with distinct impacts on the magnitude and even the direction at the state level. Corn yield is predicted to decrease by 20-40% by the 2050s when considering crop spatial distribution pattern changes, which is 6-12% less than the estimates with a fixed cropping pattern. The beneficial effects are mainly achieved by reducing the negative impacts of daily maximum temperature and strengthening the positive impacts of precipitation. Our results indicate that previous empirical studies could be biased in assessing climate change impacts by ignoring changes in crop spatial distribution pattern. This has great implications for understanding the ongoing debate on whether climate change will be a net gain or loss for regional agriculture.
NASA Astrophysics Data System (ADS)
Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.
2017-12-01
A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing a storm's slow, moderate, and rapid intensification (or deintensification), respectively. Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) are dependent on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This environment-dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity. Here we compare MeHiM with various dynamical models, including a global climate model [the High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model [the Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model], and a simplified hurricane dynamic model [the Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM, developed from reanalysis data, is applied to estimate the intensity of simulated storms for comparison with the dynamical-model predictions under the current climate. The dependences of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.
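The three-state Markov backbone can be sketched as follows; the transition matrix and per-state intensity-change distributions below are invented for illustration and are not the fitted, environment-dependent MeHiM components:

```python
import random

def simulate_intensity(v0, P, change, steps, seed=0):
    """Toy 3-state Markov intensity simulator: a hidden state s in
    {0: slow, 1: moderate, 2: rapid} evolves by transition matrix P,
    and each state draws a 6-h intensity change from its own
    Gaussian (mean, sd) in `change`.  In MeHiM both P and the change
    distributions depend on environmental variables; here they are
    fixed, invented numbers."""
    rng = random.Random(seed)
    s, v = 1, v0
    track = [v0]
    for _ in range(steps):
        s = rng.choices([0, 1, 2], weights=P[s])[0]   # state transition
        mu, sd = change[s]
        v = max(0.0, v + rng.gauss(mu, sd))           # intensity change
        track.append(v)
    return track
```

Running many such tracks yields simulated distributions of quantities like lifetime maximum intensity, which is how a statistical intensity model of this kind is compared against dynamical models.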
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term "very low AEP" implies exceptionally rare events, defined as those having AEPs less than about 0.001 (10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted with an "Extended Output" user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland).
The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
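Extrapolation to very low AEPs amounts to fitting a distribution to the logarithms of the peak flows and inverting it at probability 1 − AEP. A sketch using the log-normal (the zero-skew special case of the log-Pearson type III of Bulletin 17B); this is an illustration, not the USGS-PeakFQ implementation:

```python
import math
from statistics import NormalDist, mean, stdev

def aep_quantile(peaks, aep):
    """Streamflow quantile at annual exceedance probability `aep`
    (e.g. 1e-4) from a log-normal fitted by moments of the base-10
    log flows.  The log-normal is the zero-skew special case of the
    log-Pearson type III; illustrative sketch only."""
    logs = [math.log10(q) for q in peaks]
    z = NormalDist().inv_cdf(1.0 - aep)    # standard normal deviate
    return 10.0 ** (mean(logs) + z * stdev(logs))
```

The divergence among candidate distributions in this extrapolation, which grows as AEP shrinks, is precisely the distribution-choice (epistemic) uncertainty quantified in the study.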
2012 Market Report on U.S. Wind Technologies in Distributed Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, Alice C.; Flowers, L. T.; Gagne, M. N.
2013-08-06
At the end of 2012, U.S. wind turbines in distributed applications reached a 10-year cumulative installed capacity of more than 812 MW from more than 69,000 units across all 50 states. In 2012 alone, nearly 3,800 wind turbines totaling 175 MW of distributed wind capacity were documented in 40 states and in the U.S. Virgin Islands, with 138 MW using utility-scale turbines (i.e., greater than 1 MW in size), 19 MW using mid-size turbines (i.e., 101 kW to 1 MW in size), and 18.4 MW using small turbines (i.e., up to 100 kW in size). Distributed wind is defined in terms of technology application based on a wind project’s location relative to end-use and power-distribution infrastructure, rather than on technology size or project size. Distributed wind systems are either connected on the customer side of the meter (to meet the onsite load) or directly to distribution or micro grids (to support grid operations or offset large loads nearby). Estimated capacity-weighted average costs for 2012 U.S. distributed wind installations were $2,540/kW for utility-scale wind turbines, $2,810/kW for mid-sized wind turbines, and $6,960/kW for newly manufactured (domestic and imported) small wind turbines. An emerging trend observed in 2012 was an increased use of refurbished turbines. The estimated capacity-weighted average cost of refurbished small wind turbines installed in 2012 was $4,080/kW. As a result of multiple projects using utility-scale turbines, Iowa deployed the most new overall distributed wind capacity, 37 MW, in 2012. Nevada deployed the most small wind capacity in 2012, with nearly 8 MW of small wind turbines installed in distributed applications. In the case of mid-size turbines, Ohio led all states in 2012 with 4.9 MW installed in distributed applications. State and federal policies and incentives continued to play a substantial role in the development of distributed wind projects. In 2012, U.S. Treasury Section 1603 payments and grants and loans from the U.S.
Department of Agriculture’s Rural Energy for America Program were the main sources of federal funding for distributed wind projects. State and local funding varied across the country, from rebates to loans, tax credits, and other incentives. Reducing utility bills and hedging against potentially rising electricity rates remain drivers of distributed wind installations. In 2012, other drivers included taking advantage of the expiring U.S. Treasury Section 1603 program and a prosperous year for farmers. While 2012 saw a large addition of distributed wind capacity, considerable barriers and challenges remain, such as a weak domestic economy, inconsistent state incentives, and very competitive solar photovoltaic and natural gas prices. The industry remains committed to improving the distributed wind marketplace by advancing the third-party certification process and introducing alternative financing models, such as third-party power purchase agreements and lease-to-own agreements more typical in the solar photovoltaic market. Continued growth is expected in 2013.
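The capacity-weighted averages quoted in this report are computed by weighting each project's cost per kilowatt by its installed capacity; a minimal sketch (the project figures in the test are hypothetical):

```python
def capacity_weighted_avg_cost(projects):
    """Capacity-weighted average installed cost in $/kW, where `projects`
    is a list of (capacity_kW, cost_per_kW) pairs."""
    total_kw = sum(kw for kw, _ in projects)
    weighted = sum(kw * cost for kw, cost in projects)
    return weighted / total_kw
```

For example, a 100 kW project at $3,000/kW averaged with a 900 kW project at $2,500/kW yields $2,550/kW, dominated by the larger project.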
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
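A fixed-kernel utilization distribution of the kind KERNELHR computes can be sketched as a bivariate Gaussian kernel density averaged over animal locations. The bandwidth, locations, and grid below are illustrative; this is not KERNELHR's code:

```python
import math

def fixed_kernel_ud(points, grid, h):
    """Fixed-kernel utilization distribution: a bivariate Gaussian kernel of
    bandwidth h, averaged over animal locations and evaluated at grid cells."""
    n = len(points)
    norm = n * 2.0 * math.pi * h * h  # normalizing constant of the 2-D Gaussian
    return [sum(math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * h * h))
                for x, y in points) / norm
            for gx, gy in grid]
```

The adaptive-kernel variant mentioned in the abstract would let h vary with local point density rather than stay fixed.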
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs both little underutilised and little overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated from a well-defined duration distribution of the lists. The objective was to propose a method of predicting the probabilities of underrun and overrun of lists of cases using the Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by the Type IV Pearson distribution and the t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. The setting was operating rooms in the John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. The outcome measure was the difference between the proportion of lists of cases completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by the Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. Values are mean (SEM). The proposed Type IV Pearson distribution is more accurate than the t-distribution for estimating the probabilities of underrun and overrun of lists of cases.
However, as not all the individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
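The Monte Carlo verification step described above can be sketched by simulating each case as a lognormal duration and taking empirical percentiles of the list total. The lognormal parameters here are hypothetical, and as the abstract notes, real case durations may deviate from lognormal:

```python
import random

def list_duration_percentiles(case_params, probs, n_sims=20000, seed=7):
    """Empirical percentiles of a list's total duration: each case is drawn
    as lognormal with (mu, sigma) in log-minutes, cases are summed per list,
    and percentiles are read from the sorted simulated totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.lognormvariate(mu, sigma) for mu, sigma in case_params)
        for _ in range(n_sims)
    )
    return [totals[min(int(p * n_sims), n_sims - 1)] for p in probs]
```

A scheduler would compare, say, the 90th percentile of the simulated total against the time allocated to the list to gauge the overrun probability.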
NASA Astrophysics Data System (ADS)
Li, X.; Omara, M.; Adams, P. J.; Presto, A. A.
2017-12-01
Methane is the second most powerful greenhouse gas after carbon dioxide. Natural gas production and distribution accounts for 23% of the total anthropogenic methane emissions in the United States. The boost of natural gas production in the U.S. in recent years poses a potential concern of increased methane emissions from natural gas production and distribution. The Emission Database for Global Atmospheric Research (EDGAR) v4.2 and the EPA Greenhouse Gas Inventory (GHGI) are currently the most commonly used methane emission inventories. However, recent studies constrained by both ground and satellite measurements suggested that both EDGAR v4.2 and the EPA GHGI largely underestimated methane emissions from natural gas production and distribution in the U.S. In this work, we built a gridded (0.1° latitude × 0.1° longitude) methane emission inventory of natural gas production and distribution over the contiguous U.S. using emission factors measured by our mobile lab in the Marcellus Shale, the Denver-Julesburg Basin, and the Uintah Basin, and emission factors reported from other recent field studies for other natural gas production regions. The activity data (well location and count) are mostly obtained from Drillinginfo, the EPA Greenhouse Gas Reporting Program (GHGRP) and the U.S. Energy Information Administration (EIA). Results show that methane emissions from natural gas production and distribution estimated by our inventory are about 20% higher than the EPA GHGI, and in some major natural gas production regions, methane emissions estimated by the EPA GHGI are significantly lower than our inventory. For example, in the Marcellus Shale, our estimated annual methane emission in 2015 is 600 Gg higher than the EPA GHGI. We also ran the GEOS-Chem methane simulation to estimate the methane concentration in the atmosphere with our built inventory, the EPA GHGI and EDGAR v4.2 over the nested North American domain.
These simulation results showed differences in some major gas production regions. The simulated methane concentrations will be compared with the GOSAT satellite data to explore whether our built inventory could potentially improve the prediction of regional methane concentrations in the atmosphere.
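The gridding step described above, multiplying activity data (well counts) by emission factors and accumulating onto 0.1° cells, can be sketched as follows. The emission factor and well coordinates are illustrative, not measured values:

```python
import math
from collections import defaultdict

def grid_well_emissions(wells, ef_kg_per_well, cell=0.1):
    """Aggregate per-well methane emissions onto a 0.1-degree grid, keyed by
    the (lat, lon) of each cell's lower-left corner."""
    grid = defaultdict(float)
    for lat, lon in wells:
        key = (round(math.floor(lat / cell) * cell, 1),
               round(math.floor(lon / cell) * cell, 1))
        grid[key] += ef_kg_per_well
    return dict(grid)
```

A real inventory would use basin-specific emission factors per well (or per unit of production) rather than the single constant shown here.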
NASA Astrophysics Data System (ADS)
Randles, C. A.; Hristov, A. N.; Harper, M.; Meinen, R.; Day, R.; Lopes, J.; Ott, T.; Venkatesh, A.
2017-12-01
In this analysis we used a spatially-explicit, bottom-up approach, based on animal inventories, feed intake, and feed intake-based emission factors to estimate county-level enteric (cattle) and manure (cattle, swine, and poultry) livestock methane emissions for the contiguous United States. Combined enteric and manure emissions were highest for counties in California's Central Valley. Overall, this analysis yielded total livestock methane emissions (8,916 Gg/yr; lower and upper bounds of 6,423 and 11,840 Gg/yr, respectively) for 2012 that are comparable to the current USEPA estimates for 2012 (9,295 Gg/yr) and to estimates from the global gridded Emission Database for Global Atmospheric Research (EDGAR) inventory (8,728 Gg/yr), used previously in a number of top-down studies. However, the spatial distribution of emissions developed in this analysis differed significantly from that of EDGAR. As an example, methane emissions from livestock in Texas and California (highest contributors to the national total) in this study were 36% lower and 100% greater, respectively, than estimates by EDGAR. The spatial distribution of emissions in gridded inventories (e.g., EDGAR) likely strongly impacts the conclusions of top-down approaches that use them, especially in the source attribution of resulting (posterior) emissions, and hence conclusions from such studies should be interpreted with caution.
Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.
2013-01-01
1. Quantifying lake biogeochemical processing at broad spatial scales requires that we scale processes along with physical metrics. Past work has primarily scaled lentic processes using estimates of lake surface area. However, many processes important to lakes, such as material, energy and biological fluxes and biogeochemical cycling, scale with lake perimeter. 2. We estimate the total lake perimeter for the contiguous United States (U.S.) and examine the sensitivity of this estimate to measurement resolution. At the original mapping resolution, lakes in the contiguous U.S. have a total perimeter of over 1.8 million km. 3. The change in measured perimeter versus measurement resolution for the contiguous U.S. had a log-log slope (also known as the fractal dimension) of 0.21, generally less than previously reported estimates. With changing observation resolution, total measured perimeter was most sensitive to the inclusion or exclusion of small lakes, not shoreline complexity. 4. The total aquatic–terrestrial interface in lakes is less than one-tenth that of streams and rivers, which collectively account for over 21 million km of shoreline in the contiguous U.S. This study further describes the distribution of lake perimeter and proposes a technique that can contribute to understanding continental-scale processes.
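The fractal-dimension figure reported above is the slope of measured perimeter against measurement resolution in log-log space; since perimeter shrinks as the resolution coarsens, the fitted slope is negative and the abstract reports its magnitude. A minimal ordinary-least-squares sketch (the synthetic data in the test simply follow a power law with exponent 0.21):

```python
import math

def loglog_slope(scales, perimeters):
    """Ordinary least-squares slope of log(perimeter) against
    log(measurement scale); its magnitude is the exponent summarized
    as a fractal dimension in the study above."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx
```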
The Decision to Not Invade Baghdad (Persian Gulf War)
2007-04-12
LAWRENCE K. MONTGOMERY, JR., United States Army National Guard, Senior Service College. DISTRIBUTION STATEMENT A: Approved for Public Release; Distribution is Unlimited. USAWC Class of 2007.
Smart darting diffusion Monte Carlo: Applications to lithium ion-Stockmayer clusters
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Jake, L. C.; Curotto, E.
2016-05-01
In a recent investigation [K. Roberts et al., J. Chem. Phys. 136, 074104 (2012)], we have shown that, for a sufficiently complex potential, the Diffusion Monte Carlo (DMC) random walk can become quasiergodic, and we have introduced smart darting-like moves to improve the sampling. In this article, we systematically characterize the bias that smart darting moves introduce in the estimate of the ground state energy of a bosonic system. We then test a simple approach to eliminate completely such bias from the results. The approach is applied for the determination of the ground state of lithium ion-n-dipoles clusters in the n = 8-20 range. For these, the smart darting diffusion Monte Carlo simulations find the same ground state energy and mixed-distribution as the traditional approach for n < 14. In larger systems we find that while the ground state energies agree quantitatively with or without smart darting moves, the mixed-distributions can be significantly different. Some evidence is offered to conclude that introducing smart darting-like moves in traditional DMC simulations may produce a more reliable ground state mixed-distribution.
The private forest landowners of Michigan.
Eugene M. Carpenter; Mark H. Hansen
1985-01-01
Estimates the number and distribution of nonindustrial private forest landowners in Michigan by size class and owner attitudes and objectives concerning forest ownership, management, and use. Provides 57 tables relating to owner and property characteristics for the state and its Forest Survey Units.
Evaluation of service-induced residual stresses in railroad commuter car wheels
DOT National Transportation Integrated Search
1999-11-01
Analyses of the effects of service conditions on the distribution of residual stresses in railroad commuter car wheels are presented. Novel software has been applied to estimate the effects of service conditions on the as-manufactured state of ...
Wisconsin private timberland owners: 1997.
Earl C. Leatherberry
2001-01-01
Identifies and profiles Wisconsin's private timberland owners. Estimates the number and distribution of private timberland owners by owner attitudes and objectives concerning forest ownership, management, and use. Provides 45 tables relating owner and property characteristics for the State and its five survey units.
Song, Qing-Kun; Li, Jing; Huang, Rong; Fan, Jin-Hu; Zheng, Rong-Shou; Zhang, Bao-Ning; Zhang, Bin; Tang, Zhong-Hua; Xie, Xiao-Ming; Yang, Hong-Jian; He, Jian-Jun; Li, Hui; Li, Jia-Yuan; Qiao, You-Lin; Chen, Wan-Qing
2014-01-01
The study aimed to describe the age distribution of breast cancer diagnosis among Chinese females for comparison with the United States and the European Union, and to provide evidence for the screening target population in China. Median age was estimated from hospital databases from 7 tertiary hospitals in China. Population-based data for China, the United States and the European Union were extracted from the National Central Cancer Registry, the SEER program and GLOBOCAN 2008, respectively. The age-standardized distribution of breast cancer at diagnosis in the 3 areas was estimated based on the World Standard Population 2000. The median age of breast cancer at diagnosis was around 50 in China, nearly 10 years earlier than in the United States and the European Union. The diagnosis age in China did not vary between subgroups of calendar year, region and pathological characteristics. With adjustment for population structure, the median age of breast cancer at diagnosis was 50–54 in China, but 55–59 in the United States and the European Union. The median diagnosis age of female breast cancer is much earlier in China than in the United States and the European Union, pointing to racial differences in genetics and lifestyle. Screening programs should start at an earlier age for Chinese women, and age disparities between Chinese and Western women warrant further studies.
Horvath, Isabelle R; Chatterjee, Siddharth G
2018-05-01
The recently derived steady-state generalized Danckwerts age distribution is extended to unsteady-state conditions. For three different wind speeds used by researchers on air-water heat exchange on the Heidelberg Aeolotron, calculations reveal that the distribution has a sharp peak during the initial moments, but flattens out and acquires a bell-shaped character with process time, with the time taken to attain a steady-state profile being a strong and inverse function of wind speed. With increasing wind speed, the age distribution narrows significantly, its skewness decreases and its peak becomes larger. The mean eddy renewal time increases linearly with process time initially but approaches a final steady-state value asymptotically, which decreases dramatically with increased wind speed. Using the distribution to analyse the transient absorption of a gas into a large body of liquid, assuming negligible gas-side mass-transfer resistance, estimates are made of the gas-absorption and dissolved-gas transfer coefficients for oxygen absorption in water at 25°C for the three different wind speeds. Under unsteady-state conditions, these two coefficients show an inverse behaviour, indicating a heightened accumulation of dissolved gas in the surface elements, especially during the initial moments of absorption. However, the two mass-transfer coefficients start merging together as the steady state is approached. Theoretical predictions of the steady-state mass-transfer coefficient or transfer velocity are in fair agreement (average absolute error of prediction = 18.1%) with some experimental measurements of the same for the nitrous oxide-water system at 20°C that were made in the Heidelberg Aeolotron.
A probabilistic estimate of maximum acceleration in rock in the contiguous United States
Algermissen, Sylvester Theodore; Perkins, David M.
1976-01-01
This paper presents a probabilistic estimate of the maximum ground acceleration to be expected from earthquakes occurring in the contiguous United States. It is based primarily upon the historic seismic record which ranges from very incomplete before 1930 to moderately complete after 1960. Geologic data, primarily distribution of faults, have been employed only to a minor extent, because most such data have not been interpreted yet with earthquake hazard evaluation in mind. The map provides a preliminary estimate of the relative hazard in various parts of the country. The report provides a method for evaluating the relative importance of the many parameters and assumptions in hazard analysis. The map and methods of evaluation described reflect the current state of understanding and are intended to be useful for engineering purposes in reducing the effects of earthquakes on buildings and other structures. Studies are underway on improved methods for evaluating the relative earthquake hazard of different regions. Comments on this paper are invited to help guide future research and revisions of the accompanying map. The earthquake hazard in the United States has been estimated in a variety of ways since the initial effort by Ulrich (see Roberts and Ulrich, 1950). In general, the earlier maps provided an estimate of the severity of ground shaking or damage but the frequency of occurrence of the shaking or damage was not given. Ulrich's map showed the distribution of expected damage in terms of no damage (zone 0), minor damage (zone 1), moderate damage (zone 2), and major damage (zone 3). The zones were not defined further and the frequency of occurrence of damage was not suggested. Richter (1959) and Algermissen (1969) estimated the ground motion in terms of maximum Modified Mercalli intensity.
Richter used the terms "occasional" and "frequent" to characterize intensity IX shaking and Algermissen included recurrence curves for various parts of the country in the paper accompanying his map. The first probabilistic hazard maps covering portions of the United States were by Milne and Davenport (1969a). Recently, Wiggins, Hirshberg and Bronowicki (1974) prepared a probabilistic map of maximum particle velocity and Modified Mercalli intensity for the entire United States. The maps are based on an analysis of the historical seismicity. In general, geological data were not incorporated into the development of the maps.
Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.
Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo
2012-01-01
The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
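For contrast with the TDS estimators the abstract proposes, the standard nonparametric AUC estimate under simple random sampling is the Mann-Whitney statistic, the fraction of diseased/healthy pairs correctly ordered by the biomarker:

```python
def empirical_auc(diseased, healthy):
    """Nonparametric (Mann-Whitney) AUC estimate from biomarker values of
    diseased and healthy subjects; tied pairs count one half."""
    wins = 0.0
    for x in diseased:
        for y in healthy:
            wins += 1.0 if x > y else (0.5 if x == y else 0.0)
    return wins / (len(diseased) * len(healthy))
```

The TDS design changes the sampling probabilities of verified subjects, so the paper's estimators reweight by the estimated test-result distribution rather than averaging pairs uniformly as above.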
Composable security proof for continuous-variable quantum key distribution with coherent states.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
Noninformative prior in the quantum statistical model of pure states
NASA Astrophysics Data System (ADS)
Tanaka, Fuyuhiko
2012-06-01
In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that there exist noninformative priors for a general class of a pure-states model. Theoretically, it gives one of the ways that we represent ignorance on the given quantum system with partial information. Practically, our method proposes a default distribution on the model in order to use the Bayesian technique in the quantum-state tomography with a small sample.
Estimating transition probabilities in unmarked populations --entropy revisited
Cooch, E.G.; Link, W.A.
1999-01-01
The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
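When aggregate data span several time periods and the transition matrix is stationary, the least-squares approach mentioned above can be sketched as follows. This is an unconstrained fit for brevity; a real implementation would enforce nonnegative entries and unit row sums:

```python
import numpy as np

def fit_transition_matrix(proportions):
    """Least-squares fit of a stationary transition matrix P from a sequence
    of state-proportion vectors, solving proportions[t+1] ~ proportions[t] @ P
    in the least-squares sense."""
    X = np.asarray(proportions[:-1], dtype=float)  # states at times 0..T-1
    Y = np.asarray(proportions[1:], dtype=float)   # states at times 1..T
    P, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return P
```

With only two time periods the system is underdetermined, which is exactly the setting where the entropy-maximization approach critiqued in the paper has been invoked.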
Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.
1988-01-01
If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
NASA Astrophysics Data System (ADS)
Vaishnav, Parth; Horner, Nathaniel; Azevedo, Inês L.
2017-09-01
We estimate the lifetime magnitude and distribution of the private and public benefits and costs of currently installed distributed solar PV systems in the United States. Using data for recently-installed systems, we estimate the balance of benefits and costs associated with installing a non-utility solar PV system today. We also study the geographical distribution of the various subsidies that are made available to owners of rooftop solar PV systems, and compare it to distributions of population and income. We find that, after accounting for federal subsidies and local rebates and assuming a discount rate of 7%, the private benefits of new installations will exceed private costs in only seven of the 19 states for which we have data, and only if customers can sell excess power to the electric grid at the retail price. These states are characterized by abundant sunshine (California, Texas and Nevada) or by high electricity prices (New York). Public benefits from reduced air pollution and climate change impact exceed the costs of the various subsidies offered to system owners for less than 10% of the systems installed, even assuming a 2% discount rate. Subsidies flowed disproportionately to counties with higher median incomes in 2006. In 2014, the distribution of subsidies was closer to those of population and income, but subsidies still flowed disproportionately to the better-off. The total, upfront subsidy per kilowatt of installed capacity has fallen from $5,200 in 2006 to $1,400 in 2014, but the absolute magnitude of subsidy has soared as installed capacity has grown explosively. We see considerable differences in the balance of costs and benefits even within states, indicating that local factors such as system price and solar resource are important, and that policies (e.g. net metering) could be made more efficient by taking local conditions into account.
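The benefit-cost comparisons above reduce to discounting streams of benefits and costs to present value at the stated rates (7% private, 2% public); a minimal sketch, with hypothetical cashflows in the usage note:

```python
def npv(cashflows, rate):
    """Net present value of annual cashflows (index 0 = year 0) at `rate`."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))
```

For example, a hypothetical system costing $15,000 upfront and saving $1,000 per year for 25 years has a negative NPV at a 7% discount rate, illustrating why private benefits exceed costs in only a minority of states.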
Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza
2011-01-01
Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information-exchange network, relaxing the assumptions of existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation is facilitated via a network of relative sensing and inter-spacecraft communications. Guidance, navigation, and control rely on the sensing network. This network grows more complex as spacecraft are added or as mission requirements become more demanding. The observability of a formation state is assessed from a set of local observations taken at a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol is used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and initial condition of each node, with the goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control) estimation subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design.
The result is a formation estimation algorithm that is modular and robust to variations in the topology and link properties of the underlying formation network.
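The agreement protocol described above can be sketched as a discrete-time consensus iteration; a minimal example, assuming a 4-node ring topology and an illustrative step size:

```python
import numpy as np

# Minimal sketch of an agreement (consensus) protocol on an undirected
# network: each node repeatedly averages with its neighbours, so all node
# states converge to a common value determined by the initial conditions.
# The 4-node ring topology and the step size eps are illustrative assumptions.

def consensus(x0, edges, n, eps=0.25, steps=200):
    """Discrete-time agreement protocol x <- x - eps * L x."""
    L = np.zeros((n, n))
    for i, j in edges:               # build the graph Laplacian
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * L @ x
    return x

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = consensus([1.0, 3.0, 5.0, 7.0], ring, n=4)
print(np.round(x, 3))  # all nodes approach the initial average, 4.0
```

Because the Laplacian is symmetric, the sum of the states is conserved and the common value reached is the average of the initial conditions.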
Use of the Magnetic Field for Improving Gyroscopes’ Biases Estimation
Munoz Diaz, Estefania; de Ponte Müller, Fabian; García Domínguez, Juan Jesús
2017-01-01
An accurate orientation estimate is crucial to satisfactory positioning in pedestrian navigation. The orientation estimate, however, is strongly affected by errors such as gyroscope biases. To minimize the error in the orientation, the gyroscope biases must be estimated and subtracted. In the state of the art it has been proposed, but not proved, that this bias estimation can be accomplished using magnetic field measurements. The objective of this work is to evaluate the effectiveness of using magnetic field measurements to estimate the biases of medium-cost micro-electromechanical system (MEMS) gyroscopes. We carry out the evaluation with experiments that cover both quasi-error-free and medium-cost MEMS turn-rate and magnetic measurements. The impact of different homogeneous magnetic field distributions and magnetically perturbed environments is analyzed. Additionally, the effect of successful bias subtraction on the orientation and the estimated trajectory is detailed. Our results show that the use of magnetic field measurements is beneficial to correct bias estimation. Further, we show that different magnetic field distributions affect the bias estimation process differently. Moreover, the biases are likewise correctly estimated under perturbed magnetic fields. However, for indoor and urban scenarios the bias estimation process is very slow. PMID:28398232
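A minimal sketch of the idea being evaluated (not the authors' exact algorithm): detect quasi-static orientation intervals from the stability of the magnetic field vector, and average the gyroscope output over those intervals to estimate its bias. The threshold and all signal parameters are assumptions:

```python
import numpy as np

# Illustrative sketch: when the measured magnetic field vector barely
# changes between samples, the sensor orientation is (to first order)
# constant, so the gyroscope output over that interval is dominated by
# its bias. Averaging over such intervals estimates the bias.

def estimate_gyro_bias(gyro, mag, mag_thresh=0.05):
    """gyro: (N,) turn-rate samples; mag: (N, 3) magnetic field samples."""
    dmag = np.linalg.norm(np.diff(mag, axis=0), axis=1)
    static = dmag < mag_thresh          # field (hence orientation) steady
    if not np.any(static):
        return 0.0
    return float(np.mean(gyro[1:][static]))

rng = np.random.default_rng(0)
true_bias = 0.02                        # rad/s, synthetic ground truth
gyro = true_bias + 0.001 * rng.standard_normal(500)
mag = np.tile([20.0, 0.0, 44.0], (500, 1)) + 0.001 * rng.standard_normal((500, 3))
bias_hat = estimate_gyro_bias(gyro, mag)   # close to the true 0.02 rad/s
```

This only works in a homogeneous field; as the abstract notes, perturbed fields and motion make the detection (and hence the estimation) slower and harder.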
Rainfall: State of the Science
NASA Astrophysics Data System (ADS)
Testik, Firat Y.; Gebremichael, Mekonnen
Rainfall: State of the Science offers the most up-to-date knowledge on the fundamental and practical aspects of rainfall. Each chapter, self-contained and written by prominent scientists in their respective fields, provides three forms of information: fundamental principles, detailed overview of current knowledge and description of existing methods, and emerging techniques and future research directions. The book discusses • Rainfall microphysics: raindrop morphodynamics, interactions, size distribution, and evolution • Rainfall measurement and estimation: ground-based direct measurement (disdrometer and rain gauge), weather radar rainfall estimation, polarimetric radar rainfall estimation, and satellite rainfall estimation • Statistical analyses: intensity-duration-frequency curves, frequency analysis of extreme events, spatial analyses, simulation and disaggregation, ensemble approach for radar rainfall uncertainty, and uncertainty analysis of satellite rainfall products The book is tailored to be an indispensable reference for researchers, practitioners, and graduate students who study any aspect of rainfall or utilize rainfall information in various science and engineering disciplines.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Activity flow over resting-state networks shapes cognitive task activations.
Cole, Michael W; Ito, Takuya; Bassett, Danielle S; Schultz, Douglas H
2016-12-01
Resting-state functional connectivity (FC) has helped reveal the intrinsic network organization of the human brain, yet its relevance to cognitive task activations has been unclear. Uncertainty remains despite evidence that resting-state FC patterns are highly similar to cognitive task activation patterns. Identifying the distributed processes that shape localized cognitive task activations may help reveal why resting-state FC is so strongly related to cognitive task activations. We found that estimating task-evoked activity flow (the spread of activation amplitudes) over resting-state FC networks allowed prediction of cognitive task activations in a large-scale neural network model. Applying this insight to empirical functional MRI data, we found that cognitive task activations can be predicted in held-out brain regions (and held-out individuals) via estimated activity flow over resting-state FC networks. This suggests that task-evoked activity flow over intrinsic networks is a large-scale mechanism explaining the relevance of resting-state FC to cognitive task activations.
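The activity-flow mapping step can be sketched as follows, with synthetic activations and FC weights standing in for real fMRI data: a held-out region's task activation is predicted as the sum of all other regions' activations weighted by their resting-state FC with that region.

```python
import numpy as np

# Sketch of activity-flow mapping: predict region j's task activation
# from every other region's activation, weighted by resting-state FC.
# The 8-region FC matrix and activations here are synthetic stand-ins.

def activity_flow_predict(activations, fc):
    """pred[j] = sum over i != j of activations[i] * fc[i, j]."""
    n = len(activations)
    pred = np.empty(n)
    for j in range(n):
        others = [i for i in range(n) if i != j]
        pred[j] = activations[others] @ fc[others, j]
    return pred

rng = np.random.default_rng(1)
fc = rng.uniform(0, 0.2, size=(8, 8))
fc = (fc + fc.T) / 2                      # symmetric resting-state FC
act = rng.standard_normal(8)              # task activation per region
pred = activity_flow_predict(act, fc)     # held-out-region predictions
```

In the paper's setting the prediction for each region is compared against the empirically measured activation, across regions and across held-out individuals.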
Activity flow over resting-state networks shapes cognitive task activations
Cole, Michael W.; Ito, Takuya; Bassett, Danielle S.; Schultz, Douglas H.
2016-01-01
Resting-state functional connectivity (FC) has helped reveal the intrinsic network organization of the human brain, yet its relevance to cognitive task activations has been unclear. Uncertainty remains despite evidence that resting-state FC patterns are highly similar to cognitive task activation patterns. Identifying the distributed processes that shape localized cognitive task activations may help reveal why resting-state FC is so strongly related to cognitive task activations. We found that estimating task-evoked activity flow (the spread of activation amplitudes) over resting-state FC networks allows prediction of cognitive task activations in a large-scale neural network model. Applying this insight to empirical functional MRI data, we found that cognitive task activations can be predicted in held-out brain regions (and held-out individuals) via estimated activity flow over resting-state FC networks. This suggests that task-evoked activity flow over intrinsic networks is a large-scale mechanism explaining the relevance of resting-state FC to cognitive task activations. PMID:27723746
NASA Astrophysics Data System (ADS)
Cao, B.; Domke, G. M.; Russell, M.; McRoberts, R. E.; Walters, B. F.
2017-12-01
Forest ecosystems contribute substantially to carbon (C) storage. The dynamics of litter decomposition, translocation and stabilization into soil layers are essential processes in the functioning of forest ecosystems, as they control the cycling of soil organic matter and the accumulation and release of C to the atmosphere. Therefore, the spatial distributions of litter and soil C stocks are important in greenhouse gas estimation and reporting and inform land management decisions, policy, and climate change mitigation strategies. In this study, we explored the effects of spatial aggregation of climatic, biotic, topographic and soil input data on national estimates of litter and soil C stocks and characterized the spatial distribution of litter and soil C stocks in the conterminous United States. Data from the Forest Inventory and Analysis (FIA) program within the US Forest Service were used with vegetation phenology data estimated from LANDSAT imagery (30 m) and raster data describing relevant environmental parameters (e.g. temperature, precipitation, topographic properties) for the entire conterminous US. Litter and soil C stocks were estimated and mapped through geostatistical analysis and statistical uncertainty bounds on the pixel level predictions were constructed using a Monte Carlo-bootstrap technique, by which credible variance estimates for the C stocks were calculated. The sensitivity of model estimates to spatial aggregation depends on geographic region. Further, using long-term (30-year) climate averages during periods with strong climatic trends results in large differences in litter and soil C stock estimates. In addition, results suggest that local topographic aspect is an important variable in litter and soil C estimation at the continental scale.
Private timberland owners of Michigan, 1994.
Earl C. Leatherberry; Neal P. Kingsley; Thomas W. Birch
1998-01-01
Identifies and profiles Michigan's private timberland owners. Estimates the number and distribution of private timberland owners by owner attitudes and objectives concerning forest ownership, management, and use. Provides 45 tables relating to owner and property characteristics for the state and its four survey units.
2015-07-14
(2008). Sequential Monte Carlo smoothing with application to parameter estimation in non-linear state space models. Bernoulli, 14, 155-179. [22] Parikh... E[1_{B_Σ^c(θ⋆, δ)}(Θ)] = o(τ^k) for all k ∈ N. (45) The other integral is over the ball B_Σ(θ⋆, δ), i.e. close to θ⋆; hence we perform a Taylor expansion of φ about θ⋆ with remainder R_3(θ, θ⋆) = Σ_{|α|=4} ∂^α φ(θ⋆ + c_θ(θ − θ⋆)) (θ − θ⋆)^α / α!. We now use the symmetry of the normal distribution N(θ⋆, τ²Σ) on the ball B_Σ(θ⋆, δ).
Hydrodynamic Attraction of Swimming Microorganisms by Surfaces
NASA Astrophysics Data System (ADS)
Berke, Allison P.; Turner, Linda; Berg, Howard C.; Lauga, Eric
2008-07-01
Cells swimming in confined environments are attracted by surfaces. We measure the steady-state distribution of smooth-swimming bacteria (Escherichia coli) between two glass plates. In agreement with earlier studies, we find a strong increase of the cell concentration at the boundaries. We demonstrate theoretically that hydrodynamic interactions of the swimming cells with solid surfaces lead to their reorientation in the direction parallel to the surfaces, as well as their attraction by the closest wall. A model is derived for the steady-state distribution of swimming cells, which compares favorably with our measurements. We exploit our data to estimate the flagellar propulsive force in swimming E. coli.
State-of-the art of dc components for secondary power distribution of Space Station Freedom
NASA Technical Reports Server (NTRS)
Krauthamer, Stanley; Gangal, Mukund; Das, Radhe S. L.
1991-01-01
120-V dc secondary power distribution has been selected for Space Station Freedom. State-of-the-art components and subsystems are examined in terms of performance, size, and topology. One of the objectives of this work is to inform Space Station users what is available in power supplies and power control devices. The other objective is to stimulate interest in the component industry so that more focused product development can be started. Based on the results of this study, it is estimated that, with some redesign, modifications, and space qualification, many of these components may be applied to Space Station needs.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
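A hedged sketch of k-NN density-ratio weighting under covariate shift, on synthetic 1-D data. This is a generic k-th-nearest-neighbour ratio estimator (densities estimated from the distance to the k-th neighbour in each sample), not necessarily the paper's exact formulation:

```python
import numpy as np

# Covariate-shift weighting sketch: weight a point x by
# p_test(x) / p_train(x), with each density estimated by a k-NN rule
# p(x) ~ k / (2 * n * r_k) in 1-D, where r_k is the distance to the
# k-th nearest neighbour. Sample sizes and k are illustrative choices.

def knn_radius(x, sample, k):
    d = np.abs(sample - x)               # 1-D distances for simplicity
    return np.sort(d)[k - 1]             # distance to the k-th neighbour

def density_ratio(x, train, test, k=50):
    r_tr = knn_radius(x, train, k)
    r_te = knn_radius(x, test, k)
    # the k and bin-width factors cancel, leaving:
    return (len(train) * r_tr) / (len(test) * r_te)

rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, 5000)       # labelled training distribution
test = rng.normal(0.5, 1.0, 5000)        # shifted unlabelled distribution
w = density_ratio(0.5, train, test)      # importance weight near the test mode
```

Choosing k is the model-selection problem the abstract addresses; too small a k gives noisy radii and hence noisy weights, which is why the paper selects the neighbourhood size by cross-validation on a criterion unbiased under covariate shift.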
Schwendicke, Falk; Jäger, Ralf; Hoffmann, Wolfgang; Jordan, Rainer A; van den Berg, Neeltje
2016-09-01
Assessing the spatial distribution of oral morbidity-related demand and the workforce-related supply is relevant for planning dental services. We aimed to establish and validate a model for estimating the spatially specific demand and supply. This model was then applied to compare demand-supply ratios in 2001 and 2011 in the federal state of Mecklenburg-Vorpommern (Northern Germany). The spatial units were zip code areas. Demand per area was estimated by linking population-specific oral morbidities to working times via insurance claim data. Estimated demand was validated against the provided demand in 2001 and 2011. Supply was calculated for both years using cohort data from the dentist register. The ratio of demand and supply was geographically mapped and its distribution between areas assessed using the Gini coefficient. Between 2001 and 2011, a significant decrease of the general population (-7.0 percent), the annual demand (-13.1 percent), and the annual supply (-12.9 percent) was recorded. The estimated demands were nearly (2001: -4 percent) and completely (2011: ±0 percent) congruent with provided demands. The average demand-supply-ratio did not change significantly between 2001 and 2011 (P > 0.05), but was increasingly unequally distributed. In both years, few areas were over-serviced, while many were under-serviced. The established model can be used to estimate spatially specific demand and supply. © 2016 American Association of Public Health Dentistry.
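The Gini coefficient used to summarize the distribution of demand-supply ratios can be computed as follows; the example ratios are made up for illustration (0 means perfectly even, values near 1 maximally uneven):

```python
import numpy as np

# Gini coefficient of demand-supply ratios across areas, via the
# cumulative-sum formulation of the mean absolute difference.
# The area ratios below are invented for illustration only.

def gini(values):
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

equal = gini([1.0, 1.0, 1.0, 1.0])       # perfectly even supply: 0.0
skewed = gini([0.2, 0.4, 0.6, 3.0])      # a few over-serviced areas
```

A rising Gini with a constant average ratio is exactly the pattern the study reports: overall demand-supply balance unchanged, but increasingly unequally distributed across areas.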
Customer premises services market demand assessment 1980 - 2000. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Gamble, R. B.; Saporta, L.; Heidenrich, G. A.
1983-01-01
Estimates of market demand for domestic civilian telecommunications services for the years 1980 to 2000 are provided. Overall demand, demand for satellite services, demand for satellite-delivered Customer Premises Service (CPS), and demand for 30/20 GHz Customer Premises Services are covered. Emphasis is placed on the CPS market, and demand is segmented by market, by service, by user class, and by geographic region. Prices for competing services are discussed and the distribution of traffic with respect to distance is estimated. A nationwide traffic distribution model for CPS, in terms of demand for CPS traffic and earth stations for each of the major SMSAs in the United States, is provided.
Aerodynamic and heat transfer analysis of the low aspect ratio turbine
NASA Astrophysics Data System (ADS)
Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.
1987-06-01
The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. Using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both two- and three-dimensional boundary layer analyses.
Renewable Energy Power Generation Estimation Using Consensus Algorithm
NASA Astrophysics Data System (ADS)
Ahmad, Jehanzeb; Najm-ul-Islam, M.; Ahmed, Salman
2017-08-01
At the small-consumer level, photovoltaic (PV) panel based grid-tied systems are the most common form of Distributed Energy Resources (DER). Unlike wind, which is suitable for only selected locations, PV panels can generate electricity almost anywhere. Pakistan is currently one of the most energy-deficient countries in the world. In order to mitigate this shortage, the Government has recently announced a policy of net-metering for residential consumers. After widespread adoption of DERs, one of the issues faced by load management centers will be accurately estimating the amount of electricity being injected into the grid at any given time through these DERs. This becomes a critical issue once the penetration of DERs increases beyond a certain limit. Grid stability and management of harmonics become important considerations where electricity is injected at the distribution level through solid-state controllers instead of rotating machinery. This paper presents a solution using graph-theoretic methods for the estimation of total electricity being injected into the grid over a widespread geographical area. An agent-based consensus approach for distributed computation is used to provide an estimate under varying generation conditions.
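A minimal sketch of the agent-based consensus estimate, assuming a 5-agent line topology and Metropolis averaging weights (illustrative choices, not the paper's exact protocol): every agent converges to the mean local injection, so any agent can recover the total by multiplying by the known number of agents.

```python
import numpy as np

# Agent-based estimation of total grid injection: each DER agent holds
# its local PV output and iteratively averages with its neighbours using
# Metropolis weights (doubly stochastic, so the average is preserved).
# Topology, powers, and iteration count are illustrative assumptions.

def metropolis_consensus(p_local, neighbors, steps=300):
    n = len(p_local)
    W = np.zeros((n, n))
    deg = [len(neighbors[i]) for i in range(n)]
    for i in range(n):
        for j in neighbors[i]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()       # self-weight completes the row
    x = np.array(p_local, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x

p = [3.2, 1.5, 0.0, 4.8, 2.5]                 # kW injected per agent
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x = metropolis_consensus(p, nbrs)
total = x[0] * len(p)                         # any agent's estimate times n
```

Under time-varying generation, the same iteration is simply re-run (or run continuously) so each agent tracks the changing total without a central aggregator.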
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-03-30
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKFOSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
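A generic stochastic EnKF analysis step, the forecast/update building block discussed above (not the dual EnKF_OSA itself), can be sketched as follows; the two-state prior, the scalar observation, and the noise levels are illustrative:

```python
import numpy as np

# One stochastic EnKF analysis step with a scalar observation of the
# first state: compute the sample covariance of the forecast ensemble,
# form the Kalman gain, and update each member with a perturbed
# observation. All numbers below are synthetic assumptions.

def enkf_update(ens, y, H, r, rng):
    """ens: (n_ens, n_state); y: scalar obs; H: (n_state,); r: obs variance."""
    n_ens = ens.shape[0]
    X = ens - ens.mean(axis=0)                 # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                  # sample covariance
    K = P @ H / (H @ P @ H + r)                # Kalman gain (scalar obs)
    y_pert = y + np.sqrt(r) * rng.standard_normal(n_ens)
    innov = y_pert - ens @ H                   # perturbed innovations
    return ens + np.outer(innov, K)

rng = np.random.default_rng(3)
ens = rng.normal([0.0, 0.0], [1.0, 1.0], size=(500, 2))   # forecast ensemble
post = enkf_update(ens, y=2.0, H=np.array([1.0, 0.0]), r=0.25, rng=rng)
```

For a N(0, 1) prior and an observation of 2.0 with variance 0.25, the exact posterior mean of the observed state is 1.6; the ensemble mean should land close to it. In the joint scheme the augmented state-parameter vector passes through this update once; the dual schemes apply it separately to parameters and states.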
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
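The EMA iteration can be sketched with a normal distribution standing in for the log-Pearson type III, for brevity: moments from the systematic record are pooled with the expected moments of the historical years known only to lie below the perception threshold. The record, threshold, and historical-period length below are made-up values:

```python
import math
import numpy as np

# Hedged sketch of the expected moments algorithm (EMA) idea: iterate
# between (a) computing the expected moments of the h below-threshold
# historical years under the current parameter estimates, and
# (b) pooling them with the systematic-record moments. A normal model
# replaces log-Pearson III here purely to keep the sketch short.

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def ema_normal(systematic, T, h, iters=50):
    x = np.asarray(systematic, dtype=float)
    n = len(x)
    mu, var = x.mean(), x.var()          # initial estimates: systematic only
    for _ in range(iters):
        a = (T - mu) / math.sqrt(var)
        lam = phi(a) / Phi(a)
        m1 = mu - math.sqrt(var) * lam               # E[X | X < T]
        v1 = var * (1 - a * lam - lam * lam)         # Var[X | X < T]
        # pool systematic moments with expected historical moments
        mu_new = (n * x.mean() + h * m1) / (n + h)
        var_new = (n * (x.var() + (x.mean() - mu_new) ** 2)
                   + h * (v1 + (m1 - mu_new) ** 2)) / (n + h)
        mu, var = mu_new, var_new
    return mu, var

sys_rec = [5.0, 7.0, 6.0, 9.0, 4.0, 8.0]   # systematic gage record
mu, var = ema_normal(sys_rec, T=10.0, h=20)  # 20 historical years, no flood > 10
```

The below-threshold historical information pulls the estimated moments slightly downward relative to the systematic record alone, exactly the effect EMA is designed to capture.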
NASA Astrophysics Data System (ADS)
Zhao, G.; Chu, R.; Zhang, T.; Li, J.; Shen, J.; Wu, Z.
2011-03-01
During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, the latest state-of-the-art optical laser instrument. Because of the limited observation data in the Qinghai-Tibet Plateau, modelling of the DSD in this region has not been well developed. We used the raindrop size distributions to improve the rain rate estimator of meteorological radar in order to obtain more accurate rain rate data in this area. We obtained the relationship between the terminal velocity and the diameter (mm) of a raindrop: v(D) = 4.67·D^0.53. Then four types of estimators for X-band polarimetric radar are examined. The simulation results show that the classical estimator R(Z_H) is most sensitive to variations in DSD and that R(K_DP, Z_H, Z_DR) is the best estimator of rain rate. An X-band polarimetric radar (714XDP) is used for verifying these estimators. The low sensitivity of the rain rate estimator R(K_DP, Z_H, Z_DR) to variations in DSD can be explained by the following facts. The difference in the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to K_DP, is proportional to the 3rd power of the drop diameter. On the other hand, the backscatter cross-section, which contributes to Z_H, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, K_DP is less sensitive to DSD variations than Z_H.
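Given a measured DSD and the fitted fall-speed relation v(D) = 4.67·D^0.53, the rain rate follows by summing the drop volume flux over the size bins. The exponential (Marshall-Palmer-like) DSD used below is an illustrative assumption, not WATER data:

```python
import numpy as np

# Rain rate directly from a drop size distribution, using the fall-speed
# relation fitted in the study, v(D) = 4.67 * D**0.53 (v in m/s, D in mm).
# With N(D) in m^-3 mm^-1, the volume flux gives
#   R [mm/h] = 6e-4 * pi * sum( N(D) * v(D) * D^3 * dD ).
# The exponential DSD parameters below are illustrative assumptions.

def rain_rate(D, N, dD):
    v = 4.67 * D ** 0.53                 # terminal fall speed (m/s)
    return 6e-4 * np.pi * np.sum(N * v * D ** 3 * dD)

D = np.arange(0.1, 6.0, 0.1)             # drop diameter bins (mm)
N0, Lam = 8000.0, 2.55                   # exponential DSD parameters
N = N0 * np.exp(-Lam * D)                # N(D) in m^-3 mm^-1
R = rain_rate(D, N, dD=0.1)              # on the order of 10 mm/h here
```

The same moment sums underlie the sensitivity argument in the abstract: Z_H weights the DSD by D^6, K_DP by roughly D^3, and R by D^3.57, so K_DP tracks R far more closely across DSD variations.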
NASA Astrophysics Data System (ADS)
Cohn, T. A.; Lane, W. L.; Baier, W. G.
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
Heterogeneous Costs of Alcohol and Drug Problems Across Cities and Counties in California
Miller, Ted R.; Nygaard, Peter; Gaidus, Andrew; Grube, Joel W.; Ponicki, William R.; Lawrence, Bruce A.; Gruenewald, Paul J.
2017-01-01
Background Estimates of economic and social costs related to alcohol and other drug (AOD) use and abuse are usually made at state and national levels. Ecological analyses demonstrate, however, that substantial variations exist in the incidence and prevalence of AOD use and problems including impaired driving, violence, and chronic disease between smaller geopolitical units like counties and cities. This study examines the ranges of these costs across counties and cities in California. Methods We used estimates of the incidence and prevalence of AOD use, abuse and related problems to calculate costs in 2010 dollars for all 58 counties and an ecological sample of 50 cities with populations between 50,000 and 500,000 persons in California. The estimates were built from archival and public-use survey data collected at state, county and city-levels over the years from 2009 to 2010. Results Costs related to alcohol use and related problems exceeded those related to illegal drugs across all counties and most cities in the study. Substantial heterogeneities in costs were observed between cities within counties. Conclusions AOD costs are heterogeneously distributed across counties and cities, reflecting the degree to which different populations are engaged in use and abuse across the state. These findings provide a strong argument for the distribution of treatment and prevention resources proportional to need. PMID:28208210
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
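A minimal bootstrap particle filter conveys the spikes-as-particles idea in conventional form (plain samples rather than spikes; the 1-D target model and noise levels are illustrative, not the paper's electrosensory model):

```python
import numpy as np

# Bootstrap particle filter sketch: the particle cloud plays the role the
# paper assigns to the spatial distribution of spikes, forming a Monte
# Carlo approximation to the Bayesian posterior over target location.
# Model: 1-D random-walk target, noisy position observations (assumed).

def particle_filter(obs, n_part=1000, q=0.1, r=0.5, seed=4):
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_part)       # initial belief
    estimates = []
    for y in obs:
        parts = parts + q * rng.standard_normal(n_part)   # predict step
        w = np.exp(-0.5 * ((y - parts) / r) ** 2)         # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)        # resample
        parts = parts[idx]
        estimates.append(parts.mean())         # posterior-mean estimate
    return np.array(estimates)

true_pos = 1.5                                 # static prey location
rng = np.random.default_rng(5)
obs = true_pos + 0.5 * rng.standard_normal(30) # noisy sensory measurements
est = particle_filter(obs)                     # converges toward true_pos
```

In the paper's proposal the prediction, weighting, and resampling operations are implemented by cerebellar-like circuitry operating on asynchronous spikes rather than by explicit arithmetic.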
2012-09-01
make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first-principles degradation models, estimating model parameters in a state-space framework from experimental data on thermal/electrical stress. ...distribution at a given single time point k_P, and use this for multi-step predictions to EOL. There are several methods that exist for selecting the sigma points.
Deep Water Ocean Acoustics (DWOA): The Philippine Sea, OBSANP, and THAAW Experiments
2015-09-30
the travel times. The ocean state estimates were then re-computed to fit the acoustic travel times as integrals of the sound speed, and... Data on deep-water acoustic propagation and ambient noise have been collected in a wide variety of environments over the last few years with ONR support.
2013-01-01
Wei, ZHANG Xuefeng (Key Laboratory of State Oceanic Administration for Marine Environmental Information Technology, National Marine Data and...)
NASA Technical Reports Server (NTRS)
Murty, A. N.
1976-01-01
A straightforward self-consistent method was developed to estimate solid-state electrostatic potentials, fields, and field gradients in ionic solids. The method is a direct practical application of basic electrostatics to the solid state and also aids understanding of the principles of crystal structure. The necessary mathematical equations, derived from first principles, are presented, and the systematic computational procedure developed to arrive at the solid-state electrostatic field gradient values is given.
Chemical state of fission products in irradiated uranium carbide fuel
NASA Astrophysics Data System (ADS)
Arai, Yasuo; Iwai, Takashi; Ohmichi, Toshihiko
1987-12-01
The chemical state of fission products in irradiated uranium carbide fuel has been estimated by equilibrium calculation using the SOLGASMIX-PV program. Solid-state fission products are distributed among the fuel matrix, ternary compounds, carbides of fission products, and intermetallic compounds within the condensed phases appearing in the irradiated uranium carbide fuel. The chemical forms are influenced by burnup as well as by the stoichiometry of the fuel. The results of the present study largely agree with the experimental ones reported for burnup-simulated carbides.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
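A 1-D toy analogue of such a TV-regularized MAP estimate — synthetic data and made-up weights, not the paper's aquifer model — minimizes a data-misfit term plus a smoothed total-variation penalty:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
# Piecewise-constant "parameter field" observed with Gaussian noise.
truth = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)])
data = truth + 0.3 * rng.standard_normal(truth.size)

lam, eps = 1.0, 1e-6  # TV weight and |.|-smoothing, both illustrative

def objective(x):
    misfit = 0.5 * np.sum((x - data) ** 2)        # Gaussian likelihood term
    tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))   # smoothed TV prior
    return misfit + lam * tv

# MAP estimate: minimize the nonquadratic objective, starting from the data.
x_map = optimize.minimize(objective, data, method='L-BFGS-B').x
```

The TV prior flattens the noise within each segment while preserving the jump, which is the piecewise-continuity behavior the abstract describes.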
A Portuguese value set for the SF-6D.
Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna
2010-08-01
The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality-adjusted-life-year analysis. This study seeks to estimate a system of weights for the SF-6D for Portugal and to compare the results with the UK system weights. A sample of 55 health states defined by the SF-6D has been valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimating equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and their ability to predict the SG values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specification. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of underprediction in some states assigned to poor health. The results are consistent with the UK results. The models estimated provide preference-based quality of life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, the findings should be treated with caution given the small sample size, even though the models have been estimated at the individual level.
τ → f1(1285) π-ν_{τ} decay in the extended Nambu-Jona-Lasinio model
NASA Astrophysics Data System (ADS)
Volkov, M. K.; Pivovarov, A. A.; Osipov, A. A.
2018-04-01
Within the framework of the extended Nambu-Jona-Lasinio model, we calculate the matrix element of the τ → f1(1285) π- ν_{τ} decay, obtain the invariant mass distribution of the f1π-system and estimate the branching ratio Br(τ → f1 π-ν_{τ}) = 4.0× 10^{-4}. Two types of contributions are considered: the contact interaction, and the axial-vector IG(J^{PC})=1-(1^{++}) resonance exchange. The latter includes the ground a1(1260) state and its first radially excited state, a1(1640). The corrections caused by the π-a1 transitions are taken into account. Our estimate is in good agreement with the latest empirical result Br(τ → f1 π- ν_{τ})=(3.9± 0.5)× 10^{-4}. The distribution function obtained for the decay τ → f1(1285) π- ν_{τ} shows a clear signal of the a1(1640) resonance, which should be compared with future experimental data, including our estimate of the decay width Γ (a1(1640) → f1 π)=14.1 MeV.
McCauley, Erin J
2017-12-01
To estimate the cumulative probability (c) of arrest by age 28 years in the United States by disability status, race/ethnicity, and gender. I estimated cumulative probabilities through birth cohort life tables with data from the National Longitudinal Survey of Youth, 1997. Estimates demonstrated that those with disabilities have a higher cumulative probability of arrest (c = 42.65) than those without (c = 29.68). The risk was disproportionately spread across races/ethnicities, with Blacks with disabilities experiencing the highest cumulative probability of arrest (c = 55.17) and Whites without disabilities experiencing the lowest (c = 27.55). Racial/ethnic differences existed by gender as well. There was a similar distribution of disability types across race/ethnicity, suggesting that the racial/ethnic differences in arrest may stem from racial/ethnic inequalities as opposed to differential distribution of disability types. The experience of arrest for those with disabilities was higher than expected. Police officers should understand how disabilities may affect compliance and other behaviors, and likewise how implicit bias and structural racism may affect reactions and actions of officers and the systems they work within in ways that create inequities.
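In a birth-cohort life table, the cumulative probability of a first arrest is one minus the product of the age-specific survival (non-arrest) probabilities. A minimal sketch with hypothetical hazards — not the NLSY97 estimates — is:

```python
def cumulative_probability(hazards):
    """c = 1 - prod_a (1 - h_a), where h_a is the age-specific
    first-arrest hazard among those not yet arrested at age a."""
    surv = 1.0
    for h in hazards:
        surv *= 1.0 - h
    return 1.0 - surv

# Two ages at a 10% hazard each: c = 1 - 0.9 * 0.9 = 0.19
c = cumulative_probability([0.10, 0.10])
```

Group-specific cumulative probabilities (e.g., by disability status or race/ethnicity) follow by running the same product over each group's hazard schedule.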
Hosseinbor, Ameer Pasha; Chung, Moo K; Wu, Yu-Chien; Alexander, Andrew L
2011-01-01
The estimation of the ensemble average propagator (EAP) directly from q-space DWI signals is an open problem in diffusion MRI. Diffusion spectrum imaging (DSI) is one common technique to compute the EAP directly from the diffusion signal, but it is burdened by the large amount of sampling required. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed. One in particular is Diffusion Propagator Imaging (DPI), which is based on a Laplace's equation estimation of the diffusion signal for each shell acquisition. Viewed intuitively in terms of the heat equation, the DPI solution is obtained when the heat distribution between temperature measurements at each shell is at steady state. We propose a generalized extension of DPI, Bessel Fourier Orientation Reconstruction (BFOR), whose solution is based on a heat equation estimation of the diffusion signal for each shell acquisition. That is, the heat distribution between shell measurements is no longer at steady state. In addition to being analytical, the BFOR solution also includes an intrinsic exponential smoothing term. We illustrate the effectiveness of the proposed method by showing results on both synthetic and real MR datasets.
A generalized gamma mixture model for ultrasonic tissue characterization.
Vegas-Sanchez-Ferrero, Gonzalo; Aja-Fernandez, Santiago; Palencia, Cesar; Martin-Fernandez, Marcos
2012-01-01
Several statistical models have been proposed in the literature to describe the behavior of speckles. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of this distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing nature. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
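The paper's own ML procedure is not reproduced here, but the generic numerical route it improves upon can be sketched with SciPy, whose `gengamma.fit` maximizes the likelihood numerically (synthetic data and illustrative parameters, not ultrasound measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "speckle amplitude" sample from a generalized gamma law.
sample = stats.gengamma.rvs(a=2.0, c=1.5, scale=1.0, size=5000, random_state=rng)

# No closed-form ML estimates exist for the generalized gamma, so the
# likelihood is maximized numerically; fixing loc=0 keeps the usual
# three-parameter (a, c, scale) form used for amplitude data.
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(sample, floc=0)

# Goodness of fit can be screened with a Kolmogorov-Smirnov statistic.
ks = stats.kstest(sample, 'gengamma', args=(a_hat, c_hat, 0.0, scale_hat)).statistic
```

The flat, correlated likelihood surface over (a, c, scale) is exactly why this generic optimization is fragile and why the paper proposes a more robust estimation methodology.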
NASA Technical Reports Server (NTRS)
Andrews, J.
1976-01-01
ECON's distribution benefits model has been applied to worldwide distribution of corn, rye, oats, barley, soybeans, and sugar, and to domestic distribution of potatoes. The results indicate that a LANDSAT system with thematic mapper might produce benefits to the United States of about $119 million per year, due to more efficient distribution of these commodities. The benefits to the rest of the world have also been calculated, with a breakdown between trade benefits and those associated with internal use patterns. By far the greatest part of the estimated benefits are assigned to corn, with smaller benefits assigned to soybeans and the small grains (rye, oats, and barley).
The ghost shrimp (Neotrypaea californiensis) is a burrower with a wide distribution along the United States Pacific Coast. Our study used genetic analysis to estimate the source populations of larvae recruiting into estuaries to allow a greater understanding ...
EXPOSURE TO PESTICIDES BY MEDIUM AND ROUTE: THE 90TH PERCENTILE AND RELATED UNCERTAINTIES
This study investigates distributions of exposure to chlorpyrifos and diazinon using the database generated in the state of Arizona by the National Human Exposure Assessment Survey (NHEXAS-AZ). Exposure to pesticide and associated uncertainties are estimated using probabilistic...
77 FR 33156 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
... proposed information collection. This is a new collection to provide state-level estimates of the prevalence and geographic distribution of School Food Authorities (SFAs) conducting Farm to School activities...://www.regulations.gov ), which provides online instructions for submitting comments electronically. FOR...
Lee, Hyunyeol; Sohn, Chul-Ho; Park, Jaeseok
2017-07-01
To develop a current-induced, alternating reversed dual-echo-steady-state-based magnetic resonance electrical impedance tomography for joint estimation of tissue relaxation and electrical properties. The proposed method reverses the readout gradient configuration of the conventional sequence, such that steady-state free precession (SSFP)-ECHO is produced earlier than the SSFP free induction decay (FID), while alternating current pulses are applied in between the two SSFP signals to secure high sensitivity of SSFP-FID to the injected current. Additionally, the alternating reversed dual-echo-steady-state signals are modulated by employing variable flip angles over two orthogonal injections of current pulses. Ratiometric signal models are analytically constructed, from which T1, T2, and current-induced Bz are jointly estimated by solving a nonlinear inverse problem for conductivity reconstruction. Numerical simulations and experimental studies are performed to investigate the feasibility of the proposed method in estimating relaxation parameters and conductivity. The proposed method, compared with conventional magnetic resonance electrical impedance tomography, enables rapid data acquisition and simultaneous estimation of T1, T2, and current-induced Bz, yielding a comparable level of signal-to-noise ratio in the parameter estimates while retaining a relative conductivity contrast. We successfully demonstrated the feasibility of the proposed method in jointly estimating tissue relaxation parameters as well as conductivity distributions. It can be a promising, rapid imaging strategy for quantitative conductivity estimation. Magn Reson Med 78:107-120, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Distribution of soil organic carbon in the conterminous United States
Bliss, Norman B.; Waltman, Sharon; West, Larry T.; Neale, Anne; Mehaffey, Megan; Hartemink, Alfred E.; McSweeney, Kevin M.
2014-01-01
The U.S. Soil Survey Geographic (SSURGO) database provides detailed soil mapping for most of the conterminous United States (CONUS). These data have been used to formulate estimates of soil carbon stocks, and have been useful for environmental models, including plant productivity models, hydrologic models, and ecological models for studies of greenhouse gas exchange. The data were compiled by the U.S. Department of Agriculture Natural Resources Conservation Service (NRCS) from 1:24,000-scale or 1:12,000-scale maps. It was found that the total soil organic carbon stock in CONUS to 1 m depth is 57 Pg C and for the total profile is 73 Pg C, as estimated from SSURGO with data gaps filled from the 1:250,000-scale Digital General Soil Map. We explore the non-linear distribution of soil carbon on the landscape and with depth in the soil, and the implications for sampling strategies that result from the observed soil carbon variability.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection by using standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which introduces a degradation of the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states where a local laser is employed that is not locked to the received signal, but a postdetection phase correction is applied. Here the reduction of the secure key rate caused by laser phase noise, for both individual and collective attacks, is analytically evaluated, and a scheme of pilot-assisted phase estimation is proposed, outlining the tradeoff in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
Pérez-González, Abel; Baptista, Renner L. C.
2016-01-01
Background: There has never been any published work about the diversity of spiders in the city of Rio de Janeiro using analytical tools to measure diversity. The only available records for spider communities in nearby areas indicate 308 species in the National Park of Tijuca and 159 species in Marapendi Municipal Park. These numbers are based on a rapid survey and on a one-year survey, respectively. New information: This study provides a more thorough understanding of how the spider species are distributed at Pedra Branca State Park. We report a total of 14,626 spider specimens recorded from this park, representing 49 families and 373 species or morphospecies, including at least 73 undescribed species. Also, the distribution range of 45 species was expanded, and species accumulation curves estimate a minimum of 388 species (Bootstrap) and a maximum of 468 species (Jackknife2) for the sampled areas. These estimates indicate that the spider diversity may be higher than observed. PMID:26929710
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. 
These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
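The model-selection step described above can be illustrated (with synthetic overdispersed counts, not the USFWS survey data) by fitting Poisson and negative binomial models to the same counts and comparing AIC:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
# Synthetic overdispersed "flocks per transect" counts (mean 9, variance 90).
counts = stats.nbinom.rvs(1.0, 0.1, size=2000, random_state=rng)

# Poisson: the ML rate is the sample mean (1 free parameter).
ll_pois = stats.poisson.logpmf(counts, counts.mean()).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# Negative binomial: maximize the likelihood over (n, p) numerically (2 parameters).
def nb_nll(params):
    n, p = params
    if n <= 0 or not 0 < p < 1:
        return np.inf
    return -stats.nbinom.logpmf(counts, n, p).sum()

res = optimize.minimize(nb_nll, x0=[1.0, 0.5], method='Nelder-Mead')
aic_nb = 2 * 2 + 2 * res.fun

# For overdispersed counts the negative binomial wins: aic_nb < aic_pois.
```

The same recipe extends to the zero-inflated and positive (truncated) variants the paper compares; only the log-likelihood changes, while the AIC bookkeeping stays identical.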
Air quality impacts of projections of natural gas-fired distributed generation
NASA Astrophysics Data System (ADS)
Horne, Jeremy R.; Carreras-Sospedra, Marc; Dabdub, Donald; Lemar, Paul; Nopmongcol, Uarporn; Shah, Tejas; Yarwood, Greg; Young, David; Shaw, Stephanie L.; Knipping, Eladio M.
2017-11-01
This study assesses the potential impacts on emissions and air quality from the increased adoption of natural gas-fired distributed generation of electricity (DG), including displacement of power from central power generation, in the contiguous United States. The study includes four major tasks: (1) modeling of distributed generation market penetration; (2) modeling of central power generation systems; (3) modeling of spatially and temporally resolved emissions; and (4) photochemical grid modeling to evaluate the potential air quality impacts of increased DG penetration, which includes both power-only DG and combined heat and power (CHP) units, for 2030. Low and high DG penetration scenarios estimate the largest penetration of future DG units in three regions - New England, New York, and California. Projections of DG penetration in the contiguous United States estimate 6.3 GW and 24 GW of market adoption in 2030 for the low DG penetration and high DG penetration scenarios, respectively. High DG penetration (all of which is natural gas-fired) serves to offset 8 GW of new natural gas combined cycle (NGCC) units, and 19 GW of solar photovoltaic (PV) installations by 2030. In all scenarios, air quality in the central United States and the northwest remains unaffected as there is little to no DG penetration in those states. California and several states in the northeast are the most impacted by emissions from DG units. Peak increases in maximum daily 8-h average ozone concentrations exceed 5 ppb, which may impede attainment of ambient air quality standards. Overall, air quality impacts from DG vary greatly based on meteorological conditions, proximity to emissions sources, the number and type of DG installations, and the emissions factors used for DG units.
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that models the bias field in the transformed domain. The bias field prior is then fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
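The local-entropy weight is the Shannon entropy of the grey-level distribution in a sliding window; a minimal illustrative implementation (not the authors' code) could look like:

```python
import numpy as np

def local_entropy(img, win=3):
    """Shannon entropy of the grey-level distribution in each win x win
    neighborhood: flat regions get weight ~0, textured regions more."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win].ravel()
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()          # empirical grey-level pmf
            out[i, j] = -np.sum(p * np.log(p))
    return out
```

Using this map as the weight in the LGDF integral down-weights homogeneous regions, which is what lets the energy concentrate on structure while the bias field absorbs the smooth intensity drift.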
Sexual differentiation in the distribution potential of northern jaguars (Panthera onca)
Boydston, Erin E.; Lopez Gonzalez, Carlos A.
2005-01-01
We estimated the potential geographic distribution of jaguars in the southwestern United States and northwestern Mexico by modeling the jaguar ecological niche from occurrence records. We modeled separately the distribution of males and females, assuming records of females probably represented established home ranges while male records likely included dispersal movements. The predicted distribution for males was larger than that for females. Eastern Sonora appeared capable of supporting male and female jaguars, with potential range expansion into southeastern Arizona. New Mexico and Chihuahua contained environmental characteristics primarily limited to the male niche and thus may be areas into which males occasionally disperse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
The report is divided into the following sections: (1) Introduction; (2) Conclusions and Recommendations; (3) Existing Conditions and Facilities for a Fuel Distribution Center; (4) Pacific Ocean Regional Tuna Fisheries and Resources; (5) Fishing Effort in the FSMEEZ 1992-1994; (6) Current Transshipping Operations in the Western Pacific Ocean; (7) Current and Probable Bunkering Practices of United States, Japanese, Korean, and Taiwanese Offshore-Based Vessels Operating in FSM and Adjacent Waters; (8) Shore-Based Fish-Handling/Processing; (9) Fuels Forecast; (10) Fuel Supply, Storage and Distribution; (11) Cost Estimates; (12) Economic Evaluation of Fuel Supply, Storage and Distribution.
The Distribution of Climate Change Public Opinion in Canada.
Mildenberger, Matto; Howe, Peter; Lachapelle, Erick; Stokes, Leah; Marlon, Jennifer; Gravelle, Timothy
2016-01-01
While climate scientists have developed high resolution data sets on the distribution of climate risks, we still lack comparable data on the local distribution of public climate change opinions. This paper provides the first effort to estimate local climate and energy opinion variability outside the United States. Using a multi-level regression and post-stratification (MRP) approach, we estimate opinion in federal electoral districts and provinces. We demonstrate that a majority of the Canadian public consistently believes that climate change is happening. Belief in climate change's causes varies geographically, with more people attributing it to human activity in urban as opposed to rural areas. Most prominently, we find majority support for carbon cap and trade policy in every province and district. By contrast, support for carbon taxation is more heterogeneous. Compared to the distribution of US climate opinions, Canadians believe climate change is happening at higher levels. This new opinion data set will support climate policy analysis and climate policy decision making at national, provincial and local levels.
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
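The transformation condition at issue is the standard change of variables for probability densities. For a map $g$ from interior (equation-of-state) parameters $\theta$ to exterior spacetime parameters $\rho = g(\theta)$, reweighting a posterior requires

```latex
p_{\theta}(\theta \mid d) \;=\; p_{\rho}\!\bigl(g(\theta) \mid d\bigr)\,
\bigl|\det J_{g}(\theta)\bigr| ,
```

which presupposes that $g$ is a differentiable bijection between spaces of equal dimension. A piecewise-polytropic equation-of-state map into (mass, radius) space need not satisfy this, which is the violation the authors warn of.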
On parametrised cold dense matter equation of state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Chiral perturbation theory and nucleon-pion-state contaminations in lattice QCD
NASA Astrophysics Data System (ADS)
Bär, Oliver
2017-05-01
Multiparticle states with additional pions are expected to be a non-negligible source of excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory can be employed to calculate the contamination due to two-particle nucleon-pion states in various nucleon observables. Leading-order results are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions (quark momentum fraction, helicity and transversity moment). Taking into account phenomenological results for the charges and moments, the impact of the nucleon-pion states on lattice estimates for these observables can be assessed. The nucleon-pion-state contribution results in an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. The source-sink separations accessible in contemporary lattice simulations are found to be too small for chiral perturbation theory to be directly applicable.
Wei, Shaoceng; Kryscio, Richard J.
2015-01-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk (Figure 1). Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001
Wei, Shaoceng; Kryscio, Richard J
2016-12-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
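The waiting-time assumption above (exponential sojourns from the baseline state, Weibull sojourns in the other transient states) can be sketched as a simple sampler; the state names and parameter values here are illustrative placeholders, not those fitted to the Nun Study:

```python
import random

def sample_waiting_time(state, params, rng=random):
    """Sample a sojourn time: exponential from the baseline state, Weibull otherwise.

    `params` maps each state to hypothetical parameters; the names are
    illustrative, not the values estimated in the Nun Study analysis.
    """
    p = params[state]
    if state == "intact":  # baseline state: memoryless exponential waiting time
        return rng.expovariate(p["rate"])
    # transient cognitive states: Weibull(scale, shape) waiting time
    return rng.weibullvariate(p["scale"], p["shape"])

params = {
    "intact": {"rate": 0.2},                 # mean sojourn of 5 years in baseline
    "mci":    {"scale": 3.0, "shape": 1.5},  # shape > 1: hazard increases with time
}
```

Note that `random.weibullvariate` takes the scale parameter first and the shape second, which is easy to get backwards.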
Horvath, Isabelle R.
2018-01-01
The recently derived steady-state generalized Danckwerts age distribution is extended to unsteady-state conditions. For three different wind speeds used by researchers on air–water heat exchange on the Heidelberg Aeolotron, calculations reveal that the distribution has a sharp peak during the initial moments, but flattens out and acquires a bell-shaped character with process time, with the time taken to attain a steady-state profile being a strong and inverse function of wind speed. With increasing wind speed, the age distribution narrows significantly, its skewness decreases and its peak becomes larger. The mean eddy renewal time increases linearly with process time initially but approaches a final steady-state value asymptotically, which decreases dramatically with increased wind speed. Using the distribution to analyse the transient absorption of a gas into a large body of liquid, assuming negligible gas-side mass-transfer resistance, estimates are made of the gas-absorption and dissolved-gas transfer coefficients for oxygen absorption in water at 25°C for the three different wind speeds. Under unsteady-state conditions, these two coefficients show an inverse behaviour, indicating a heightened accumulation of dissolved gas in the surface elements, especially during the initial moments of absorption. However, the two mass-transfer coefficients start merging together as the steady state is approached. Theoretical predictions of the steady-state mass-transfer coefficient or transfer velocity are in fair agreement (average absolute error of prediction = 18.1%) with some experimental measurements of the same for the nitrous oxide–water system at 20°C that were made in the Heidelberg Aeolotron. PMID:29892429
NASA Astrophysics Data System (ADS)
Zhao, G.; Chu, R.; Li, X.; Zhang, T.; Shen, J.; Wu, Z.
2009-09-01
During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, a state-of-the-art optical laser instrument. Because observation data on the Qinghai-Tibet Plateau are limited, raindrop size distributions there had not been well modeled. We used the measured raindrop size distributions to improve the rain rate estimators of meteorological radar, in order to obtain more accurate rain rate data in this area. We obtained the relationship between the terminal velocity v (m/s) and the diameter D (mm) of a raindrop: v(D) = 4.67 D^0.53. Four types of estimators for X-band polarimetric radar were then examined. The simulation results show that the classical estimator R(Z) is the most sensitive to variations in DSD and that R(KDP, Z, ZDR) is the best estimator of the rain rate. The low sensitivity of the estimator R(KDP, Z, ZDR) to variations in DSD can be explained by the following facts. The difference in the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to KDP, is proportional to the 3rd power of the drop diameter, whereas the backscatter cross section, which contributes to Z, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, KDP is less sensitive to DSD variations than Z.
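As a numerical illustration, the fitted fall-speed relation and a rain-rate computation from a discretized DSD might look as follows. The integration formula is the standard textbook one (D in mm, N(D) in m^-3 mm^-1, v in m/s, R in mm/h), not a formula given in the abstract:

```python
import math

def terminal_velocity(d_mm):
    """Fitted fall speed (m/s) of a raindrop of diameter d_mm (mm): v(D) = 4.67 * D**0.53."""
    return 4.67 * d_mm ** 0.53

def rain_rate(dsd, bin_width_mm):
    """Rain rate (mm/h) from a discretized drop size distribution.

    dsd: list of (diameter_mm, number_concentration) pairs, concentration in m^-3 mm^-1.
    Standard relation: R = (pi/6) * 3.6e-3 * sum N(D) * D^3 * v(D) * dD.
    """
    return (math.pi / 6) * 3.6e-3 * sum(
        n * d ** 3 * terminal_velocity(d) * bin_width_mm for d, n in dsd
    )
```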
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused result of each radar's estimation is fed into the extended Kalman filter (EKF) to complete the first filtering. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
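The first-stage filtering rests on a conventional Kalman predict/update cycle. A minimal scalar sketch of that cycle is shown below; it is not the paper's full EKF or SePDAF, and the noise parameters are made up for illustration:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=1.0):
    """One linear Kalman predict/update step on a scalar state.

    x, P: prior state estimate and variance; z: new measurement.
    F, Q: state transition and process noise; H, R: measurement model and noise.
    All defaults are illustrative, not values from the paper.
    """
    # predict: propagate the state and inflate the variance by process noise
    x_pred = F * x
    P_pred = F * P * F + Q
    # update: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

In the paper's scheme, the result of this first filtering stage then serves as the prior for the SePDAF association step over the ambiguous angle measurements.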
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
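The hybrid idea, a global evolutionary search seeding a Newton-type local refinement, can be sketched on a toy one-parameter model. Everything below (the linear head model, the bounds, the GA operators) is invented for illustration and is not the paper's actual groundwater formulation:

```python
import random

def misfit(k, heads_obs, model):
    """Sum-of-squares mismatch between modeled and observed hydraulic heads."""
    return sum((model(k, i) - h) ** 2 for i, h in enumerate(heads_obs))

def ga_then_newton(heads_obs, model, bounds=(0.1, 10.0), pop=40, gens=15, seed=1):
    """Toy hybrid: a crude GA proposes a starting value, then Newton steps with
    numerical derivatives refine it (standing in for the truncated-Newton stage)."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: misfit(c, heads_obs, model))
        elite = population[: pop // 4]
        # mutate elites to refill the population, clipped to bounds
        population = elite + [
            min(hi, max(lo, rng.choice(elite) + rng.gauss(0, 0.2)))
            for _ in range(pop - len(elite))
        ]
    k = min(population, key=lambda c: misfit(c, heads_obs, model))
    h = 1e-5
    for _ in range(30):  # Newton refinement of the 1-D misfit
        f = lambda x: misfit(x, heads_obs, model)
        g = (f(k + h) - f(k - h)) / (2 * h)
        H = (f(k + h) - 2 * f(k) + f(k - h)) / h ** 2
        if H <= 0 or abs(g) < 1e-12:
            break
        k -= g / H
    return k
```

With a quadratic misfit (as in this toy linear model) the Newton stage converges essentially in one step; the GA's role is to land the search near the right basin before local refinement begins.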
NASA Astrophysics Data System (ADS)
Grenn, Michael W.
This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy H_R is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. 
The information I increases as H_R and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. The H_R may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on H_R, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in H_R and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from their current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for estimation of ΔE.
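The central quantity, the entropy of the distribution of R requirements over N quality levels, reduces to a Shannon entropy over level occupancies. The sketch below is a plain Shannon-entropy calculation, not the REF's quantum-statistics estimator of the arrangement count P:

```python
import math

def requirements_entropy(counts):
    """Shannon entropy (bits) of R requirements distributed over quality levels.

    counts[i] is the number of requirements at quality level i. This is an
    illustrative stand-in for the REF's H_R, not its exact estimator.
    """
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# As requirements accumulate quality attributes they concentrate in the top
# level, and H_R falls toward the desired minimum-entropy end state.
early = [40, 30, 20, 10]   # requirements spread across levels: high H_R
late  = [0, 0, 5, 95]      # nearly all at the highest level: low H_R
```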
Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.
2018-05-11
Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.
The art and science of weed mapping
Barnett, David T.; Stohlgren, Thomas J.; Jarnevich, Catherine S.; Chong, Geneva W.; Ericson, Jenny A.; Davern, Tracy R.; Simonson, Sara E.
2007-01-01
Land managers need cost-effective and informative tools for non-native plant species management. Many local, state, and federal agencies adopted mapping systems designed to collect comparable data for the early detection and monitoring of non-native species. We compared mapping information to statistically rigorous, plot-based methods to better understand the benefits and compatibility of the two techniques. Mapping non-native species locations provided a species list, associated species distributions, and infested area for subjectively selected survey sites. The value of this information may be compromised by crude estimates of cover and incomplete or biased estimations of species distributions. Incorporating plot-based assessments guided by a stratified-random sample design provided a less biased description of non-native species distributions and increased the comparability of data over time and across regions for the inventory, monitoring, and management of non-native and native plant species.
Distribution and abundance of snowy plovers in eastern North America, the Caribbean, and the Bahamas
Gorman, Leah; Haig, Susan M.
2002-01-01
Snowy Plovers (Charadrius alexandrinus) are small, partially migrant shorebirds that are broadly distributed across North America. Snowy Plover distribution west of the Rocky Mountains has been well described. However, distribution and abundance east of the Rocky Mountains has not received much attention despite current status and ESA listing concerns for Snowy Plovers in the southeastern United States and the Caribbean. Thus, a first step in developing a monitoring program for Snowy Plovers is to understand the species' distribution. We summarize information on distribution and abundance of Snowy Plovers in the eastern United States, Caribbean, and Bahamas. Breeding and winter distribution maps for the continental United States were generated from a database of 3563 records from 388 sites in continental North America constructed from International Shorebird Survey (ISS), Christmas Bird Count (CBC), unpublished field data, and published accounts. Comparison of maximum counts per site (1980–present) indicated the number of breeding Snowy Plovers was greatest in Kansas and Oklahoma, while the greatest number of wintering birds occurred in the Laguna Madre of Texas and Mexico. Snowy Plovers concentrate at sites in Oklahoma and Texas during migration, with higher concentrations on the upper Texas coast in spring compared to fall migration. Data regarding historic abundance and trends are limited but suggest that Snowy Plovers in the eastern United States may have experienced regional population declines and may have suffered a range contraction in Texas. Serious concerns about the conservation status of Snowy Plovers in the eastern United States, the Caribbean, and the Bahamas indicate an immediate need for systematic surveys and up-to-date population estimates.
Quantum hacking: Saturation attack on practical continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Qin, Hao; Kumar, Rupesh; Alléaume, Romain
2016-07-01
We identify and study a security loophole in continuous-variable quantum key distribution (CVQKD) implementations, related to the imperfect linearity of the homodyne detector. By exploiting this loophole, we propose an active side-channel attack on the Gaussian-modulated coherent-state CVQKD protocol combining an intercept-resend attack with an induced saturation of the homodyne detection on the receiver side (Bob). We show that an attacker can bias the excess noise estimation by displacing the quadratures of the coherent states received by Bob. We propose a saturation model that matches experimental measurements on the homodyne detection and use this model to study the impact of the saturation attack on parameter estimation in CVQKD. We demonstrate that this attack can bias the excess noise estimation beyond the null key threshold for any system parameter, thus leading to a full security break. If we consider an additional criterion imposing that the channel transmission estimation should not be affected by the attack, then the saturation attack can only be launched if the attenuation on the quantum channel is sufficient, corresponding to attenuations larger than approximately 6 dB. We moreover discuss the possible countermeasures against the saturation attack and propose a countermeasure based on Gaussian postselection that can be implemented by classical postprocessing and may allow one to distill the secret key when the raw measurement data are partly saturated.
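A rough sketch of the shot-noise-normalized parameter estimation that such an attack biases is given below. It assumes the simple channel model y = sqrt(T)·x + n with Var(n) = 1 + T·ξ in shot-noise units (conventions vary across CVQKD implementations, so treat this as a textbook-style caricature rather than the protocol's actual estimator):

```python
import math
import random

def estimate_channel(xs, ys):
    """Estimate transmission T and excess noise xi from Alice's modulations xs
    and Bob's quadrature measurements ys, assuming y = sqrt(T)*x + n with
    Var(n) = 1 + T*xi (shot-noise units). Illustrative sketch only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    T = (cxy / vx) ** 2
    xi = (vy - T * vx - 1) / T
    return T, xi

# simulate an honest (attack-free) channel with assumed parameters
rng = random.Random(7)
T_true, xi_true, Va = 0.5, 0.05, 5.0
xs = [rng.gauss(0, math.sqrt(Va)) for _ in range(50000)]
ys = [math.sqrt(T_true) * x + rng.gauss(0, math.sqrt(1 + T_true * xi_true)) for x in xs]
```

The attack described in the abstract works precisely because displacing Bob's quadratures into the detector's saturated region distorts `vy` and `cxy`, and hence the `xi` recovered by such second-moment estimators.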
Sharing out NASA's spoils. [economic benefits of U.S. space program
NASA Technical Reports Server (NTRS)
Bezdek, Roger H.; Wendling, Robert M.
1992-01-01
The economic benefits of NASA programs are discussed. Emphasis is given to an analysis of indirect economic benefits which estimates the effect of NASA programs on employment, personal income, corporate sales and profits, and government tax revenues in the U.S. and in each state. Data are presented that show that NASA programs have widely varying multipliers by industry and that illustrate the distribution of jobs by industry as well as the distribution of sales.
NASA Astrophysics Data System (ADS)
Baral, P.; Haq, M. A.; Mangan, P.
2017-12-01
The impacts of climate change on the extent of permafrost degradation in the Himalayas and its effect upon the carbon cycle and ecosystem changes are not well understood due to the lack of historical ground-based observations. We have used high resolution optical and satellite radar observations and applied empirical-statistical methods for the estimation of the spatial and altitudinal limits of permafrost distribution in the North-Western Himalayas. Visual interpretations of morphological characteristics using high resolution optical images have been used for mapping, identification and classification of distinctive geomorphological landforms. Subsequently, we have created a detailed inventory of different types of rock glaciers and studied the contribution of topo-climatic factors to their occurrence and distribution through logistic regression modelling. This model establishes the relationship between the presence of permafrost and topo-climatic factors such as Mean Annual Air Temperature (MAAT), Potential Incoming Solar Radiation (PISR), altitude, aspect and slope. This relationship has been used to estimate the distributed probability of permafrost occurrence within a GIS environment. The ability of the model to predict permafrost occurrence has been tested using locations of mapped rock glaciers and the area under the Receiver Operating Characteristic (ROC) curve. Additionally, interferometric properties of Sentinel and ALOS PALSAR datasets are used for the identification and assessment of rock glacier activity in the region.
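A logistic regression of permafrost presence on topo-climatic predictors has the familiar sigmoid form; the coefficient values below are purely illustrative placeholders, not the values fitted for the North-Western Himalayas:

```python
import math

def permafrost_probability(maat_c, pisr, elevation_m, coef):
    """Logistic-regression probability of permafrost from topo-climatic predictors.

    maat_c: mean annual air temperature (deg C); pisr: potential incoming solar
    radiation; elevation_m: altitude. Coefficients are hypothetical.
    """
    z = (coef["intercept"]
         + coef["maat"] * maat_c        # negative coefficient: colder -> more likely
         + coef["pisr"] * pisr          # more insolation -> less likely
         + coef["elev"] * elevation_m)  # higher -> more likely
    return 1.0 / (1.0 + math.exp(-z))

coef = {"intercept": -1.0, "maat": -0.8, "pisr": -0.002, "elev": 0.0005}
```

Evaluated over a DEM-derived grid of predictors, this yields the distributed occurrence-probability map described in the abstract.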
PisCES: Pis(cine) Community Estimation Software
PisCES predicts a fish community for any NHD-Plus stream reach in the conterminous United States. PisCES utilizes HUC-based distributional information for over 1,000 native and non-native species obtained from NatureServe, the USGS, and Peterson Field Guide to Freshwater Fishes o...
Current status of Marek's disease in the united states and worldwide
USDA-ARS?s Scientific Manuscript database
A questionnaire was widely distributed in 2011 to estimate the global prevalence of Marek’s disease and gain a better understanding of current control strategies and future concerns. A total of 104 questionnaires were returned representing 108 countries from sources including national branch secret...
State of Technology for Rehabilitation of Water Distribution Systems
The impact that the lack of investment in water infrastructure will have on the performance of aging underground infrastructure over time is well documented and the needed funding estimates range as high as $325 billion over the next 20 years. With the current annual replacement...
DOT National Transportation Integrated Search
1993-01-01
This paper describes the current structure of transportation finance in the Commonwealth. The financial structure is made up of estimated revenues and recommended allocations. We present comparisons of the shares of state and federal transportation r...
DOT National Transportation Integrated Search
2009-03-01
Several LTRC funded studies have evaluated fatigue damage from heavy truck loads (GVW 100,000 lb.) on Louisiana Bridges.(1-4) The final reports for these studies recommended a field investigation to verify the theoretical studies. LTRC ...
Optimizing Distribution of Pandemic Influenza Antiviral Drugs
Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.
2015-01-01
We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
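The access-maximization step can be caricatured as a greedy maximum-coverage selection of dispensing sites. The authors' actual method is a data-driven optimization over pharmacy networks, so the following is only a hedged sketch with invented data:

```python
def greedy_dispensing_sites(coverage, k):
    """Pick up to k sites greedily to maximize the covered population units.

    coverage: dict mapping a candidate site to the set of ZIP codes (or other
    population units) whose residents would travel to it. Greedy selection
    gives the classic (1 - 1/e) approximation for maximum coverage.
    """
    chosen, covered = [], set()
    for _ in range(k):
        best = max(coverage, key=lambda s: len(coverage[s] - covered), default=None)
        if best is None or not (coverage[best] - covered):
            break  # no site adds new coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```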
López-Collado, José; Isabel López-Arroyo, J; Robles-García, Pedro L; Márquez-Santos, Magdalena
2013-01-01
The Asian citrus psyllid, Diaphorina citri Kuwayama (Hemiptera: Psyllidae), is an introduced pest in Mexico and a vector of huanglongbing, a lethal citrus disease. Estimations of the habitat distribution and population growth rates of D. citri are required to establish regional and areawide management strategies and can be used as pest risk analysis tools. In this study, the habitat distribution of D. citri in Mexico was computed with MaxEnt, an inductive, machine-learning program that uses bioclimatic layers and point location data. Geographic distributions of development and population growth rates were determined by fitting a temperature-dependent, nonlinear model and projecting the rates over the target area, using the annual mean temperature as the predictor variable. The results showed that the most suitable regions for habitat of D. citri comprise the Gulf of Mexico states, Yucatán Peninsula, and areas scattered throughout the Pacific coastal states. Less suitable areas occurred in northern and central states. The most important predictor variables were related to temperature. Development and growth rates had a distribution wider than habitat, reaching some of the northern states of México. Habitat, development, and population growth rates were correlated to each other and with the citrus producing area. These relationships indicated that citrus producing states are within the most suitable regions for the occurrence, development, and population growth of D. citri, therefore increasing the risk of huanglongbing dispersion.
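Temperature-dependent, nonlinear development-rate models of the kind fitted here are often of Brière-1 form, r(T) = a·T·(T − T0)·sqrt(Tmax − T) on (T0, Tmax) and zero outside. The coefficients below are placeholders, not the values estimated for D. citri:

```python
import math

def briere_rate(t_c, a=2.8e-5, t0=10.0, t_max=37.0):
    """Brière-1 development rate r(T) = a*T*(T - T0)*sqrt(Tmax - T).

    t_c: temperature in deg C; a, t0 (lower threshold) and t_max (upper
    threshold) are hypothetical parameters, zero rate outside (t0, t_max).
    """
    if t_c <= t0 or t_c >= t_max:
        return 0.0
    return a * t_c * (t_c - t0) * math.sqrt(t_max - t_c)
```

Projecting such a rate over a gridded annual-mean-temperature layer gives the geographic distribution of development and growth rates described in the abstract.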
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. 
We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
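For discretized probability distributions, the entropy and Kullback-Leibler quantities involved can be computed directly; a minimal sketch (the distributions are invented for illustration) is:

```python
import math

def entropy_bits(p):
    """Shannon entropy of a discretized probability distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p || q) in bits (assumes q > 0 where p > 0)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Information gained by filtering = prior entropy minus posterior entropy.
# Comparing this gain to the state-observation mutual information bounds the
# inefficiency attributable to the filter's assumptions.
prior     = [0.25, 0.25, 0.25, 0.25]
posterior = [0.70, 0.20, 0.05, 0.05]
info_gained = entropy_bits(prior) - entropy_bits(posterior)
```

Splitting a divergence like `kl_bits` into partial sums relative to a validation target distribution is the idea behind separating "good" from "bad" information.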
Farooqui, Habib; Jit, Mark; Heymann, David L.; Zodpey, Sanjay
2015-01-01
The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of potential impact fraction developed by CHERG for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were state-specific under-5 population, state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3–3.9 million) episodes of severe pneumonia and 0.35 million (0.31–0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but contribute 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% children, 16% cases, 22% deaths), Madhya Pradesh (6.6% children, 9% cases, 12% deaths), and Rajasthan (6.6% children, 8% cases, 11% deaths). Further, we estimated that 0.56 million (0.49–0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92–119 thousand) pneumococcal deaths occurred in India. The top contributors to India's pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. 
Our results highlight the need to improve access to care and increase coverage and equity of pneumonia preventing vaccines in states with high pneumonia burden. PMID:26086700
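The potential-impact-fraction concept underlying the model has a compact standard form, PIF = (Σ pᵢRRᵢ − Σ p′ᵢRRᵢ) / Σ pᵢRRᵢ over exposure strata. The sketch below uses invented prevalences and relative risks, not the study's state-specific inputs:

```python
def potential_impact_fraction(prevalence, rel_risk, counterfactual):
    """Standard potential impact fraction over exposure strata.

    prevalence: observed exposure distribution p_i (should sum to 1, including
    an unexposed stratum with RR = 1); rel_risk: RR_i per stratum;
    counterfactual: alternative exposure distribution p'_i.
    """
    observed = sum(p * rr for p, rr in zip(prevalence, rel_risk))
    counter = sum(p * rr for p, rr in zip(counterfactual, rel_risk))
    return (observed - counter) / observed
```

Applying such fractions to state-specific risk-factor prevalences is how the model apportions national pneumonia incidence across states.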
Smart darting diffusion Monte Carlo: Applications to lithium ion-Stockmayer clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, H. M.; Jake, L. C.; Curotto, E., E-mail: curotto@arcadia.edu
2016-05-07
In a recent investigation [K. Roberts et al., J. Chem. Phys. 136, 074104 (2012)], we have shown that, for a sufficiently complex potential, the Diffusion Monte Carlo (DMC) random walk can become quasiergodic, and we have introduced smart darting-like moves to improve the sampling. In this article, we systematically characterize the bias that smart darting moves introduce in the estimate of the ground state energy of a bosonic system. We then test a simple approach to eliminate such bias completely from the results. The approach is applied to the determination of the ground state of lithium ion–n-dipole clusters in the n = 8–20 range. For these, the smart darting diffusion Monte Carlo simulations find the same ground state energy and mixed-distribution as the traditional approach for n < 14. In larger systems we find that while the ground state energies agree quantitatively with or without smart darting moves, the mixed-distributions can be significantly different. Some evidence is offered to conclude that introducing smart darting-like moves in traditional DMC simulations may produce a more reliable ground state mixed-distribution.
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies in the low frequency range of complex systems, particularly those with low damping values and different damping distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped-parameter linear systems under random or deterministic steady-state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45-degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.
State-space modeling of population sizes and trends in Nihoa Finch and Millerbird
Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.
2016-01-01
Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.
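The state-space idea applied above to the legacy strip-transect counts can be sketched with a local-level (random-walk) model, whose Kalman filter shrinks noisy yearly abundance estimates toward a smoother trajectory. The counts and variance settings below are invented for illustration, not Nihoa data.

```python
import numpy as np

# Hedged sketch: local-level state-space model x_t = x_{t-1} + w_t,
# y_t = x_t + v_t, filtered with a scalar Kalman filter. Process and
# observation variances (q, r) are illustrative assumptions.

def local_level_filter(y, q=50.0**2, r=200.0**2, p0=1e6):
    """Return the filtered state sequence for a noisy count series."""
    x, p = y[0], p0
    filtered = []
    for obs in y:
        p = p + q                 # predict: random-walk state
        k = p / (p + r)           # Kalman gain
        x = x + k * (obs - x)     # update toward the observation
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# invented yearly abundance estimates with large sampling noise
counts = np.array([900.0, 1400.0, 700.0, 1200.0, 1000.0, 1600.0, 800.0])
smooth = local_level_filter(counts)
```

With observation variance much larger than process variance, the filter attributes most year-to-year jumps to sampling error, which is exactly how such models damp biologically unrealistic between-year variability.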
Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometers' measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. Observability analysis is performed to determine the conditions for an observable state-space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state-space model and the other a reduced model without bias parameters.
Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometers' measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. Observability analysis is performed to determine the conditions for an observable state-space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state-space model and the other a reduced model without bias parameters. PMID:22778586
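The fusion step described above, a Kalman filter recovering angular velocity from an AIV that contains angular acceleration and quadratic angular-velocity terms, can be illustrated with a one-axis toy version. This is not the paper's multi-axis formulation and omits the bias-state augmentation; it is a hedged sketch on synthetic data.

```python
import numpy as np

# Hedged one-axis EKF sketch: the angular-acceleration component of the
# AIV drives the prediction step, and the quadratic term z = omega**2
# serves as the (nonlinear) measurement. All noise levels are invented.

def gf_imu_ekf(alpha_meas, omega2_meas, dt=0.01, q=1e-3, r=1e-2):
    omega, p = 0.0, 1.0
    estimates = []
    for a, z in zip(alpha_meas, omega2_meas):
        omega += a * dt               # predict: integrate acceleration
        p += q
        h = 2.0 * omega               # Jacobian of z = omega**2
        k = p * h / (h * p * h + r)   # Kalman gain
        omega += k * (z - omega * omega)
        p *= 1.0 - k * h
        estimates.append(omega)
    return np.array(estimates)

# synthetic truth: angular velocity ramps at 0.5 rad/s^2 for 2 s
t = np.arange(0.0, 2.0, 0.01)
true_omega = 0.5 * t
rng = np.random.default_rng(0)
alpha_meas = 0.5 + rng.normal(0.0, 0.05, t.size)
omega2_meas = true_omega**2 + rng.normal(0.0, 0.01, t.size)
est = gf_imu_ekf(alpha_meas, omega2_meas)
```

Because the quadratic measurement is sign-blind (its Jacobian vanishes at ω = 0), the integrated acceleration carries the sign information, which is one reason the full method needs a well-posed observability analysis.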
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
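A toy version of the Bayesian construction described above: a prior on an unobservable flow-law parameter combined with a flow-law likelihood yields a posterior over discharge. Here Manning's roughness n stands in for the unobservable parameter, and a single noisy discharge observation stands in for the likelihood; all numbers are illustrative assumptions, not the authors' ADCP-derived priors.

```python
import numpy as np

# Hedged grid-posterior sketch. Assumptions: a wide-channel Manning
# flow law, a lognormal prior on n, and one independent noisy discharge
# estimate providing the likelihood. Values are invented.

def manning_q(n, A, W, S):
    """Manning's equation with a wide-channel approximation (R ~ A/W)."""
    return (1.0 / n) * A**(5.0 / 3.0) * W**(-2.0 / 3.0) * np.sqrt(S)

# SWOT-like observables for one reach (illustrative)
A, W, S = 300.0, 150.0, 1e-4          # area m^2, width m, slope

# prior on n, e.g. as could be derived from a large in-situ archive
n_grid = np.linspace(0.01, 0.08, 500)
log_prior = -0.5 * ((np.log(n_grid) - np.log(0.03)) / 0.4) ** 2

# likelihood: an independent noisy discharge estimate (invented)
q_obs, q_sigma = 450.0, 100.0
q_grid = manning_q(n_grid, A, W, S)
log_like = -0.5 * ((q_grid - q_obs) / q_sigma) ** 2

post = np.exp(log_prior + log_like)
post /= post.sum()
q_posterior_mean = float(np.sum(post * q_grid))
```

The point of the construction is visible even in this sketch: the prior constrains the otherwise unidentifiable parameter, pulling the discharge estimate away from the likelihood-only answer.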
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
Classification with asymmetric label noise: Consistency and maximal denoising
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, Gilles; Flaska, Marek; Handy, Gregory
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
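The "mixture proportion estimation" subproblem mentioned above can be illustrated with a crude histogram estimator of the density-ratio infimum: if H = κF + (1 − κ)G and samples from F are available, the maximal proportion of F in H is the infimum of h/f. This is an assumption-laden sketch, not the paper's rate-optimal estimator.

```python
import numpy as np

# Hedged sketch: histogram-based mixture proportion estimation. The
# low-density mask and bin count are ad hoc choices for illustration.

def mixture_proportion(h_samples, f_samples, bins=20):
    lo = min(h_samples.min(), f_samples.min())
    hi = max(h_samples.max(), f_samples.max())
    edges = np.linspace(lo, hi, bins + 1)
    h_hist, _ = np.histogram(h_samples, edges, density=True)
    f_hist, _ = np.histogram(f_samples, edges, density=True)
    mask = f_hist > 0.05 * f_hist.max()   # skip unstable low-density bins
    return float(np.min(h_hist[mask] / f_hist[mask]))

rng = np.random.default_rng(1)
kappa = 0.3
f = rng.normal(0.0, 1.0, 50_000)                   # component F
pick = rng.random(50_000) < kappa
h = np.where(pick, rng.normal(0.0, 1.0, 50_000),   # contaminated H
                   rng.normal(4.0, 1.0, 50_000))
kappa_hat = mixture_proportion(h, f)
```

In regions where the other component G is negligible, h/f ≈ κ, so the infimum recovers the mixing proportion; the paper's contribution is making this idea work without such separation and with provable rates.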
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, along with information on subjects' psychological states gathered through the electronic administration of questionnaires at times selected from a probability-based design as well as at the event times. A method for fitting a mixed Poisson point process model is proposed for assessing the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
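The kind of event-time data such a model targets can be simulated by Lewis-Shedler thinning of an inhomogeneous Poisson process. The sketch below illustrates the data-generating process, not the estimating-equation fit itself; the daily-cycle intensity is invented.

```python
import numpy as np

# Hedged sketch: simulate behavioral event times on [0, t_max] from a
# bounded intensity via thinning of a dominating homogeneous process.

def thinning(intensity, t_max, lam_max, rng):
    """Lewis-Shedler thinning for an intensity bounded by lam_max."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from rate lam_max
        if t > t_max:
            break
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                   # accept with prob ratio
    return np.array(events)

# invented daily cycle (t in hours over one week): mean rate 1.4 events/h
intensity = lambda t: 1.0 + 0.8 * np.sin(2.0 * np.pi * t / 24.0) ** 2
rng = np.random.default_rng(2)
events = thinning(intensity, t_max=168.0, lam_max=1.8, rng=rng)
rate_hat = len(events) / 168.0
```

A frailty model would multiply this intensity by a subject-specific random factor; the estimating-equation approach above then recovers covariate effects without distributional assumptions on that factor.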
Size and distribution of the 1975 striped bass spawning stock in the Potomac River. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zankel, K.L.; Kobler, B.; Haire, M.S.
1976-12-01
In the spring of 1975, an acoustic survey was made of a 40-mile section of the Potomac River. This survey was part of a program designed to estimate the distribution and abundance of spawning striped bass. The total number of striped bass in the 40-mile sector of the Potomac from Mockley Point to Morgantown was estimated to be between 2 and 4.5 million adult fish during spawning in late April. The highest population density was found between Douglas Point and Possum Point. The surveys were part of the Potomac River Fisheries Program and were conducted for the power plant siting program of the state of Maryland.
Locia-Aguilar, G J; López-Saucedo, B; Deheza-Bautista, S; Salado-Beltrán, O V; Martínez-Sevilla, V M; Rangel-Villalobos, H
2018-03-31
Allele distribution and forensic parameters were estimated for 15 STR loci (AmpFlSTR Identifiler kit) in 251 Mexican-Mestizos from the state of Guerrero (South Mexico). Genotype distribution was in agreement with Hardy-Weinberg expectations for all 15 STRs. Similarly, a linkage disequilibrium test demonstrated no association between pairs of loci. The power of exclusion and power of discrimination values were 99.999634444% and >99.99999999%, respectively. Genetic relationship analysis with respect to Mestizo populations from the main geographic regions of Mexico suggests that the Center and the present South regions form one population cluster, separated from the Southeast and Northwest regions. Copyright © 2018 Elsevier B.V. All rights reserved.
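For a single locus, the power of discrimination quoted above is one minus the random match probability, computed from genotype frequencies under Hardy-Weinberg proportions. A sketch with invented allele frequencies (not the Guerrero data):

```python
from itertools import combinations_with_replacement

# Hedged sketch: single-locus power of discrimination from allele
# frequencies, assuming Hardy-Weinberg genotype proportions.

def power_of_discrimination(allele_freqs):
    """PD = 1 - sum of squared genotype frequencies (match probability)."""
    alleles = list(allele_freqs)
    match_prob = 0.0
    for a, b in combinations_with_replacement(alleles, 2):
        pa, pb = allele_freqs[a], allele_freqs[b]
        g = pa * pa if a == b else 2.0 * pa * pb   # HWE genotype frequency
        match_prob += g * g
    return 1.0 - match_prob

# illustrative allele frequencies for one STR locus (sum to 1)
freqs = {"9": 0.10, "10": 0.25, "11": 0.30, "12": 0.20, "13": 0.15}
pd = power_of_discrimination(freqs)
```

Across independent loci the match probabilities multiply, which is how 15 loci reach a combined PD above 99.99999999%.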
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous wave laser sources is not required, leading to an extraordinarily practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The results shown include transmission experiments as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
Experimental realization of equiangular three-state quantum key distribution
Schiavon, Matteo; Vallone, Giuseppe; Villoresi, Paolo
2016-01-01
Quantum key distribution using three states in equiangular configuration combines a security threshold comparable with the one of the Bennett-Brassard 1984 protocol and a quantum bit error rate (QBER) estimation that does not need to reveal part of the key. We implement an entanglement-based version of the Renes 2004 protocol, using only passive optic elements in a linear scheme for the positive-operator valued measure (POVM), generating an asymptotic secure key rate of more than 10 kbit/s, with a mean QBER of 1.6%. We then demonstrate its security in the case of finite key and evaluate the key rate for both collective and general attacks. PMID:27465643
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
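One of the Archimedean families used above can be sketched directly: sampling a bivariate Clayton copula by conditional inversion, then mapping the uniform marginals to illustrative Weibull marginals for wave height and wind speed. The copula parameter and marginal scales are invented, not values fitted to the Bohai Sea data.

```python
import numpy as np

# Hedged sketch: Clayton copula C(u,v) = (u^-t + v^-t - 1)^(-1/t), t > 0,
# sampled by inverting the conditional distribution of v given u.

def clayton_sample(n, theta, rng):
    u = rng.random(n)
    w = rng.random(n)   # uniform draw for the conditional CDF value
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) \
        ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(42)
u, v = clayton_sample(20_000, theta=2.0, rng=rng)

# map uniforms to Weibull marginals via inverse CDF (scales illustrative)
wave_height = 2.0 * (-np.log(1.0 - u)) ** (1.0 / 1.5)   # shape 1.5
wind_speed = 8.0 * (-np.log(1.0 - v)) ** (1.0 / 2.0)    # shape 2.0
corr = np.corrcoef(wave_height, wind_speed)[0, 1]
```

The copula carries the dependence while each marginal keeps its own fitted form, which is exactly the separation that lets the joint models above "maximize marginal information."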
Effect of Stress State on Fracture Features
NASA Astrophysics Data System (ADS)
Das, Arpan
2018-02-01
The present article comprehensively explores the influence of specimen thickness on quantitative estimates of different ductile fractographic features in two dimensions, correlating them with the tensile properties of a reactor pressure vessel steel tested at ambient temperature, where the initial crystallographic texture, inclusion content, and inclusion distribution are kept unaltered. The changes in tensile fracture morphology of these steels are found to be directly attributable to the resulting stress-state history under tension for the given specimen dimensions.
Delving into α-stable distribution in noise suppression for seizure detection from scalp EEG
NASA Astrophysics Data System (ADS)
Wang, Yueming; Qi, Yu; Wang, Yiwen; Lei, Zhen; Zheng, Xiaoxiang; Pan, Gang
2016-10-01
Objective. There is serious noise in EEG caused by eye blinks and muscle activities. The noise exhibits morphologies similar to epileptic seizure signals, leading to relatively high false alarm rates in most existing seizure detection methods. The objective of this paper is to develop an effective noise suppression method for seizure detection and to explore the reason why it works. Approach. Based on a state-space model containing a non-linear observation function and multiple features as the observations, this paper delves deeply into the effect of the α-stable distribution on noise suppression for seizure detection from scalp EEG. Compared with the Gaussian distribution, the α-stable distribution is asymmetric and has relatively heavy tails. These properties make it more powerful in modeling impulsive noise in EEG, which usually cannot be handled by the Gaussian distribution. Specifically, we give a detailed analysis of the state estimation process to show why the α-stable distribution can suppress impulsive noise. Main results. To justify each component of our model, we compare our method with 4 different models with different settings on 331 hours of collected epileptic EEG data. To show the superiority of our method, we compare it with existing approaches on both our 331-hour data and 892 hours of public data. The results demonstrate that our method is the most effective in both detection rate and false alarm rate. Significance. This is the first attempt to incorporate the α-stable distribution into a state-space model for noise suppression in seizure detection, and it achieves state-of-the-art performance.
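The tail-heaviness property the method exploits can be seen directly by comparing exceedance fractions under a Gaussian and an α-stable law. The value α = 1.5 below is chosen only for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import levy_stable, norm

# Hedged sketch: with alpha < 2, an alpha-stable distribution puts far
# more mass in the tails than a Gaussian, so occasional large artifacts
# (blinks, muscle bursts) are plausible under it rather than anomalous.

rng = np.random.default_rng(0)
n = 20_000
gauss = norm.rvs(size=n, random_state=rng)
stable = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=rng)

# fraction of samples more than 5 scale units from zero
gauss_tail = float(np.mean(np.abs(gauss) > 5.0))
stable_tail = float(np.mean(np.abs(stable) > 5.0))
```

In a state-space filter, heavy observation tails translate into small gains on impulsive innovations, which is the mechanism by which the α-stable model suppresses artifact-driven false alarms.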
A Conceptual Framework for Measuring R&D Product Impact.
ERIC Educational Resources Information Center
Hull, William L.; And Others
A framework to aid in estimating the impact from educational research and development (R&D) products was developed at the National Center for Research in Vocational Education at the Ohio State University. The dimensions of the framework (product development, distribution, implementation, utilization and effects) are explained in detail. The…
High-Risk Drinking among College Fraternity Members: A National Perspective
ERIC Educational Resources Information Center
Caudill, Barry D.; Crosse, Scott B.; Campbell, Bernadette; Howard, Jan; Luckey, Bill; Blane, Howard T.
2006-01-01
This survey, with its 85% response rate, provides an extensive profile of drinking behaviors and predictors of drinking among 3406 members of one national college fraternity, distributed across 98 chapters in 32 states. Multiple indexes of alcohol consumption measured frequency, quantity, estimated blood alcohol concentration levels (BACs), and…
Improving the Vertical Distribution of Fire Emissions in CMAQ
The area burned by wildland fires (prescribed and wild) across the contiguous United States (U.S.) has expanded by nearly 50% and now averages 2 million hectares per year. Such fires are estimated to cause 8000 deaths per year and are monetized as having a ~$450 billion impact t...
The Assessment of Climatological Impacts on Agricultural Production and Residential Energy Demand
NASA Astrophysics Data System (ADS)
Cooter, Ellen Jean
The assessment of climatological impacts on selected economic activities is presented as a multi-step, interdisciplinary problem. The assessment process which is addressed explicitly in this report focuses on (1) user identification, (2) direct impact model selection, (3) methodological development, (4) product development and (5) product communication. Two user groups of major economic importance were selected for study: agriculture and gas utilities. The broad agricultural sector is further defined as U.S.A. corn production. The general category of utilities is narrowed to Oklahoma residential gas heating demand. The CERES physiological growth model was selected as the process model for corn production. The statistical analysis for corn production suggests that (1) although this is a statistically complex model, it can yield useful impact information, (2) as a result of output distributional biases, traditional statistical techniques are not adequate analytical tools, (3) the model yield distribution as a whole is probably non-Gaussian, particularly in the tails, and (4) there appear to be identifiable weekly patterns of forecasted yields throughout the growing season. Agricultural quantities developed include point yield impact estimates and distributional characteristics, geographic corn weather distributions, return period estimates, decision making criteria (confidence limits) and time series of indices. These products were communicated in economic terms through the use of a Bayesian decision example and an econometric model. The NBSLD energy load model was selected to represent residential gas heating consumption. A cursory statistical analysis suggests relationships among weather variables across the Oklahoma study sites. No linear trend in "technology-free" modeled energy demand or input weather variables which would correspond to that contained in observed state-level residential energy use was detected.
It is suggested that this trend is largely the result of non-weather factors such as population and home usage patterns rather than regional climate change. Year-to-year changes in modeled residential heating demand on the order of 10^6 Btu per household were determined and later related to state-level components of the Oklahoma economy. Products developed include the definition of regional forecast areas, likelihood estimates of extreme seasonal conditions and an energy/climate index. This information is communicated in economic terms through an input/output model which is used to estimate changes in Gross State Product and household income attributable to weather variability.
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
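The convergence result has a familiar finite-state analogue: an irreducible, aperiodic (and generally non-reversible) Markov chain contracts to its unique stationary distribution exponentially fast from any initial state. A toy numerical check, unrelated to the paper's detailed microcircuit models:

```python
import numpy as np

# Hedged sketch: a random strictly positive transition matrix over a few
# "network states" is ergodic and typically non-reversible; repeated
# application drives any initial distribution to the stationary one.

rng = np.random.default_rng(3)
n_states = 6
P = rng.random((n_states, n_states)) + 0.05   # strictly positive entries
P /= P.sum(axis=1, keepdims=True)             # row-stochastic

# stationary distribution: left eigenvector of P for eigenvalue 1
w, vl = np.linalg.eig(P.T)
pi = np.real(vl[:, np.argmax(np.real(w))])
pi /= pi.sum()

# total variation distance to stationarity from a point-mass start
mu = np.zeros(n_states)
mu[0] = 1.0
dists = []
for _ in range(30):
    mu = mu @ P
    dists.append(0.5 * float(np.sum(np.abs(mu - pi))))
```

The geometric decay of the total variation distance is the finite-state shadow of the exponential convergence the paper proves for detailed stochastic microcircuit dynamics.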
Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.
2013-01-01
Background After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose-estimates were supported by the patient data. 
Conclusions/Significance In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior over Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207
The Effect of the Underlying Distribution in Hurst Exponent Estimation
Sánchez, Miguel Ángel; Trinidad, Juan E.; García, José; Fernández, Manuel
2015-01-01
In this paper, a heavy-tailed distribution approach is considered in order to explore the behavior of actual financial time series. We show that this kind of distribution allows us to properly fit the empirical distribution of the stocks from the S&P500 index. In addition, we explain in detail why the underlying distribution of the random process under study should be taken into account before using its self-similarity exponent as a reliable tool to state whether that financial series displays long-range dependence or not. Finally, we show that, under this model, no stocks from the S&P500 index show persistent memory, whereas some of them do present anti-persistent memory and most of them present no memory at all. PMID:26020942
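One common self-similarity estimator whose reliability the paper questions is rescaled-range (R/S) analysis; a minimal version follows. For i.i.d. Gaussian increments it should come out near H ≈ 0.5, modulo the estimator's known small-sample upward bias; the paper's point is that heavy-tailed marginals can further distort such readings.

```python
import numpy as np

# Hedged sketch: Hurst exponent as the slope of log(R/S) against
# log(window size) over dyadic windows. Window choices are ad hoc.

def hurst_rs(x, min_window=8):
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    w = min_window
    while w <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # scale
            if s > 0:
                rs.append(r / s)
        sizes.append(w)
        rs_vals.append(np.mean(rs))
        w *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return float(slope)

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 1.0, 8192)   # memoryless synthetic "returns"
h = hurst_rs(returns)
```

An estimate drifting away from 0.5 on data like this reflects estimator bias, not memory, which is why the underlying distribution matters before declaring long-range dependence.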
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed as initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
Cost of Crashes Related to Road Conditions, United States, 2006
Zaloshnja, Eduard; Miller, Ted R.
2009-01-01
This is the first study to estimate the cost of crashes related to road conditions in the U.S. To model the probability that road conditions contributed to the involvement of a vehicle in a crash, we used 2000–03 Large Truck Crash Causation Study (LTCCS) data, the only dataset that provides detailed information on whether road conditions contributed to crash occurrence. We applied the logistic regression results to a costed national crash dataset in order to calculate the probability that road conditions contributed to the involvement of a vehicle in each crash. In crashes where someone was moderately to seriously injured (AIS 2-6) in a vehicle that harmfully impacted a large tree or a medium or large non-breakaway pole, or where the first harmful event was a collision with a bridge, we set the calculated probability of being road-related to 1. We used the state distribution of costs of fatal crashes where road conditions contributed to crash occurrence or severity to estimate the respective state distribution of non-fatal crash costs. The estimated comprehensive cost of traffic crashes where road conditions contributed to crash occurrence or severity was $217.5 billion in 2006. This represented 43.6% of the total comprehensive crash cost. The large share of crash costs related to road design and conditions underlines the importance of these factors in highway safety. Road conditions are largely controllable. Road maintenance and upgrading can prevent crashes and reduce injury severity. PMID:20184840
A practical method to detect the freezing/thawing onsets of seasonal frozen ground in Alaska
NASA Astrophysics Data System (ADS)
Chen, Xiyu; Liu, Lin
2017-04-01
Microwave remote sensing can provide useful information about the freeze/thaw state of soil at the Earth's surface. An edge detection method is applied in this study to estimate the onsets of soil freeze/thaw state transitions using L-band space-borne radiometer data. The Soil Moisture Active Passive (SMAP) mission carries an L-band radiometer and provides daily brightness temperature (TB) at horizontal and vertical polarizations. We use the normalized polarization ratio (NPR), calculated from the Level-1C TB product of SMAP (spatial resolution: 36 km), as the indicator of soil freeze/thaw state, to estimate the freezing and thawing onsets in Alaska in 2015 and 2016. NPR is calculated from the difference between TB at vertical and horizontal polarizations; it is therefore strongly sensitive to changes in the liquid water content of the soil and largely independent of soil temperature. Onset estimation is based on detecting abrupt changes of NPR in transition seasons with the edge detection method, and validation compares the estimated onsets with onsets derived from in situ measurements. According to this comparison, the estimated onsets were generally 15 days earlier than the measured onsets in 2015, whereas in 2016 they were on average only 4 days earlier, possibly because of reduced snow cover. Moreover, we extended our estimation to the entire state of Alaska. The estimated freeze/thaw onsets showed a reasonable latitude-dependent distribution, although some outliers remain, caused by noisy variation of NPR. Finally, we attempt to remove these outliers and improve the performance of the method by smoothing the NPR time series.
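The NPR indicator and the edge detection step can be sketched roughly as follows. The data are synthetic, and the window size and the simple difference-of-means detector are illustrative assumptions, not the study's exact algorithm.

```python
import numpy as np

def npr(tb_v, tb_h):
    """Normalized polarization ratio from V/H brightness temperatures (K)."""
    return (tb_v - tb_h) / (tb_v + tb_h)

def detect_transition(series, window=5):
    """Return the index of the largest step change in a daily NPR series,
    using a simple moving-average edge (difference-of-means) detector."""
    best_i, best_score = None, 0.0
    for i in range(window, len(series) - window):
        before = np.mean(series[i - window:i])
        after = np.mean(series[i:i + window])
        if abs(after - before) > best_score:
            best_i, best_score = i, abs(after - before)
    return best_i

# Synthetic example: NPR drops abruptly at day 50 (soil freezing lowers
# liquid water content, hence the V-H polarization difference).
days = np.arange(100)
series = np.where(days < 50, 0.08, 0.02) \
         + 0.005 * np.random.default_rng(0).standard_normal(100)
onset = detect_transition(series)
```

In a real workflow the series would come from the SMAP Level-1C TB product, and smoothing before detection (as the abstract suggests) would suppress spurious edges from noisy NPR variation.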
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
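A minimal sketch of the combined jackknife/bootstrap idea, using made-up per-site volume estimates. The lognormal data, sample size, and confidence level are illustrative assumptions, not the paper's values or its exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-site recoverable-volume predictions (arbitrary units).
site_estimates = rng.lognormal(mean=2.0, sigma=0.6, size=120)

def jackknife_replicates(values):
    """Leave-one-out (jackknife) estimates of the regional total."""
    total = values.sum()
    n = len(values)
    # Each replicate rescales the remaining n-1 sites to the full region.
    return np.array([(total - v) * n / (n - 1) for v in values])

def bootstrap_bounds(replicates, n_boot=2000, alpha=0.10):
    """Percentile-bootstrap confidence bounds on the regional total,
    resampling the jackknife replicates."""
    totals = np.array([
        rng.choice(replicates, size=len(replicates), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(totals, [alpha / 2, 1 - alpha / 2])
    return lo, hi

reps = jackknife_replicates(site_estimates)
lo, hi = bootstrap_bounds(reps)
```

By construction the jackknife replicates average exactly to the observed total, so the bootstrap interval brackets it.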
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087
Lamb, Brian K; Edburg, Steven L; Ferrara, Thomas W; Howard, Touché; Harrison, Matthew R; Kolb, Charles E; Townsend-Small, Amy; Dyck, Wesley; Possolo, Antonio; Whetstone, James R
2015-04-21
Fugitive losses from natural gas distribution systems are a significant source of anthropogenic methane. Here, we report on a national sampling program to measure methane emissions from 13 urban distribution systems across the U.S. Emission factors were derived from direct measurements at 230 underground pipeline leaks and 229 metering and regulating facilities using stratified random sampling. When these new emission factors are combined with estimates for customer meters, maintenance, and upsets, and current pipeline miles and numbers of facilities, the total estimate is 393 Gg/yr with a 95% upper confidence limit of 854 Gg/yr (0.10% to 0.22% of the methane delivered nationwide). This fraction includes emissions from city gates to the customer meter, but does not include other urban sources or those downstream of customer meters. The upper confidence limit accounts for the skewed distribution of measurements, where a few large emitters accounted for most of the emissions. This emission estimate is 36% to 70% less than the 2011 EPA inventory (based largely on 1990s emission data), and reflects significant upgrades at metering and regulating stations, improvements in leak detection and maintenance activities, as well as potential effects from differences in methodologies between the two studies.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
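To illustrate the basic posterior-sampling machinery, the sketch below runs a plain random-walk Metropolis chain on a single hypothetical damage parameter with an assumed linear strain model. This is deliberately simpler than the paper's approach: it uses neither DRAM, nor a sparse-grid surrogate, nor the weighted likelihood, and the forward model and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: strain reading as a function of crack size a.
def forward(a):
    return 2.0 * a + 0.5

true_a = 1.5
sigma = 0.1
data = forward(true_a) + sigma * rng.standard_normal(50)  # noisy strain data

def log_posterior(a):
    if a < 0:                        # flat prior on a >= 0
        return -np.inf
    resid = data - forward(a)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: propose a small step, accept with the
# Metropolis probability min(1, posterior ratio).
samples = []
a, lp = 1.0, log_posterior(1.0)
for _ in range(5000):
    prop = a + 0.05 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a)

posterior = np.array(samples[1000:])   # discard burn-in
```

In the paper's setting, `forward` would be the (surrogate of the) finite element model, `a` would be the vector of damage location, size, and orientation, and DRAM would replace the plain Metropolis update.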
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
Conformational Transitions and Convergence of Absolute Binding Free Energy Calculations
Lapelosa, Mauro; Gallicchio, Emilio; Levy, Ronald M.
2011-01-01
The Binding Energy Distribution Analysis Method (BEDAM) is employed to compute the standard binding free energies of a series of ligands to a FK506 binding protein (FKBP12) with implicit solvation. Binding free energy estimates are in reasonably good agreement with experimental affinities. The conformations of the complexes identified by the simulations are in good agreement with crystallographic data, which was not used to restrain ligand orientations. The BEDAM method is based on λ -hopping Hamiltonian parallel Replica Exchange (HREM) molecular dynamics conformational sampling, the OPLS-AA/AGBNP2 effective potential, and multi-state free energy estimators (MBAR). Achieving converged and accurate results depends on all of these elements of the calculation. Convergence of the binding free energy is tied to the level of convergence of binding energy distributions at critical intermediate states where bound and unbound states are at equilibrium, and where the rate of binding/unbinding conformational transitions is maximal. This finding mirrors similar observations in the context of order/disorder transitions as for example in protein folding. Insights concerning the physical mechanism of ligand binding and unbinding are obtained. Convergence for the largest FK506 ligand is achieved only after imposing strict conformational restraints, which however require accurate prior structural knowledge of the structure of the complex. The analytical AGBNP2 model is found to underestimate the magnitude of the hydrophobic driving force towards binding in these systems characterized by loosely packed protein-ligand binding interfaces. Rescoring of the binding energies using a numerical surface area model corrects this deficiency. This study illustrates the complex interplay between energy models, exploration of conformational space, and free energy estimators needed to obtain robust estimates from binding free energy calculations. PMID:22368530
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Quantum key distribution with finite resources: Secret key rates via Renyi entropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abruzzo, Silvestre; Kampermann, Hermann; Mertz, Markus
A realistic quantum key distribution (QKD) protocol necessarily deals with finite resources, such as the number of signals exchanged by the two parties. We derive a bound on the secret key rate which is expressed as an optimization problem over Renyi entropies. Under the assumption of collective attacks by an eavesdropper, a computable estimate of our bound for the six-state protocol is provided. This bound leads to improved key rates in comparison to previous results.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Long-term statistics of extreme tsunami height at Crescent City
NASA Astrophysics Data System (ADS)
Dong, Sheng; Zhai, Jinjin; Tao, Shanshan
2017-06-01
Historically, Crescent City is one of the communities along the west coast of the United States most vulnerable to tsunamis, largely because of its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor, resulting in catastrophic damage, property loss, and loss of life. How to determine the return values of tsunami height using relatively short-term observation data is of great significance for assessing tsunami hazards and improving engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution, and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all the above distributions. Both the Kolmogorov-Smirnov test and the root-mean-square-error method are used for goodness-of-fit testing, and the better-fitting distribution is selected. Assuming that the number of tsunami occurrences in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and then point and interval estimates of return tsunami heights are calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable for determining return tsunami heights for coastal disaster prevention.
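A rough sketch of the fit-and-select workflow using SciPy. Synthetic Gumbel data stand in for the 1938-2015 observations, only two of the six candidate distributions are shown, and the Poisson compounding step is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic annual-maximum tsunami heights (metres); a real analysis
# would use the Crescent City observations from 1938 to 2015.
heights = stats.gumbel_r.rvs(loc=1.0, scale=0.4, size=78, random_state=rng)

# Maximum likelihood fits for two of the candidate distributions.
gum_params = stats.gumbel_r.fit(heights)      # (loc, scale)
gev_params = stats.genextreme.fit(heights)    # (shape, loc, scale)

# Goodness of fit via the Kolmogorov-Smirnov statistic (smaller is better).
ks_gum = stats.kstest(heights, 'gumbel_r', args=gum_params).statistic
ks_gev = stats.kstest(heights, 'genextreme', args=gev_params).statistic

# 100-year return height from the better-fitting distribution: the level
# exceeded with annual probability 1/100.
best = stats.gumbel_r(*gum_params) if ks_gum <= ks_gev else stats.genextreme(*gev_params)
h_100 = best.isf(1.0 / 100.0)
```

The same pattern extends to the remaining candidates (Weibull, lognormal, generalized Pareto) by swapping in the corresponding `scipy.stats` distributions.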
Statewide Groundwater Recharge Modeling in New Mexico
NASA Astrophysics Data System (ADS)
Xu, F.; Cadol, D.; Newton, B. T.; Phillips, F. M.
2017-12-01
It is crucial to understand the rate and distribution of groundwater recharge in New Mexico because it not only largely defines a limit for water availability in this semi-arid state, but is also the least understood component of the state's water budget. With the goal of estimating groundwater recharge statewide, we are developing the Evapotranspiration and Recharge Model (ETRM), which uses existing spatial datasets to model the daily soil water balance over the state at a 250 m cell resolution. The input datasets include PRISM precipitation data, the MODIS Normalized Difference Vegetation Index (NDVI), NRCS soils data, state geology data, and reference ET estimates produced by the Gridded Atmospheric Data downscalinG and Evapotranspiration Tools (GADGET). The current estimates represent diffuse recharge only, not focused recharge as in channels or playas. Direct recharge measurements are challenging and rare; we therefore estimate diffuse recharge using a water balance approach. The ETRM-simulated runoff amount was compared with USGS-gauged discharge in four selected ephemeral channels: Mogollon Creek, Zuni River, the Rio Puerco above Bernardo, and the Rio Puerco above Arroyo Chico. Results showed that focused recharge is important and that basin characteristics can be linked with watershed hydrological response. Because the sparse instrumentation in New Mexico provides limited help in improving estimates of focused recharge by linking basin characteristics, the Walnut Gulch Experimental Watershed, one of the most densely gauged and monitored semiarid rangeland watersheds for hydrology research purposes, is now being modeled with ETRM. The higher spatial resolution of its field data is expected to enable detailed comparison of modeled recharge with measured transmission losses in ephemeral channels. The final ETRM product will establish an algorithm to estimate groundwater recharge as a water budget component for the entire state of New Mexico.
Reference ET estimated by GADGET suggests a 10%-22% increase by the end of this century under the IPCC AR4 A2 emission scenario. ETRM will help the state plan its water resources in the face of drought brought on by climate change.
Blom, Philip Stephen; Marcillo, Omar Eduardo
2016-12-05
A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmosphere state in the region around the network using non-local sources, without requiring knowledge of the precise source location or non-local atmosphere state. Closely spaced arrays provide a means to estimate phase velocities of signals that can provide limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and compute more precise estimates of the atmosphere state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parametrization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic traveltime observations.
Fan-out Estimation in Spin-based Quantum Computer Scale-up.
Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R
2017-10-17
Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned-out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.
NASA Astrophysics Data System (ADS)
Richardson, Robert R.; Zhao, Shi; Howey, David A.
2016-09-01
Estimating the temperature distribution within Li-ion batteries during operation is critical for safety and control purposes. Although existing control-oriented thermal models - such as thermal equivalent circuits (TEC) - are computationally efficient, they only predict average temperatures, and are unable to predict the spatially resolved temperature distribution throughout the cell. We present a low-order 2D thermal model of a cylindrical battery based on a Chebyshev spectral-Galerkin (SG) method, capable of predicting the full temperature distribution with a similar efficiency to a TEC. The model accounts for transient heat generation, anisotropic heat conduction, and non-homogeneous convection boundary conditions. The accuracy of the model is validated through comparison with finite element simulations, which show that the 2-D temperature field (r, z) of a large format (64 mm diameter) cell can be accurately modelled with as few as 4 states. Furthermore, the performance of the model for a range of Biot numbers is investigated via frequency analysis. For larger cells or highly transient thermal dynamics, the model order can be increased for improved accuracy. The incorporation of this model in a state estimation scheme with experimental validation against thermocouple measurements is presented in the companion contribution (http://www.sciencedirect.com/science/article/pii/S0378775316308163).
Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello
2018-04-22
A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which is the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to a fixed covariates paradigm. The class of hidden Markov regression models with random covariates is defined focusing on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.
A comparison of recharge rates in aquifers of the United States based on groundwater-age data
McMahon, P.B.; Plummer, Niel; Böhlke, J.K.; Shapiro, S.D.; Hinkle, S.R.
2011-01-01
An overview is presented of existing groundwater-age data and their implications for assessing rates and timescales of recharge in selected unconfined aquifer systems of the United States. Apparent age distributions in aquifers determined from chlorofluorocarbon, sulfur hexafluoride, tritium/helium-3, and radiocarbon measurements from 565 wells in 45 networks were used to calculate groundwater recharge rates. Timescales of recharge were defined by 1,873 distributed tritium measurements and 102 radiocarbon measurements from 27 well networks. Recharge rates ranged from < 10 to 1,200 mm/yr in selected aquifers on the basis of measured vertical age distributions and assuming exponential age gradients. On a regional basis, recharge rates based on tracers of young groundwater exhibited a significant inverse correlation with mean annual air temperature and a significant positive correlation with mean annual precipitation. Comparison of recharge derived from groundwater ages with recharge derived from stream base-flow evaluation showed similar overall patterns but substantial local differences. Results from this compilation demonstrate that age-based recharge estimates can provide useful insights into spatial and temporal variability in recharge at a national scale and factors controlling that variability. Local age-based recharge estimates provide empirical data and process information that are needed for testing and improving more spatially complete model-based methods.
Real-time sensor validation and fusion for distributed autonomous sensors
NASA Astrophysics Data System (ADS)
Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.
2004-04-01
Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers - the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for specific sensor fusion applications. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state issue noted above. The state is computed using an adaptive estimator and dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
The estimated lifetime probability of acquiring human papillomavirus in the United States.
Chesson, Harrell W; Dunne, Eileen F; Hariri, Susan; Markowitz, Lauri E
2014-11-01
Estimates of the lifetime probability of acquiring human papillomavirus (HPV) can help to quantify HPV incidence, illustrate how common HPV infection is, and highlight the importance of HPV vaccination. We developed a simple model, based primarily on the distribution of lifetime numbers of sex partners across the population and the per-partnership probability of acquiring HPV, to estimate the lifetime probability of acquiring HPV in the United States in the time frame before HPV vaccine availability. We estimated the average lifetime probability of acquiring HPV among those with at least 1 opposite sex partner to be 84.6% (range, 53.6%-95.0%) for women and 91.3% (range, 69.5%-97.7%) for men. Under base case assumptions, more than 80% of women and men acquire HPV by age 45 years. Our results are consistent with estimates in the existing literature suggesting a high lifetime probability of HPV acquisition and are supported by cohort studies showing high cumulative HPV incidence over a relatively short period, such as 3 to 5 years.
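The partner-distribution model can be sketched in a few lines. The partner-count weights and the per-partnership transmission probability below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Hypothetical distribution of lifetime numbers of opposite-sex partners
# (the paper uses survey-based distributions; these weights are invented).
partners = np.array([0, 1, 2, 4, 8, 15])
weights  = np.array([0.05, 0.20, 0.25, 0.25, 0.15, 0.10])

p_per_partner = 0.40   # assumed per-partnership probability of acquiring HPV

# P(ever acquire) = sum over partner counts n of w_n * (1 - (1 - p)^n):
# with n independent partnerships, infection is avoided only if all n
# partnerships fail to transmit.
p_acquire = np.sum(weights * (1.0 - (1.0 - p_per_partner) ** partners))

# Conditional on at least one partner, as reported in the abstract:
p_given_partner = np.sum(
    weights[1:] * (1.0 - (1.0 - p_per_partner) ** partners[1:])
) / weights[1:].sum()
```

Even modest per-partnership probabilities drive the lifetime probability high, because avoiding infection requires escaping transmission in every partnership.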
NASA Astrophysics Data System (ADS)
Terada, T.; Sato, M.; Mochizuki, N.; Yamamoto, Y.; Tsunakawa, H.
2013-12-01
Magnetic properties of ferromagnetic minerals generally depend on their chemical composition, crystal structure, size, and shape. In a typical paleomagnetic study, we use a bulk sample which is an assemblage of magnetic minerals showing broad distributions of various magnetic properties. Microscopic and Curie-point observations of the bulk sample enable us to identify the constituent magnetic minerals, while other measurements, for example, stepwise thermal and/or alternating field demagnetizations (ThD, AFD), make it possible to estimate the size, shape, and domain state of the constituent magnetic grains. However, estimation based on stepwise demagnetizations has a limitation: magnetic grains with the same coercivity Hc (or blocking temperature Tb) are identified as a single population even though they could have different size and shape. Dunlop and West (1969) carried out mapping of grain size and coercivity (Hc) using pTRM. However, their mapping method is basically applicable only to natural rocks containing SD grains, since the grain sizes are estimated on the basis of single domain theory (Neel, 1949). In addition, it is impossible to check for thermal alteration due to laboratory heating in their experiment. In the present study we propose a new experimental method which makes it possible to estimate the distribution of size and shape of magnetic minerals in a bulk sample. The present method is composed of simple procedures: (1) imparting ARM to a bulk sample, (2) ThD at a certain temperature, (3) stepwise AFD on the remaining ARM, (4) repeating steps (1)-(3) with ThD at temperatures elevated up to the Curie temperature of the sample. After completion of the whole procedure, ARM spectra are calculated and mapped on the Hc-Tb plane (hereafter called the Hc-Tb diagram).
We analyze the Hc-Tb diagrams as follows: (1) For uniaxial SD populations, a theoretical curve for a given grain size (or shape anisotropy) is drawn on the Hc-Tb diagram. The curves are calculated using single domain theory, since the coercivity and blocking temperature of uniaxial SD grains can be expressed as functions of size and shape. (2) The boundary between SD and MD grains is calculated and drawn on the Hc-Tb diagram according to the theory of Butler and Banerjee (1975). (3) The theoretical predictions from (1) and (2) are compared with the obtained ARM spectra to estimate the quantitative distribution of size, shape and domain state of magnetic grains in the sample. This mapping method has been applied to three samples: Hawaiian basaltic lava extruded in 1995, Ueno basaltic lava formed during the Matuyama chron, and Oshima basaltic lava extruded in 1986. We will discuss the physical states of magnetic grains (size, shape, domain state, etc.) and their possible origins.
A finite state projection algorithm for the stationary solution of the chemical master equation.
Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa
2017-10-21
The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.
NASA Astrophysics Data System (ADS)
Yang, Shaw-Yang; Yeh, Hund-Der; Li, Kuang-Yi
2010-10-01
Heat storage systems are usually used to store waste heat and solar energy. In this study, a mathematical model is developed to predict both the steady-state and transient temperature distributions of an aquifer thermal energy storage (ATES) system after hot water is injected through a well into a confined aquifer. The ATES system has a confined aquifer bounded by aquicludes with different thermomechanical properties and geothermal gradients along the depth. Heat is transferred by conduction and forced convection within the aquifer and by conduction within the aquicludes. The dimensionless semi-analytical solutions for the temperature distributions of the ATES system are developed using Laplace and Fourier transforms, and their corresponding time-domain results are evaluated numerically by the modified Crump method. The steady-state solution is obtained from the transient solution through the final-value theorem. The effect of the heat transfer coefficient on the aquiclude temperature distribution is appreciable only near the outer boundaries of the aquicludes. The present solutions are useful for estimating the temperature distribution of heat injection and the aquifer thermal capacity of ATES systems.
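The final-value-theorem step mentioned above can be written compactly. Writing the Laplace-domain temperature as a function of (hypothetical) radial and vertical coordinates r and z, which the abstract does not specify:

```latex
T_{ss}(r,z) \;=\; \lim_{t\to\infty} T(r,z,t) \;=\; \lim_{s\to 0} \, s\,\bar{T}(r,z,s)
```

That is, the steady-state temperature field follows directly from the transient Laplace-domain solution without a separate steady-state derivation, provided the limit exists (i.e., the transient solution settles to a steady state).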
An introduction to the component fusion extended Kalman filtering method
NASA Astrophysics Data System (ADS)
Geng, Yue; Lei, Xusheng
2018-05-01
In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed. Each component of the error propagation is assumed to be independent and Gaussian distributed. The CFEKF is obtained through maximum likelihood estimation of the propagation error, which adjusts the state transition matrix and the measurement matrix adaptively. By minimizing the linearization error, CFEKF can effectively improve the estimation accuracy of nonlinear system states. The computational cost of CFEKF is similar to that of the EKF, which makes it easy to apply.
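The abstract does not give the CFEKF equations themselves; as background, here is a minimal sketch of the standard EKF recursion that any such variant builds on. The function names and arguments are illustrative, not the paper's notation.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One predict/update cycle of the standard extended Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    f, h : nonlinear process and measurement functions
    F, H : their Jacobians, evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the nonlinear model and the
    # covariance through its linearization.
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q

    # Update: correct the prediction with the measurement via the gain.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```

The CFEKF's stated contribution is to adapt the linearization (the F and H matrices above) per error component; the predict/update skeleton is unchanged.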
Stochastic modeling of central apnea events in preterm infants.
Clark, Matthew T; Delos, John B; Lake, Douglas E; Lee, Hoshik; Fairchild, Karen D; Kattwinkel, John; Moorman, J Randall
2016-04-01
A near-ubiquitous pathology in very low birth weight infants is neonatal apnea, breathing pauses with slowing of the heart and falling blood oxygen. Events of substantial duration occasionally occur after an infant is discharged from the neonatal intensive care unit (NICU). It is not known whether apneas result from a predictable process or from a stochastic process, but the observation that they occur in seemingly random clusters justifies the use of stochastic models. We use a hidden-Markov model to analyze the distribution of durations of apneas and the distribution of times between apneas. The model suggests the presence of four breathing states, ranging from very stable (with an average lifetime of 12 h) to very unstable (with an average lifetime of 10 s). Although the states themselves are not visible, the mathematical analysis gives estimates of the transition rates among these states. We have obtained these transition rates, and shown how they change with post-menstrual age; as expected, the residence time in the more stable breathing states increases with age. We also extrapolated the model to predict the frequency of very prolonged apnea during the first year of life. This paradigm-stochastic modeling of cardiorespiratory control in neonatal infants to estimate risk for severe clinical events-may be a first step toward personalized risk assessment for life threatening apnea events after NICU discharge.
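The relation between a continuous-time Markov model's transition rates and average state lifetimes is simple: the mean residence time in a state is the inverse of its total exit rate. The four-state generator below is a toy illustration whose rates merely echo the reported 12-hour and 10-second lifetimes; it is not the fitted model from the study.

```python
import numpy as np

# Hypothetical 4-state generator (rates in 1/s). Off-diagonal entries are
# transition rates; each diagonal makes its row sum to zero.
Q = np.array([
    [-1/43200,  1/43200,  0,       0     ],  # very stable breathing (~12 h)
    [ 1/3600,  -2/3600,   1/3600,  0     ],
    [ 0,        1/60,    -2/60,    1/60  ],
    [ 0,        0,        1/10,   -1/10  ],  # very unstable breathing (~10 s)
])

# Mean residence time in state i is -1 / Q[i, i].
lifetimes = -1.0 / np.diag(Q)   # 12 h, 30 min, 30 s, 10 s
```

Fitting such rates to observed apnea durations and inter-apnea intervals (via a hidden-Markov likelihood) is what allows the residence times, and their change with post-menstrual age, to be estimated even though the states are not directly observed.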
Improving Hospital Reporting of Patient Race and Ethnicity--Approaches to Data Auditing.
Zingmond, David S; Parikh, Punam; Louie, Rachel; Lichtensztajn, Daphne Y; Ponce, Ninez; Hasnain-Wynia, Romana; Gomez, Scarlett Lin
2015-08-01
To investigate new metrics to improve the reporting of patient race and ethnicity (R/E) by hospitals. California Patient Discharge Database (PDD) and birth registry, 2008-2009, Healthcare Cost and Utilization Project's State Inpatient Database, 2008-2011, cancer registry 2000-2008, and 2010 US Census Summary File 2. We examined agreement between hospital reported R/E versus self-report among mothers delivering babies and a cancer cohort in California. Metrics were created to measure root mean squared differences (RMSD) by hospital between reported R/E distribution and R/E estimates using R/E distribution within each patient's zip code of residence. RMSD comparisons were made to corresponding "gold standard" facility-level measures within the maternal cohort for California and six comparison states. Maternal birth hospitalization (linked to the state birth registry) and cancer cohort records linked to preceding and subsequent hospitalizations. Hospital discharges were linked to the corresponding Census zip code tabulation area using patient zip code. Overall agreement between the PDD and the gold standard for the maternal cohort was 86 percent for the combined R/E measure and 71 percent for race alone. The RMSD measure is modestly correlated with the summary level gold standard measure for R/E (r = 0.44). The RMSD metric revealed general improvement in data agreement and completeness across states. "Other" and "unknown" categories were inconsistently applied within inpatient databases. Comparison between reported R/E and R/E estimates using zip code level data may be a reasonable first approach to evaluate and track hospital R/E reporting. Further work should focus on using more granular geocoded data for estimates and tracking data to improve hospital collection of R/E data. © Health Research and Educational Trust.
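The per-hospital RMSD metric can be sketched as follows. The category shares below are hypothetical placeholders; the study's actual R/E categories and values differ.

```python
import numpy as np

# Hypothetical race/ethnicity category shares for one hospital.
reported = np.array([0.55, 0.20, 0.15, 0.08, 0.02])  # hospital-reported
expected = np.array([0.50, 0.22, 0.18, 0.07, 0.03])  # zip-code-based estimate

# Root mean squared difference across categories: a single per-hospital
# score comparing the reported distribution to the geographic expectation.
rmsd = np.sqrt(np.mean((reported - expected) ** 2))
```

A hospital whose reported distribution diverges sharply from the demographics of its patients' zip codes gets a large RMSD, flagging it for data-quality review without requiring patient-level self-report.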
NASA Astrophysics Data System (ADS)
Nguyen, A. T.; Heimbach, P.; Garg, V.; Ocana, V.
2016-12-01
Over the last few decades, various agencies have invested heavily in the development and deployment of Arctic ocean and sea ice observing platforms, especially moorings, profilers, gliders, and satellite-based instruments. These observational assets are heterogeneous in terms of variables sampled and spatio-temporal coverage, which calls for a dynamical synthesis framework of the diverse data streams. Here we introduce an adjoint-based Arctic Subpolar gyre sTate estimate (ASTE), a medium resolution model-data synthesis that leverages all the possible observational assets. Through an established formal state and parameter estimation framework, the ASTE framework produces a 2002-present ocean-sea ice state that can be used to address Arctic System science questions. It is dynamically and kinematically consistent with known equations of motion and consistent with observations. Four key aspects of ASTE will be discussed: (1) How well is ASTE constrained by the existing observations; (2) which data most effectively constrain the system, and what impact on the solution does spatial and temporal coverage have; (3) how much information does one set of observations (e.g. Fram Strait heat transport) carry about a remote, but dynamically linked component (e.g. heat content in the Beaufort Gyre); and (4) how can the framework be used to assess the value of hypothetical observations in constraining poorly observed parts of the Arctic Ocean and the implied mechanisms responsible for the changes occurring in the Arctic. We will discuss the suggested geographic distribution of new observations to maximize the impact on improving our understanding of the general circulation, water mass distribution and hydrographic changes in the Arctic.
Minimum Copies of Schrödinger’s Cat State in the Multi-Photon System
Lu, Yiping; Zhao, Qing
2016-01-01
Multi-photon entanglement has been successfully studied by many theoretical and experimental groups. However, as the number of entangled photons increases, some problems are encountered, such as the exponential increase of time necessary to prepare the same number of copies of entangled states in experiment. In this paper, a new scheme is proposed based on Lagrange multipliers and feedback, which cuts down the required number of copies of Schrödinger's Cat state in multi-photon experiments, which are realized with some noise in actual measurements, while keeping the standard deviation of the fidelity error unchanged. It reduces the measuring time of the eight-photon Schrödinger's Cat state by about five percent compared with the scheme used in the usual planning of actual measurements, and moreover it guarantees the same low error in fidelity. In addition, we also applied the same approach to the simulation of ten-photon entanglement, and we found that it reduces, in principle, the required copies of Schrödinger's Cat state by about twenty-two percent compared with the conventionally used scheme of the uniform distribution; yet the distribution of optimized copies of the ten-photon Schrödinger's Cat state gives better fidelity estimation than the uniform distribution for the same number of copies of the ten-photon Schrödinger's Cat state. PMID:27576585
Importance-sampling computation of statistical properties of coupled oscillators
NASA Astrophysics Data System (ADS)
Gupta, Shamik; Leitão, Jorge C.; Altmann, Eduardo G.
2017-07-01
We introduce and implement an importance-sampling Monte Carlo algorithm to study systems of globally coupled oscillators. Our computational method efficiently obtains estimates of the tails of the distribution of various measures of dynamical trajectories corresponding to states occurring with (exponentially) small probabilities. We demonstrate the general validity of our results by applying the method to two contrasting cases: the driven-dissipative Kuramoto model, a paradigm in the study of spontaneous synchronization; and the conservative Hamiltonian mean-field model, a prototypical system of long-range interactions. We present results for the distribution of the finite-time Lyapunov exponent and a time-averaged order parameter. Among other features, our results show most notably that the distributions exhibit a vanishing standard deviation but a skewness that is increasing in magnitude with the number of oscillators, implying that nontrivial asymmetries and states yielding rare or atypical values of the observables persist even for a large number of oscillators.
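As a minimal illustration of why importance sampling helps with exponentially small probabilities, consider the standard Gaussian tail example below (not the oscillator models of the paper): direct Monte Carlo almost never samples the tail event, while a shifted proposal reweighted by the likelihood ratio estimates it accurately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate p = P(X > 4) for X ~ N(0, 1), a probability of order 3e-5.
n, a = 100_000, 4.0

# Draw from the tilted proposal N(a, 1), so tail samples are common.
y = rng.normal(a, 1.0, n)

# Likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2) reweights
# each proposal sample back to the target N(0, 1).
w = np.exp(-a * y + a**2 / 2)
p_hat = np.mean((y > a) * w)

# Direct Monte Carlo with the same n would typically see ~3 tail events;
# the importance-sampling estimate has far smaller relative error.
```

The same idea, biasing the sampling toward rare trajectories and correcting with weights, is what makes the tails of trajectory-observable distributions accessible in the coupled-oscillator setting.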
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Allmendinger, M.; Gnann, S.; Heisserer, T.; Bárdossy, A.
2017-12-01
The basic problem of geostatistics is to estimate the primary variable (e.g. groundwater quality, nitrate) at an unsampled location based on point measurements at locations in the vicinity. Typically, models are being used that describe the spatial dependence based on the geometry of the observation network. This presentation demonstrates methods that take the following properties additionally into account: the statistical distribution of the measurements, a different degree of dependence in different quantiles, censored measurements, the composition of categorical additional information in the neighbourhood (exhaustive secondary information), and the spatial dependence of a dependent secondary variable, possibly measured with a different observation network (non-exhaustive secondary data). Two modelling approaches are demonstrated individually and combined: The non-stationarity in the marginal distribution is accounted for by locally mixed distribution functions that depend on the composition of the categorical variable in the neighbourhood of each interpolation location. This methodology is currently being implemented for operational use at the environmental state agency of Baden-Württemberg. An alternative to co-Kriging in copula space with an arbitrary number of secondary parameters is presented: The method performs better than traditional techniques if the primary variable is undersampled and does not produce erroneous negative estimates. Even more, the quality of the uncertainty estimates is much improved. The worth of the secondary information is thoroughly evaluated. The improved geostatistical hydrogeological models are being analyzed using measurements of a large observation network (~2500 measurement locations) in the state of Baden-Württemberg (~36,000 km²). Typical groundwater quality parameters such as nitrate, chloride, barium, atrazine, and desethylatrazine are being assessed, cross-validated, and compared with traditional geostatistical methods.
The secondary information of land use is available on a 30m x 30m raster. We show that the presented methods are not only better estimators (e.g. in the sense of an average quadratic error), but exhibit a much more realistic structure of the uncertainty and hence are improvements compared to existing methods.
Kerr, William C; Greenfield, Thomas K
2007-10-01
To validate improved survey estimates of alcohol volume and new expenditures questions, these measures were aggregated and evaluated through comparison to sales data. Using the new measures, we examined their distributions by estimating the proportion of mean intake, heavy drinking days, and alcohol expenditures among drinkers grouped by volume. The 2000 National Alcohol Survey is a random digit dialed telephone survey of the United States with 7,612 respondents including 323 who were recontacted for drink ethanol measurement. Among drinkers, we utilized improved drink ethanol content estimates and beverage-specific graduated frequency measures to assess alcohol consumption and past month beverage-specific spending reports to estimate expenditures. Coverage of alcohol sales by the new measures was estimated to be 52.3% for consumption and 59.3% for expenditures. Coverage was best for wine at 92.1% of sales, but improved most for spirits from 37.2% to 55.2%, when empirical drink ethanol content was applied. Distribution estimates showed that the top 10% of drinkers drank 55.3% of the total alcohol consumed, accounted for 61.6% of all 5+ and nearly 80% of all 12+ drinking days. Spirits consumption was the most concentrated with the top decile consuming 62.9% of the total for this beverage. This decile accounted for 33% of total expenditures, even though its mean expenditure per drink was considerably lower ($0.79) than the bottom 50% of drinkers ($4.75). The distributions of mean alcohol intake and heavy drinking days are highly concentrated in the U.S. population. Lower expenditures per drink by the heaviest drinkers suggest substantial downward quality substitution, drinking in cheaper contexts or other bargain pricing strategies. Empirical drink ethanol estimates improved survey coverage of sales particularly for spirits, but significant under-coverage remains, highlighting need for further self-report measurement improvement.
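The concentration statistics reported above can be reproduced in form (not in value) on synthetic data. The lognormal volume distribution below is purely illustrative, chosen only because drinking volumes are strongly right-skewed; it is not fitted to the survey.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual drinking volumes for 10,000 drinkers; lognormal
# parameters are illustrative stand-ins for a right-skewed distribution.
volumes = rng.lognormal(mean=3.0, sigma=1.2, size=10_000)

# Share of total consumption accounted for by the top 10% of drinkers.
cutoff = np.quantile(volumes, 0.9)
top_share = volumes[volumes >= cutoff].sum() / volumes.sum()
```

With a right-skewed distribution of this kind, the top decile accounts for roughly half of total volume, the same qualitative pattern as the survey's finding that the top 10% of drinkers consumed 55.3% of all alcohol.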
Counting Raindrops and the Distribution of Intervals Between Them.
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.
2017-12-01
Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should not only be able to estimate the arrival rate but also the robustness of the underlying assumption concerning steady state. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device are citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 `Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker.
"A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
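The interval-based estimation described above can be sketched numerically. For a Poisson arrival process the intervals between drops are exponentially distributed with mean 1/λ, so λ follows from the mean interval, and the exponential shape itself is a check on the steady-state assumption. The simulated timestamps below stand in for the piezo sensor's output; the rate is a hypothetical value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical arrival rate (drops per second on the sensor surface);
# simulated exponential intervals stand in for measured drop timestamps.
lam_true = 20.0
intervals = rng.exponential(1.0 / lam_true, 5000)

# Maximum-likelihood estimate of the arrival rate from the intervals.
lam_hat = 1.0 / intervals.mean()

# Steady-state check: for exponential intervals the coefficient of
# variation (std / mean) should be close to 1. Clustered (non-steady)
# rain would push it well above 1.
cv = intervals.std() / intervals.mean()
```

In practice the intervals would be computed from successive drop arrival times registered by the Arduino, and departures of the interval histogram from the exponential shape flag non-stationary rain.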
NASA Technical Reports Server (NTRS)
Brown, Molly E.; Macauley, Molly
2012-01-01
Climate policy in the United States is currently guided by public-private partnerships and actions at the local and state levels. This mitigation strategy is made up of programs that focus on energy efficiency, renewable energy, agricultural practices and implementation of technologies to reduce greenhouse gases. How will policy makers know if these strategies are working, particularly at the scales at which they are being implemented? The NASA Carbon Monitoring System (CMS) will provide information on carbon dioxide fluxes derived from observations of Earth's land, ocean and atmosphere used in state-of-the-art models describing their interactions. This new modeling system could be used to assess the impact of specific policy interventions on CO2 reductions, enabling an iterative, results-oriented policy process. In January of 2012, the CMS team held a meeting with carbon policy and decision makers in Washington DC to describe the developing modeling system to policy makers. The NASA CMS will develop pilot studies to provide information across a range of spatial scales, consider carbon storage in biomass, and improve measures of the atmospheric distribution of carbon dioxide. The pilot involves multiple institutions (four NASA centers as well as several universities) and over 20 scientists in its work. This pilot study will generate CO2 flux maps for two years using observational constraints in NASA's state-of-the-art models. Bottom-up surface flux estimates will be computed using data-constrained land and ocean models; comparison of the different techniques will provide some knowledge of uncertainty in these estimates. Ensembles of atmospheric carbon distributions will be computed using an atmospheric general circulation model (GEOS-5), with perturbations to the surface fluxes and to transport.
Top-down flux estimates will be computed from observed atmospheric CO2 distributions (ACOS/GOSAT retrievals) alongside the forward-model fields, in conjunction with an inverse approach based on the CO2 model of GEOS-Chem. The forward model ensembles will be used to build understanding of relationships among surface flux perturbations, transport uncertainty and atmospheric carbon concentration. This will help construct uncertainty estimates and information on the true spatial resolution of the top-down flux calculations. The relationship between the top-down and bottom-up flux distributions will be documented. Because the goal of NASA CMS is to be policy relevant, the scientists involved in the flux modeling pilot need to understand and be focused on the needs of the climate policy and decision making community. If policy makers are to use CMS products, they must be aware of the modeling effort and begin to design policies that can be evaluated with this information. Improving estimates of carbon sequestered in forests, for example, will require information on the spatial variability of forest biomass that is far more explicit than is presently possible using only ground observations. Carbon mitigation policies being implemented by cities around the United States could be designed with the CMS data in mind, enabling sequential evaluation and subsequent improvements in incentives, structures and programs. The success of climate mitigation programs being implemented in the United States today will hinge on the depth of the relationship between scientists and their policy and decision making counterparts. Ensuring that there is two-way communication between data providers and users is important for the success both of the policies and the scientific products meant to support them.
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method. 
Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.
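The abundance calculation implied by the stated assumptions (known average territory size, known average pack size, non-overlapping territories) is straightforward. All numbers below are hypothetical placeholders, not the Idaho estimates.

```python
# Hypothetical inputs for the occupancy-to-abundance conversion.
occupied_area_km2 = 45_000   # area estimated occupied by wolf packs
territory_km2 = 900          # assumed average territory size
mean_pack_size = 6.5         # assumed average pack size

# Non-overlapping territories: pack count is occupied area over
# territory size; individuals are packs times average pack size.
n_packs = occupied_area_km2 / territory_km2    # 50 packs
n_wolves = n_packs * mean_pack_size            # 325 individuals
```

This makes clear why the assumptions must be re-validated periodically with field data: any bias in territory size or pack size propagates multiplicatively into the abundance estimate, even if occupancy itself is estimated well.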
Methods for Estimating Water Withdrawals for Mining in the United States, 2005
Lovelace, John K.
2009-01-01
The mining water-use category includes groundwater and surface water that is withdrawn and used for nonfuels and fuels mining. Nonfuels mining includes the extraction of ores, stone, sand, and gravel. Fuels mining includes the extraction of coal, petroleum, and natural gas. Water is used for mineral extraction, quarrying, milling, and other operations directly associated with mining activities. For petroleum and natural gas extraction, water often is injected for secondary oil or gas recovery. Estimates of water withdrawals for mining are needed for water planning and management. This report documents methods used to estimate withdrawals of fresh and saline groundwater and surface water for mining during 2005 for each county and county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands. Fresh and saline groundwater and surface-water withdrawals during 2005 for nonfuels- and coal-mining operations in each county or county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands were estimated. Fresh and saline groundwater withdrawals for oil and gas operations in counties of six states also were estimated. Water withdrawals for nonfuels and coal mining were estimated by using mine-production data and water-use coefficients. Production data for nonfuels mining included the mine location and weight (in metric tons) of crude ore, rock, or mineral produced at each mine in the United States, Puerto Rico, and the U.S. Virgin Islands during 2004. Production data for coal mining included the weight, in metric tons, of coal produced in each county or county equivalent during 2004. Water-use coefficients for mined commodities were compiled from various sources including published reports and written communications from U.S. Geological Survey National Water-use Information Program (NWUIP) personnel in several states. 
Water withdrawals for oil and gas extraction were estimated for six States including California, Colorado, Louisiana, New Mexico, Texas, and Wyoming, by using data from State agencies that regulate oil and gas extraction. Total water withdrawals for mining in a county were estimated by summing estimated water withdrawals for nonfuels mining, coal mining, and oil and gas extraction. The results of this study were distributed to NWUIP personnel in each State during 2007. NWUIP personnel were required to submit estimated withdrawals for numerous categories of use in their States to a national compilation team for inclusion in a national report describing water use in the United States during 2005. NWUIP personnel had the option of submitting the estimates determined by using the methods described in this report, a modified version of these estimates, or their own set of estimates or reported data. Estimated withdrawals resulting from the methods described in this report may not be included in the national report; therefore the estimates are not presented herein in order to avoid potential inconsistencies with the national report. Water-use coefficients for specific minerals also are not presented to avoid potential disclosure of confidential production data provided by mining operations to the U.S. Geological Survey.
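The production-times-coefficient method for nonfuels and coal mining reduces to a simple sum per county. The commodities, production figures, and coefficients below are hypothetical; the report deliberately withholds its actual coefficients to protect confidential production data.

```python
# Hypothetical county production (metric tons) and water-use
# coefficients (cubic meters of water per metric ton produced).
production_tonnes = {"crushed stone": 1_200_000, "sand and gravel": 800_000}
coeff_m3_per_tonne = {"crushed stone": 0.4, "sand and gravel": 0.25}

# County withdrawal estimate: sum of production x coefficient over
# all mined commodities in the county.
withdrawal_m3 = sum(
    production_tonnes[c] * coeff_m3_per_tonne[c] for c in production_tonnes
)
# 1,200,000 * 0.4 + 800,000 * 0.25 = 680,000 cubic meters
```

Oil and gas withdrawals, estimated separately from state regulatory data, would then be added to this commodity-based total to give the county's full mining withdrawal estimate.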
State Tracking and Fault Diagnosis for Dynamic Systems Using Labeled Uncertainty Graph.
Zhou, Gan; Feng, Wenquan; Zhao, Qi; Zhao, Hongbo
2015-11-05
Cyber-physical systems such as autonomous spacecraft, power plants and automotive systems become more vulnerable to unanticipated failures as their complexity increases. Accurate tracking of system dynamics and fault diagnosis are essential. This paper presents an efficient state estimation method for dynamic systems modeled as concurrent probabilistic automata. First, the Labeled Uncertainty Graph (LUG) method in the planning domain is introduced to describe the state tracking and fault diagnosis processes. Because the system model is probabilistic, the Monte Carlo technique is employed to sample the probability distribution of belief states. In addition, to address the sample impoverishment problem, an innovative look-ahead technique is proposed to recursively generate most likely belief states without exhaustively checking all possible successor modes. The overall algorithms incorporate two major steps: a roll-forward process that estimates system state and identifies faults, and a roll-backward process that analyzes possible system trajectories once the faults have been detected. We demonstrate the effectiveness of this approach by applying it to a real world domain: the power supply control unit of a spacecraft.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among the different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
ERIC Educational Resources Information Center
Kummerer, Sharon Elizabeth
2010-01-01
The American Speech-Language-Hearing Association (1996) estimated that 10% of the United States population has a disorder of speech, language, or hearing, with proportional distribution among members of racially and ethnically diverse groups. Individuals of Hispanic origin are the fastest-growing minority group in the country. Current national…
Flexural Properties of Eastern Hardwood Pallet Parts
John A. McLeod; Marshall S. White; Paul A. Ifju; Philip A. Araman
1991-01-01
Accurate estimates of the flexural properties of pallet parts are critical to the safe, yet efficient, design of wood pallets. To develop more accurate data for hardwood pallet parts, 840 stringers and 2,520 deckboards, representing 14 hardwood species, were sampled from 35 mills distributed throughout the Eastern United States. The parts were sorted by species,...
ERIC Educational Resources Information Center
Graves, Jennifer
2011-01-01
Using detailed longitudinal data for the state of California, this paper estimates the effect of year-round school calendars on nationally standardized test performance of traditionally disadvantaged students. The student subgroups studied in this paper are: low socioeconomic status, limited English proficiency, Hispanic and Latino, and African…
Neutron Imaging of Lithium Concentration for Validation of Li-Ion Battery State of Charge Estimation
2010-12-01
2008: Understanding liquid water distribution and removal phenomena in an operating PEMFC via neutron radiography. Journal of The...2008: Measurement of liquid water accumulation in a PEMFC with dead-ended anode. Journal of The Electrochemical Society, 155 (11), B1168–B1178
USDA-ARS?s Scientific Manuscript database
The soybean cyst nematode (SCN), Heterodera glycines Ichinohe, is distributed throughout the soybean (Glycine max [L.] Merr.) production areas of the United States and Canada. SCN remains the most economically important pathogen of soybean in North America; the most recent estimate of soybean yield...
Current status of Marek’s disease in the United States & worldwide based on a questionnaire survey
USDA-ARS?s Scientific Manuscript database
A questionnaire was widely distributed in 2011 to estimate the global prevalence of Marek’s disease (MD) and gain a better understanding of current control strategies and future concerns. A total of 112 questionnaires were returned representing 116 countries from sources including national branch s...
Selecting a sampling method to aid in vegetation management decisions in loblolly pine plantations
David R. Weise; Glenn R. Glover
1993-01-01
Objective methods to evaluate hardwood competition in young loblolly pine (Pinus taeda L.) plantations are not widely used in the southeastern United States. The ability of common sampling rules to accurately estimate hardwood rootstock attributes at low sampling intensities and across varying rootstock spatial distributions is unknown. Fixed area plot...
Terrestrial gross carbon dioxide uptake: global distribution and covariation with climate.
Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Arain, M Altaf; Baldocchi, Dennis; Bonan, Gordon B; Bondeau, Alberte; Cescatti, Alessandro; Lasslop, Gitta; Lindroth, Anders; Lomas, Mark; Luyssaert, Sebastiaan; Margolis, Hank; Oleson, Keith W; Roupsard, Olivier; Veenendaal, Elmar; Viovy, Nicolas; Williams, Christopher; Woodward, F Ian; Papale, Dario
2010-08-13
Terrestrial gross primary production (GPP) is the largest global CO2 flux and drives several ecosystem functions. We provide an observation-based estimate of this flux at 123 ± 8 petagrams of carbon per year (Pg C yr(-1)) using eddy covariance flux data and various diagnostic models. Tropical forests and savannahs account for 60%. GPP over 40% of the vegetated land is associated with precipitation. State-of-the-art process-oriented biosphere models used for climate predictions exhibit a large between-model variation in GPP's latitudinal patterns and show higher spatial correlations between GPP and precipitation, suggesting the existence of missing processes or feedback mechanisms which attenuate the vegetation response to climate. Our estimates of spatially distributed GPP and its covariation with climate can help improve coupled climate-carbon cycle process models.
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
NASA Astrophysics Data System (ADS)
Sweeney, C.; Ryerson, T. B.; Karion, A.; Peischl, J.; Petron, G.; Schnell, R. C.; Tsai, T.; Crosson, E.; Rella, C.; Trainer, M.; Frost, G. J.; Hardesty, R. M.; Montzka, S. A.; Dlugokencky, E. J.; Tans, P. P.
2013-12-01
New extraction technologies are making natural gas from shale and tight sand gas reservoirs in the United States (US) more accessible. As a result, the US has become the largest producer of natural gas in the world. This growth in natural gas production may result in increased leakage of methane, a potent greenhouse gas, offsetting the climate benefits of natural gas relative to other fossil fuels. Methane emissions from natural gas production are not well quantified because of the large variety of potential sources, the variability in production and operating practices, the uneven distribution of emitters, and a lack of verification of emission inventories with direct atmospheric measurements. Researchers at the NOAA Earth System Research Laboratory (ESRL) have used simple mass balance approaches to estimate emissions of CH4 from several natural gas and oil plays across the US. We will summarize the results of the available aircraft and ground-based atmospheric emissions estimates to better understand the spatial and temporal distribution of these emissions in the US.
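As a rough illustration of the mass-balance idea used in such aircraft studies, emissions can be approximated as mean wind speed times boundary-layer depth times the CH4 enhancement integrated across a downwind transect. All numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative inputs for a single downwind flight transect.
WIND = 5.0          # m/s, mean wind component perpendicular to the transect
PBL_DEPTH = 1000.0  # m, boundary-layer mixing depth
# CH4 enhancement above background (mol/m^3), sampled every 1000 m
ENHANCEMENT = [0.0, 2e-6, 5e-6, 8e-6, 5e-6, 2e-6, 0.0]
DX = 1000.0         # m, spacing between samples

def mass_balance_flux():
    """Integrate the cross-plume enhancement (rectangle rule) and
    multiply by wind speed and mixing depth -> mol CH4 per second."""
    integral = sum(ENHANCEMENT) * DX   # mol/m^2, integrated across the plume
    return WIND * PBL_DEPTH * integral

print(mass_balance_flux())  # mol CH4 per second
```

Real analyses additionally account for background variability, incomplete vertical mixing, and wind uncertainty, which dominate the error budget.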
Emissions of CH4 from natural gas production in the United States using aircraft-based observations
NASA Astrophysics Data System (ADS)
Sweeney, Colm; Karion, Anna; Petron, Gabrielle; Ryerson, Thomas; Peischl, Jeff; Trainer, Michael; Rella, Chris; Hardesty, Michael; Crosson, Eric; Montzka, Stephen; Tans, Pieter; Shepson, Paul; Kort, Eric
2014-05-01
New extraction technologies are making natural gas from shale and tight sand gas reservoirs in the United States (US) more accessible. As a result, the US has become the largest producer of natural gas in the world. This growth in natural gas production may result in increased leakage of methane, a potent greenhouse gas, offsetting the climate benefits of natural gas relative to other fossil fuels. Methane emissions from natural gas production are not well quantified because of the large variety of potential sources, the variability in production and operating practices, the uneven distribution of emitters, and a lack of verification of emission inventories with direct atmospheric measurements. Researchers at the NOAA Earth System Research Laboratory (ESRL) have used simple mass balance approaches in combination with isotopes and light alkanes to estimate emissions of CH4 from several natural gas and oil plays across the US. We will summarize the results of the available aircraft and ground-based atmospheric emissions estimates to better understand the spatial and temporal distribution of these emissions in the US.
Distributive routing and congestion control in wireless multihop ad hoc communication networks
NASA Astrophysics Data System (ADS)
Glauche, Ingmar; Krause, Wolfram; Sollacher, Rudolf; Greiner, Martin
2004-10-01
Due to their inherent complexity, engineered wireless multihop ad hoc communication networks represent a technological challenge. Having no mastering infrastructure, the nodes have to self-organize in such a way that, for example, network connectivity, good data-traffic performance, and robustness are guaranteed. In this contribution the focus is on routing and congestion control. First, random data traffic along shortest-path routes is studied by simulations as well as theoretical modeling. Measures of congestion, such as end-to-end time delay and relaxation times, are given. A scaling law of the average time delay with respect to network size is revealed and found to depend on the underlying network topology. In the second step, a distributive routing and congestion control scheme is proposed. Each node locally propagates its routing cost estimates and information about its congestion state to its neighbors, which then update their respective cost estimates. This allows for a flexible adaptation of end-to-end routes to the overall congestion state of the network. Compared to shortest-path routing, the critical network load is significantly increased.
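The local cost-propagation rule can be sketched as a distance-vector style update in which each node adds a congestion penalty for routing through a loaded neighbor. The 4-node topology, queue lengths, and penalty form below are illustrative assumptions, not the paper's model:

```python
# Hypothetical 4-node network: 0-1-2-3 in a ring; node 3 is congested.
NEIGHBORS = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
QUEUE_LEN = {0: 0, 1: 0, 2: 0, 3: 5}   # packets queued at each node
DEST = 2

def update_costs(cost, alpha=1.0):
    """One round of local exchange: each node adopts the cheapest
    neighbor estimate plus one hop plus that neighbor's congestion."""
    new = dict(cost)
    for n in NEIGHBORS:
        if n == DEST:
            continue
        new[n] = min(1 + cost[m] + alpha * QUEUE_LEN[m] for m in NEIGHBORS[n])
    return new

cost = {n: (0.0 if n == DEST else float('inf')) for n in NEIGHBORS}
for _ in range(5):
    cost = update_costs(cost)
# Node 0 routes via node 1: both 0-1-2 and 0-3-2 are two hops, but the
# congestion penalty at node 3 makes the route through it look costly.
print(cost[0])
```

With `alpha = 0` this degenerates to plain shortest-path (Bellman-Ford) routing; the penalty term is what lets routes adapt to the congestion state.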
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. To exploit the complementary properties of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternately sampling from the two distributions, which can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
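The alternating Gaussian/Cauchy offspring generation can be sketched at the level of a single niche. This is a minimal illustration, not the paper's full EDA; the niche values and the even/odd generation switch are assumptions:

```python
import math
import random

def sample_offspring(niche, gen, rng):
    """Sample one offspring around the niche mean: Gaussian on even
    generations (exploitation), heavy-tailed Cauchy on odd generations
    (exploration)."""
    mu = sum(niche) / len(niche)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in niche) / len(niche)) or 1e-9
    if gen % 2 == 0:
        return rng.gauss(mu, sigma)
    # Cauchy via inverse CDF: mu + sigma * tan(pi * (u - 0.5))
    return mu + sigma * math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(1)
niche = [0.9, 1.0, 1.1]
kids_gauss = [sample_offspring(niche, 0, rng) for _ in range(1000)]
kids_cauchy = [sample_offspring(niche, 1, rng) for _ in range(1000)]
spread = lambda xs: max(xs) - min(xs)
print(spread(kids_cauchy) > spread(kids_gauss))  # Cauchy tail reaches farther
```

The heavy Cauchy tail occasionally places offspring far from the niche, which is exactly the escape mechanism a multimodal search needs, while the Gaussian rounds refine the local optimum.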
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differs widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
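In the idealized noise-free case, the maximum-likelihood estimate of the transition probabilities is simply the row-normalized transition counts, which is the deterministic baseline the paper argues breaks down for noisy FACS data. A minimal sketch with a hypothetical two-state chain:

```python
from collections import defaultdict

def estimate_transitions(chain):
    """Maximum-likelihood estimate of a Markov transition matrix:
    row-normalized transition counts from an observed state sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(chain, chain[1:]):
        counts[a][b] += 1
    return {s: {t: n / sum(row.values()) for t, n in row.items()}
            for s, row in counts.items()}

# Invented observation sequence over two cell states 'A' and 'B'.
chain = ['A', 'A', 'B', 'A', 'B', 'B', 'B', 'A']
P = estimate_transitions(chain)
print(P['A']['B'])  # fraction of A->B transitions among all exits from A
```

When the observed counts are themselves noisy, these ratios become random variables, which is precisely why the paper treats the observed state probabilistically rather than plugging counts into this formula directly.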
Economic costs of nonmedical use of prescription opioids.
Hansen, Ryan N; Oster, Gerry; Edelsberg, John; Woody, George E; Sullivan, Sean D
2011-01-01
Although the economic costs of substance misuse have been extensively examined in the published literature, information on the costs of nonmedical use of prescription opioids is much more limited, despite being a significant and rapidly growing problem in the United States. We estimated the current economic burden of nonmedical use of prescription opioids in the United States in terms of direct substance abuse treatment, medical complications, productivity loss, and criminal justice. We distributed our broad cost estimates among the various drugs of misuse, including prescription opioids, down to the individual drug level. In 2006, the estimated total cost in the United States of nonmedical use of prescription opioids was $53.4 billion, of which $42 billion (79%) was attributable to lost productivity, $8.2 billion (15%) to criminal justice costs, $2.2 billion (4%) to drug abuse treatment, and $944 million to medical complications (2%). Five drugs--OxyContin, oxycodone, hydrocodone, propoxyphene, and methadone--accounted for two-thirds of the total economic burden. The economic cost of nonmedical use of prescription opioids in the United States totals more than $50 billion annually; lost productivity and crime account for the vast majority (94%) of these costs.
Estrada-Peña, A
1998-11-01
Geostatistics (cokriging) was used to model the cross-correlated information between satellite-derived vegetation and climate variables and the distribution of the tick Ixodes scapularis (Say) in the Nearctic. The output was used to map the habitat suitability for I. scapularis on a continental scale. A database of the localities where I. scapularis was collected in the United States and Canada was developed from a total of 346 published and geocoded records. This database was cross-correlated with satellite imagery from the Advanced Very High Resolution Radiometer (AVHRR) sensor obtained from 1984 to 1994 over the Nearctic at 10-d intervals, with a resolution of 8 km per pixel. Eight climate and vegetation variables were tabulated from this imagery. A cokriging system was generated to exploit satellite-derived data and to estimate the distribution of I. scapularis. Results obtained using 2 vegetation (standard NDVI) and 4 temperature variables closely agreed with actual records of the tick, with a sensitivity of 0.97 and a specificity of 0.89, and 6 and 4% false-positive and false-negative sites, respectively. Such statistical analysis can be used to guide field work toward the correct interpretation of the distribution limits of I. scapularis and to make predictions about the impact of global change on tick range.
Trotta-Moreu, Nuria; Lobo, Jorge M
2010-02-01
Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 +/- 29 versus 18 +/- 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
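The stacking procedure (threshold each continuous suitability map into presence/absence, then sum the binary layers cell by cell) can be sketched as follows; the species names, grid, and suitability scores are invented for illustration:

```python
# Hypothetical suitability scores (0-100) for 3 species on a 4-cell grid.
SUITABILITY = {
    'sp1': [90, 60, 20, 5],
    'sp2': [80, 55, 40, 10],
    'sp3': [70, 30, 25, 15],
}

def richness(threshold):
    """Convert each continuous suitability map to presence/absence at
    the given threshold, then sum the binary maps cell by cell."""
    n_cells = len(next(iter(SUITABILITY.values())))
    return [sum(1 for scores in SUITABILITY.values() if scores[c] >= threshold)
            for c in range(n_cells)]

print(richness(50))   # permissive threshold: more predicted presences
print(richness(75))   # strict threshold: fewer species per cell
```

The threshold sweep in the paper amounts to running this summation at 21 different cutoffs and comparing each resulting richness map against the well-surveyed states.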
Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states
NASA Astrophysics Data System (ADS)
de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.
2015-12-01
Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
One-sided measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Wen-Fei; Zhen, Yi-Zheng; Zheng, Yu-Lin; Li, Li; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai
2018-01-01
The measurement-device-independent quantum key distribution (MDI-QKD) protocol was proposed to remove all detector side channel attacks, but its security relies on trusted encoding systems. Here we propose a one-sided MDI-QKD (1SMDI-QKD) protocol, which enjoys the detection-loophole-free advantage and at the same time weakens the state-preparation assumption in MDI-QKD. The 1SMDI-QKD can be regarded as a modified MDI-QKD in which Bob's encoding system is trusted while Alice's is uncharacterized. For the practical implementation, we also provide a scheme utilizing a coherent light source with an analytical two-decoy-state estimation method. Simulation with realistic experimental parameters shows that the protocol has a promising performance and thus can be applied to practical QKD applications.
Maps showing gas-hydrate distribution off the east coast of the United States
Dillon, William P.; Fehlhaber, Kristen L.; Coleman, Dwight F.; Lee, Myung W.; Hutchinson, Deborah R.
1995-01-01
These maps present the inferred distribution of natural-gas hydrate within the sediments of the eastern United States continental margin (Exclusive Economic Zone) in the offshore region from Georgia to New Jersey (fig. 1). The maps, which were created on the basis of seismic interpretations, represent the first attempt to map volume estimates for gas hydrate. Gas hydrate forms a large reservoir for methane in oceanic sediments. Therefore it potentially may represent a future source of energy and it may influence climate change because methane is a very effective greenhouse gas. Hydrate breakdown probably is a controlling factor for sea-floor landslides, and its presence has significant effect on the acoustic velocity of sea-floor sediments.
State and parameter estimation of the heat shock response system using Kalman and particle filters.
Liu, Xin; Niranjan, Mahesan
2012-06-01
Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations which, when their parameters are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) the extended Kalman filter (EKF), (ii) the unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of the cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. A software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
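A minimal bootstrap particle filter for joint state and parameter estimation, in the spirit of the approach the paper evaluates, though on a toy linear model rather than the heat shock system; the model, noise levels, and parameter jitter are all assumptions:

```python
import math
import random

def particle_filter(obs, n=2000, seed=0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + process noise,
    with the unknown parameter a estimated jointly by augmenting the
    particle state with it."""
    rng = random.Random(seed)
    parts = [(rng.uniform(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(n)]
    for y in obs:
        # propagate; jitter the parameter slightly to avoid sample collapse
        parts = [(a + rng.gauss(0.0, 0.01), a * x + rng.gauss(0.0, 0.1))
                 for a, x in parts]
        # weight by the Gaussian observation likelihood y ~ N(x, 0.1^2)
        w = [math.exp(-((y - x) ** 2) / (2 * 0.1 ** 2)) for _, x in parts]
        parts = rng.choices(parts, weights=w, k=n)   # resample
    return sum(a for a, _ in parts) / n              # posterior mean of a

# Synthetic observations from a chain with true a = 0.8 (assumed).
rng = random.Random(42)
x, ys = 1.0, []
for _ in range(30):
    x = 0.8 * x + rng.gauss(0.0, 0.1)
    ys.append(x + rng.gauss(0.0, 0.1))
a_hat = particle_filter(ys)
print(round(a_hat, 2))
```

Augmenting the state with the parameter and relying on resampling (rather than linearization) is what lets the particle filter converge from initial guesses far from the truth, unlike the EKF/UKF behavior the paper reports.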
The distribution of common construction materials at risk to acid deposition in the United States
NASA Astrophysics Data System (ADS)
Lipfert, Frederick W.; Daum, Mary L.
Information on the geographic distribution of various types of exposed materials is required to estimate the economic costs of damage to construction materials from acid deposition. This paper focuses on the identification, evaluation and interpretation of data describing the distributions of exterior construction materials, primarily in the United States. This information could provide guidance on how data needed for future economic assessments might be acquired in the most cost-effective ways. Materials distribution surveys from 16 cities in the U.S. and Canada and five related databases from government agencies and trade organizations were examined. Data on residential buildings are more commonly available than on nonresidential buildings; little geographically resolved information on distributions of materials in infrastructure was found. Survey results generally agree with the appropriate ancillary databases, but the usefulness of the databases is often limited by their coarse spatial resolution. Information on those materials which are most sensitive to acid deposition is especially scarce. Since a comprehensive error analysis has never been performed on the data required for an economic assessment, it is not possible to specify the corresponding detailed requirements for data on the distributions of materials.
Estimating mercury emissions resulting from wildfire in forests of the Western United States.
Webster, Jackson P; Kane, Tyler J; Obrist, Daniel; Ryan, Joseph N; Aiken, George R
2016-10-15
Understanding the emissions of mercury (Hg) from wildfires is important for quantifying global atmospheric Hg sources. Emissions of Hg from soils resulting from wildfires in the Western United States were estimated for the 2000 to 2013 period, and the potential emission of Hg from forest soils was assessed as a function of forest type and soil heating. Wildfire released an annual average of 3100 ± 1900 kg Hg y(-1) for the years spanning 2000-2013 in the 11 states within the study area. This estimate is nearly 5-fold lower than previous estimates for the study region. The lower emission estimates are attributed to the inclusion of fire severity within burn perimeters. Within reported wildfire perimeters, the average distribution of low, moderate, and high severity burns was 52, 29, and 19% of the total area, respectively. A review of literature data suggests that low severity burning does not result in soil heating, moderate severity fire results in shallow soil heating, and high severity fire results in relatively deep soil heating (<5 cm). Using this approach, emission factors for high severity burns ranged from 58 to 640 μg Hg kg-fuel(-1). In contrast, low severity burns have emission factors estimated at only 18-34 μg Hg kg-fuel(-1). In this estimate, wildfire is predicted to release 1-30 g Hg ha(-1) from Western United States forest soils, while above-ground fuels are projected to contribute an additional 0.9 to 7.8 g Hg ha(-1). Land cover types with low biomass (desert scrub) are projected to release less than 1 g Hg ha(-1). After soil sources, fuel source contributions to total Hg emissions generally followed the order duff > wood > foliage > litter > branches.
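The severity-weighted bookkeeping behind such estimates can be sketched as area share times fuel consumed times a severity-specific emission factor. The fuel loads and the moderate-severity factor below are invented placeholders; the low- and high-severity factors are merely chosen within the ranges quoted above:

```python
# Illustrative severity-weighted Hg emission estimate for one fire.
SEVERITY_FRACTION = {'low': 0.52, 'moderate': 0.29, 'high': 0.19}
EMISSION_FACTOR = {'low': 25e-6, 'moderate': 100e-6, 'high': 350e-6}  # g Hg / kg fuel
FUEL_LOAD = {'low': 5_000, 'moderate': 15_000, 'high': 30_000}        # kg fuel / ha

def hg_emissions(burned_ha):
    """Sum Hg released over severity classes: area share x fuel
    consumed per hectare x severity-specific emission factor (grams)."""
    return sum(burned_ha * SEVERITY_FRACTION[s]
               * FUEL_LOAD[s] * EMISSION_FACTOR[s]
               for s in SEVERITY_FRACTION)

print(hg_emissions(10_000))  # grams of Hg for a hypothetical 10,000 ha fire
```

With these placeholder numbers the per-hectare release works out to roughly 2.5 g Hg ha(-1), which happens to fall inside the 1-30 g Hg ha(-1) range the abstract reports; the point of the sketch is only the structure of the calculation.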
Roohi, Shahrokh; Grinnell, Margaret; Sandoval, Michelle; Cohen, Nicole J.; Crocker, Kimberly; Allen, Christopher; Dougherty, Cindy; Jolly, Julian; Pesik, Nicki
2018-01-01
The Centers for Disease Control and Prevention (CDC) Quarantine Stations distribute select lifesaving drug products that are not commercially available or are in limited supply in the United States for emergency treatment of certain health conditions. Following a retrospective analysis of shipment records, the authors estimated an average of 6.66 hours saved per shipment when drug products were distributed from quarantine stations compared to a hypothetical centralized site from CDC headquarters in Atlanta, GA. This evaluation supports the continued use of a decentralized model which leverages CDC's regional presence and maximizes efficiency in the distribution of lifesaving drugs. PMID:25779896
Grizzly bear density in Glacier National Park, Montana
Kendall, K.C.; Stetz, J.B.; Roon, David A.; Waits, L.P.; Boulanger, J.B.; Paetkau, David
2008-01-01
We present the first rigorous estimate of grizzly bear (Ursus arctos) population density and distribution in and around Glacier National Park (GNP), Montana, USA. We used genetic analysis to identify individual bears from hair samples collected via 2 concurrent sampling methods: 1) systematically distributed, baited, barbed-wire hair traps and 2) unbaited bear rub trees found along trails. We used Huggins closed mixture models in Program MARK to estimate total population size and developed a method to account for heterogeneity caused by unequal access to rub trees. We corrected our estimate for lack of geographic closure using a new method that utilizes information from radiocollared bears and the distribution of bears captured with DNA sampling. Adjusted for closure, the average number of grizzly bears in our study area was 240.7 (95% CI = 202–303) in 1998 and 240.6 (95% CI = 205–304) in 2000. Average grizzly bear density was 30 bears/1,000 km2, with 2.4 times more bears detected per hair trap inside than outside GNP. We provide baseline information important for managing one of the few remaining populations of grizzlies in the contiguous United States.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, D.; Gurney, Kevin R.; Geethakumar, Sarath
2013-04-01
In this study we present onroad fossil fuel CO2 emissions estimated by the Vulcan Project, an effort quantifying fossil fuel CO2 emissions for the U.S. at high spatial and temporal resolution. This high-resolution data, aggregated at the state level and classified into broad road and vehicle type categories, is compared to a commonly used national-average approach. We find that the use of national averages incurs state-level biases for road groupings that are almost twice as large as for vehicle groupings. The uncertainty for all groups exceeds the bias, and both quantities are positively correlated with total state emissions. States with the largest emissions totals are typically similar to one another in terms of emissions fraction distribution across road and vehicle groups, while smaller-emitting states have a wider range of variation in all groups. Errors in reduction estimates as large as ±60%, corresponding to ±0.2 MtC, are found for a national-average emissions mitigation strategy focused on a 10% emissions reduction from a single vehicle class, such as passenger gas vehicles or heavy diesel trucks. Recommendations are made for reducing CO2 emissions uncertainty by addressing its main drivers: VMT and fuel efficiency uncertainty.
TRMM rainfall estimates coupled with Bell (1969) methodology for extreme rainfall characterization
NASA Astrophysics Data System (ADS)
Schiavo Bernardi, E.; Allasia, D.; Basso, R.; Freitas Ferreira, P.; Tassi, R.
2015-06-01
The lack of rainfall data in Brazil, and in particular in Rio Grande do Sul State (RS), hinders the understanding of the spatial and temporal distribution of rainfall, especially in the case of the more complex extreme events. In this context, rainfall estimation from remote sensors is seen as an alternative to the scarcity of rainfall gauges. However, as they are indirect measures, such estimates need validation. This paper aims to verify the applicability of Tropical Rainfall Measuring Mission (TRMM) satellite information for extreme rainfall determination in RS. The analysis was accomplished at temporal scales ranging from 5 min to daily rainfall, while the spatial distribution of rainfall was investigated by means of regionalization. An initial test verified TRMM rainfall estimates against rainfall measured at gauges for the 1998-2013 period, considering different durations and return periods (RP). Results indicated that, for RPs of 2, 5, 10 and 15 years, TRMM overestimated daily rainfall by 24.7% on average. As the TRMM minimum time-step is 3 h, in order to verify shorter-duration rainfall, the TRMM data were adapted to fit Bell's (1969) generalized IDF formula (based on the existence of similarity between the mechanisms of extreme rainfall events, as they are associated with convective cells). Bell's equation error against measured precipitation was around 5-10%, varying with location, RP and duration, while the coupled Bell+TRMM error was around 10-35%. However, the errors were regionally distributed, allowing a correction to be implemented that halved these values. These findings in turn permitted the use of TRMM+Bell estimates to improve the understanding of the spatiotemporal distribution of extreme rainfall events.
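The Bell (1969) relation coupled with TRMM here can be sketched directly. The coefficients below are the commonly cited form of Bell's generalized depth-duration-frequency relation; the index depth used in the example is a hypothetical value, not taken from the study:

```python
import math

def bell_rainfall_depth(T, t, p10_60):
    """Bell (1969) generalized IDF relation.

    Estimates rainfall depth for return period T (years, 2-100) and
    duration t (minutes, 5-120) from the index depth p10_60: the
    60-minute depth with a 10-year return period.
    """
    if not (2 <= T <= 100 and 5 <= t <= 120):
        raise ValueError("Bell's relation holds for T in [2,100] yr, t in [5,120] min")
    return (0.21 * math.log(T) + 0.52) * (0.54 * t ** 0.25 - 0.50) * p10_60

# Hypothetical index depth of 40 mm (10-yr, 60-min rainfall); at T=10,
# t=60 the relation should approximately reproduce the index depth.
depth = bell_rainfall_depth(T=10, t=60, p10_60=40.0)
```

In the study's workflow, the index depth would come from the TRMM-derived daily estimates after regional bias correction rather than from a gauge.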
Comparing four methods to estimate usual intake distributions.
Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P
2011-07-01
The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. 
Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks' when sample size was small, because the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over another.
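The shared idea behind these methods, removing within-person (day-to-day) noise from short-term intake means, can be illustrated with a basic between/within variance-ratio shrinkage. This is a simplified sketch of that common core, not the ISU, NCI, MSM or SPADE procedure itself (those also handle skewness transformations and covariates):

```python
import statistics

def shrunken_usual_intake(recalls_by_person):
    """Shrink each person's short-term mean toward the population mean.

    recalls_by_person: list of lists, each inner list holding k >= 2
    repeated 24-h recall values for one person (equal k assumed).
    Returns estimated usual intakes, one per person.
    """
    person_means = [statistics.mean(r) for r in recalls_by_person]
    grand_mean = statistics.mean(person_means)
    # Within-person variance, pooled over persons
    s2_w = statistics.mean(statistics.variance(r) for r in recalls_by_person)
    k = len(recalls_by_person[0])
    # The variance of person means includes s2_w / k of sampling noise
    s2_b = max(statistics.variance(person_means) - s2_w / k, 0.0)
    denom = s2_b + s2_w / k
    shrink = s2_b / denom if denom > 0 else 0.0
    return [grand_mean + shrink * (m - grand_mean) for m in person_means]
```

A high within-person variance ratio (large r(var)) drives the shrinkage factor toward zero, which is exactly the regime where the abstract reports the largest biases.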
NASA Astrophysics Data System (ADS)
Singh, Baljinder; Singh, Janpreet; Kaur, Jagdish; Moudgil, R. K.; Tripathi, S. K.
2016-06-01
Nanocrystalline Cadmium Sulfide (nc-CdS) thin films have been prepared on well-cleaned glass substrate at room temperature (300 K) by thermal evaporation technique using inert gas condensation (IGC) method. X-ray diffraction (XRD) analysis reveals that the films crystallize in hexagonal structure with preferred orientation along [002] direction. Scanning electron microscope (SEM) and Transmission electron microscope (TEM) studies reveal that grains are spherical in shape and uniformly distributed over the glass substrates. The optical band gap of the film is estimated from the transmittance spectra. Electrical parameters such as Hall coefficient, carrier type, carrier concentration, resistivity and mobility are determined using Hall measurements at 300 K. Transit time and mobility are estimated from Time of Flight (TOF) transient photocurrent technique in gap cell configuration. The measured values of electron drift mobility from TOF and Hall measurements are of the same order. Constant Photocurrent Method in ac-mode (ac-CPM) is used to measure the absorption spectra in low absorption region. By applying derivative method, we have converted the measured absorption data into a density of states (DOS) distribution in the lower part of the energy gap. The value of Urbach energy, steepness parameter and density of defect states have been calculated from the absorption and DOS spectra.
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. 
This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
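The combined population and areal weighting that the study favors can be sketched as follows. The data layout and field names are illustrative assumptions, not the authors' implementation: county counts are first split among a county's tracts in proportion to population, then moved into the target zone by areal overlap:

```python
def combined_pop_areal_weighting(county_counts, tracts):
    """Allocate county-level counts to a target zone.

    county_counts: {county_id: event count}
    tracts: list of dicts with keys 'county', 'pop', and 'area_frac'
            (fraction of the tract's area falling inside the zone).
    """
    # Total tract population per county, for the population weights
    county_pop = {}
    for t in tracts:
        county_pop[t['county']] = county_pop.get(t['county'], 0) + t['pop']
    zone_count = 0.0
    for t in tracts:
        if county_pop[t['county']] == 0:
            continue
        # Population-weighted share of the county's count for this tract
        tract_share = county_counts[t['county']] * t['pop'] / county_pop[t['county']]
        # Areal-weighted share of the tract's count inside the zone
        zone_count += tract_share * t['area_frac']
    return zone_count
```

Using tract population as the first weighting stage is what distinguishes this from simple areal weighting, which would allocate counts by area alone and ignore where people actually live.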
Final Technical Report Power through Policy: "Best Practices" for Cost-Effective Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhoads-Weaver, Heather; Gagne, Matthew; Sahl, Kurt
2012-02-28
Power through Policy: 'Best Practices' for Cost-Effective Distributed Wind is a U.S. Department of Energy (DOE)-funded project to identify distributed wind technology policy best practices and to help policymakers, utilities, advocates, and consumers examine their effectiveness using a pro forma model. Incorporating a customized feed from the Database of State Incentives for Renewables and Efficiency (DSIRE), the Web-based Distributed Wind Policy Comparison Tool (Policy Tool) is designed to assist state, local, and utility officials in understanding the financial impacts of different policy options to help reduce the cost of distributed wind technologies. The project's final products include the Distributed Wind Policy Comparison Tool, found at www.windpolicytool.org, and its accompanying documentation: Distributed Wind Policy Comparison Tool Guidebook: User Instructions, Assumptions, and Case Studies. With only two initial user inputs required, the Policy Tool allows users to adjust and test a wide range of policy-related variables through a user-friendly dashboard interface with slider bars. The Policy Tool is populated with a variety of financial variables, including turbine costs, electricity rates, policies, and financial incentives; economic variables including discount and escalation rates; as well as technical variables that impact electricity production, such as turbine power curves and wind speed. The Policy Tool allows users to change many of the variables, including the policies, to gauge the expected impacts that various policy combinations could have on the cost of energy (COE), net present value (NPV), internal rate of return (IRR), and the simple payback of distributed wind projects ranging in size from 2.4 kilowatts (kW) to 100 kW. The project conducted case studies to demonstrate how the Policy Tool can provide insights into 'what if' scenarios and also allow the current status of incentives to be examined or defended when necessary.
The ranking of distributed wind state policy and economic environments summarized in the attached report, based on the Policy Tool's default COE results, highlights favorable market opportunities for distributed wind growth as well as market conditions ripe for improvement. Best practices for distributed wind state policies are identified through an evaluation of their effect on improving the bottom line of project investments. The case studies and state rankings were based on incentives, power curves, and turbine pricing as of 2010, and may not match the current results from the Policy Tool. The Policy Tool can be used to evaluate the ways that a variety of federal and state policies and incentives impact the economics of distributed wind (and subsequently its expected market growth). It also allows policymakers to determine the impact of policy options, addressing market challenges identified in the U.S. DOE's '20% Wind Energy by 2030' report and helping to meet COE targets. In providing a simple and easy-to-use policy comparison tool that estimates financial performance, the Policy Tool and guidebook are expected to enhance market expansion by the small wind industry by increasing and refining the understanding of distributed wind costs, policy best practices, and key market opportunities in all 50 states. This comprehensive overview and customized software to quickly calculate and compare policy scenarios represent a fundamental step in allowing policymakers to see how their decisions impact the bottom line for distributed wind consumers, while estimating the relative advantages of different options available in their policy toolboxes. Interested stakeholders have suggested numerous ways to enhance and expand the initial effort to develop an even more user-friendly Policy Tool and guidebook, including the enhancement and expansion of the current tool, and conducting further analysis. 
The report and the project's Guidebook include further details on possible next steps. NREL Report No. BK-5500-53127; DOE/GO-102011-3453.
Agricultural pesticide emissions associated with common crops in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benjey, W.G.
Annual emissions for the year 1987 from the application of agricultural pesticides have been estimated by crop type and county for the United States using a geographic information system. The emission estimates are based upon computed volatilization rates accounting for the properties of each pesticide, evaporation rates, mode of application (surface or soil incorporation) and the percent intercepted by leaves. Key pesticide properties include the Henry's Law constant, half-life in soil and the organic carbon partitioning coefficient. The volatilization rates are multiplied by the amount of pesticide applied by crop acreage in each county, as determined from agricultural census and pesticide sales data. The geographic distributions of the dominant emissions, such as atrazine and diazinon, are presented by crop type and state. For a given pesticide, the geographic variability is controlled principally by the amount applied and by water availability as reflected in evaporation rates.
Method for Estimating Water Withdrawals for Livestock in the United States, 2005
Lovelace, John K.
2009-01-01
Livestock water use includes ground water and surface water associated with livestock watering, feedlots, dairy operations, and other on-farm needs. The water may be used for drinking, cooling, sanitation, waste disposal, and other needs related to the animals. Estimates of water withdrawals for livestock are needed for water planning and management. This report documents a method used to estimate withdrawals of fresh ground water and surface water for livestock in 2005 for each county and county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands. Categories of livestock included dairy cattle, beef and other cattle, hogs and pigs, laying hens, broilers and other chickens, turkeys, sheep and lambs, all goats, and horses (including ponies, mules, burros, and donkeys). Use of the method described in this report could result in more consistent water-withdrawal estimates for livestock that can be used by water managers and planners to determine water needs and trends across the United States. Water withdrawals for livestock in 2005 were estimated by using water-use coefficients, in gallons per head per day for each animal type, and livestock-population data. Coefficients for various livestock for most States were obtained from U.S. Geological Survey water-use program personnel or U.S. Geological Survey water-use publications. When no coefficient was available for an animal type in a State, the median value of reported coefficients for that animal was used. Livestock-population data were provided by the National Agricultural Statistics Service. County estimates were further divided into ground-water and surface-water withdrawals for each county and county equivalent. County totals from 2005 were compared to county totals from 1995 and 2000. 
Large deviations from 1995 or 2000 livestock withdrawal estimates were investigated and generally were due to comparison with reported withdrawals, differences in estimation techniques, differences in livestock coefficients, or use of livestock-population data from different sources. The results of this study were distributed to U.S. Geological Survey water-use program personnel in each State during 2007. Water-use program personnel are required to submit estimated withdrawals for all categories of use in their States to the National Water-Use Information Program for inclusion in a national report describing water use in the United States during 2005. Water-use program personnel had the option of submitting these estimates, a modified version of these estimates, or their own set of estimates or reported data. Estimated withdrawals resulting from the method described in this report are not presented herein to avoid potential inconsistencies with estimated withdrawals for livestock that will be presented in the national report, as different methods used by water-use personnel may result in different withdrawal estimates. Estimated withdrawals also are not presented to avoid potential disclosure of data for individual livestock operations.
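The coefficient-times-population method the report documents reduces to a small calculation per county. The animal types, coefficient, and ground-water fraction below are made-up illustrations, not USGS values:

```python
def livestock_withdrawal_mgal_per_day(populations, coefficients, gw_fraction):
    """Estimate county livestock water withdrawals.

    populations: {animal_type: head count in the county}
    coefficients: {animal_type: gallons per head per day}
    gw_fraction: fraction assigned to ground water; the remainder
    is assigned to surface water.
    Returns withdrawals in million gallons per day (Mgal/d).
    """
    total_gal = sum(populations[a] * coefficients.get(a, 0.0) for a in populations)
    total_mgal = total_gal / 1e6  # gallons/day -> Mgal/day
    return {'ground': total_mgal * gw_fraction,
            'surface': total_mgal * (1.0 - gw_fraction)}
```

Per the report, when a state lacked a coefficient for an animal type, the median of coefficients reported elsewhere for that animal would be substituted before this step.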
Lhila, Aparna
2009-10-01
This paper estimates the relationship between state and county income inequality and low birthweight (LBW) in the U.S. It examines whether more unequal societies are also less healthy because such societies have lower investment in population health. The model includes an extensive list of community and individual controls and community fixed-effects. Results show that unequal states in fact have greater social investments, and absent these investments children born in such states would be more likely to be LBW. Using alternate measures of inequality reveals that income inequality in the upper tail of the income distribution is not related to LBW; but inequality in the lower tail of the income distribution is associated with increased LBW where the supply of healthcare mitigates the effect of income inequality. Consistent with prior findings, county income inequality is not significantly related to LBW.
Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...
2016-06-07
The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of the interplay between its charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density functional theory methods. However, the energy differences and spin distributions differ. A detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates of spin moments, distributions, energy differences and energy gaps.
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (i.e., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
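The linear consensus-plus-innovations structure described above can be sketched for a scalar parameter. The gains and network here are illustrative constants; the paper's vector-field setting and its carefully designed time-varying weight sequences (which produce the two time scales) are not reproduced:

```python
def consensus_innovations_step(x, z, h, neighbors, alpha, beta):
    """One update of a consensus + innovations estimator.

    x: current local estimates of a scalar parameter, one per sensor.
    z: latest local measurements, with z[i] ~ h[i]*theta + noise.
    h: local observation gains.
    neighbors: adjacency list from the gossip protocol.
    beta weights the consensus (information flow) term,
    alpha weights the innovations (sensing) term.
    """
    x_new = list(x)
    for i in range(len(x)):
        consensus = sum(x[j] - x[i] for j in neighbors[i])   # agreement pull
        innovation = h[i] * (z[i] - h[i] * x[i])             # new measurement pull
        x_new[i] = x[i] + beta * consensus + alpha * innovation
    return x_new
```

With noiseless measurements and a connected graph, repeated application drives every local estimate toward the true parameter, mirroring the consistency result in the abstract.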
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1000 mi2) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Road March Performance of Special Operations Soldiers Carrying Various Loads and Load Distributions
1993-01-01
groups were used (Ramos and Knapik, 1979; Knapik et al., 1980; Hermansen et al., 1972). In the hand-grip test, the soldier, in a seated position...Inventory (Dishman et al., 1980). The POMS was a 65-item questionnaire which provided measures of six mood states. Soldiers scored each item on a five-point...estimates require individual calibration (Acheson et al., 1980) and heart rate can be influenced by a number of factors including training state (Saltin
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
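The convolution-with-conditioning idea can be sketched with discrete travel-time bins: instead of convolving two independent link distributions, the downstream distribution is conditioned on the upstream time, capturing link-to-link speed correlation. The probability tables below are illustrative, not FPM data:

```python
def route_time_pmf(p1, cond_p2):
    """Route travel-time PMF for two successive links.

    p1: {t1: prob}, travel-time PMF for the upstream link.
    cond_p2: {t1: {t2: prob}}, downstream travel-time PMF conditioned
    on the upstream time. With independent links every cond_p2[t1]
    would be identical and this reduces to an ordinary convolution.
    """
    route = {}
    for t1, pr1 in p1.items():
        for t2, pr2 in cond_p2[t1].items():
            t = t1 + t2                       # route time for this pair
            route[t] = route.get(t, 0.0) + pr1 * pr2
    return route
```

Reliability measures such as a 95th-percentile buffer time can then be read directly off the resulting route PMF.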
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data, if available) accompanied by some arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., an initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model.
Such approximation approaches are also briefed in the presentation.
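The measurement-update step of the information form, which needs no initial mean or covariance, can be sketched in one dimension. This is a minimal illustration of why a zero-information start is legitimate, not the TRF implementation (which is multivariate and also propagates the information pair through the dynamics):

```python
def information_filter_updates(measurements, h, r):
    """Scalar information-filter measurement updates with no prior.

    Starts from zero information (Y = 0, y = 0), i.e. no initial
    guess, and fuses scalar measurements z = h*x + noise of variance r.
    The state estimate x = y / Y is defined as soon as Y > 0.
    """
    Y, y = 0.0, 0.0            # information matrix and vector (scalars here)
    for z in measurements:
        Y += h * h / r         # Y += H^T R^-1 H
        y += h * z / r         # y += H^T R^-1 z
    return y / Y if Y > 0 else None   # posterior mean; variance is 1/Y
```

A standard Kalman filter cannot start this way because it propagates the covariance P = 1/Y, which is infinite before the first measurement; the information form simply carries Y = 0.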
NASA Astrophysics Data System (ADS)
Marlon, J. R.; Howe, P. D.; Leiserowitz, A.
2013-12-01
For climate change communication to be most effective, messages should be targeted to the characteristics of local audiences. In the U.S., 'Six Americas' have been identified among the public based on their response to the climate change issue. The distribution of these different 'publics' varies between states and communities, yet data about public opinion at the sub-national scale remains scarce. In this presentation, we describe a methodology to statistically downscale results from national-level surveys about the Six Americas, climate literacy, and other aspects of public opinion to smaller areas, including states, metropolitan areas, and counties. The method utilizes multilevel regression with poststratification (MRP) to model public opinion at various scales using a large national-level survey dataset. We present state and county-level estimates of two key beliefs about climate change: belief that climate change is happening, and belief in the scientific consensus about climate change. We further present estimates of how the Six Americas vary across the U.S.
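The poststratification half of MRP reduces to a population-weighted average of cell-level model predictions within each small area. The sketch below assumes the multilevel regression has already produced those predictions; cell names and numbers are illustrative:

```python
def poststratify(cell_predictions, cell_counts):
    """Poststratification step of MRP for one small area.

    cell_predictions: {cell: modeled share holding a belief}, where a
    cell is a demographic type (e.g. an age x education combination).
    cell_counts: {cell: census population of that type in the area}.
    Returns the area-level estimate: the population-weighted average
    of the cell-level predictions.
    """
    total = sum(cell_counts.values())
    return sum(cell_predictions[c] * cell_counts[c] for c in cell_counts) / total
```

The multilevel regression shares strength across areas when estimating the cell-level predictions, which is what makes county-level estimates feasible from a national survey.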
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal-phase estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now has dealt almost exclusively with two extreme cases: almost perfect knowledge (the local approach based on Fisher information) and no a priori knowledge (the global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan
2017-08-01
The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. 
Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
Methods for Estimating Water Withdrawals for Aquaculture in the United States, 2005
Lovelace, John K.
2009-01-01
Aquaculture water use is associated with raising organisms that live in water - such as finfish and shellfish - for food, restoration, conservation, or sport. Aquaculture production occurs under controlled feeding, sanitation, and harvesting procedures primarily in ponds, flow-through raceways, and, to a lesser extent, cages, net pens, and tanks. Aquaculture ponds, raceways, and tanks usually require the withdrawal or diversion of water from a ground or surface source. Most water withdrawn or diverted for aquaculture production is used to maintain pond levels and/or water quality. Water typically is added for maintenance of levels, oxygenation, temperature control, and flushing of wastes. This report documents methods used to estimate withdrawals of fresh ground water and surface water for aquaculture in 2005 for each county and county-equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands by using aquaculture statistics and estimated water-use coefficients and water-replacement rates. County-level data for commercial and noncommercial operations compiled for the 2005 Census of Aquaculture were obtained from the National Agricultural Statistics Service. Withdrawals of water used at commercial and noncommercial operations for aquaculture ponds, raceways, tanks, egg incubators, and pens and cages for alligators were estimated and totaled by ground-water or surface-water source for each county and county equivalent. Use of the methods described in this report, when measured or reported data are unavailable, could result in more consistent water-withdrawal estimates for aquaculture that can be used by water managers and planners to determine water needs and trends across the United States. The results of this study were distributed to U.S. Geological Survey water-use personnel in each State during 2007. Water-use personnel are required to submit estimated withdrawals for all categories of use in their State to the U.S.
Geological Survey National Water-Use Information Program for inclusion in a national report describing water use in the United States during 2005. Water-use personnel had the option of submitting the estimates determined by using the methods described in this report, a modified version of these estimates, their own set of estimates, or reported data for the aquaculture category. Estimated withdrawals resulting from the method described in this report are not presented herein to avoid potential inconsistencies with estimated withdrawals for aquaculture that will be presented in the national report, as different methods used by water-use personnel may result in different withdrawal estimates. Estimated withdrawals also are not presented to avoid potential disclosure of confidential information for individual aquaculture operations.
Jaruzelska, J; Zietkiewicz, E; Batzer, M; Cole, D E; Moisan, J P; Scozzari, R; Tavaré, S; Labuda, D
1999-01-01
With 10 segregating sites (simple nucleotide polymorphisms) in the last intron (1089 bp) of the ZFX gene we have observed 11 haplotypes in 336 chromosomes representing a worldwide array of 15 human populations. Two haplotypes representing 77% of all chromosomes were distributed almost evenly among four continents. Five of the remaining haplotypes were detected in Africa and 4 others were restricted to Eurasia and the Americas. Using the information about the ancestral state of the segregating positions (inferred from human-great ape comparisons), we applied coalescent analysis to estimate the age of the polymorphisms and the resulting haplotypes. The oldest haplotype, with the ancestral alleles at all the sites, was observed at low frequency only in two groups of African origin. Its estimated age of 740 to 1100 kyr corresponded to the time to the most recent common ancestor. The two most frequent worldwide distributed haplotypes were estimated at 550 to 840 and 260 to 400 kyr, respectively, while the age of the continentally restricted polymorphisms was 120 to 180 kyr and smaller. Comparison of spatial and temporal distribution of the ZFX haplotypes suggests that modern humans diverged from the common ancestral stock in the Middle Paleolithic era. Subsequent range expansion prevented substantial gene flow among continents, separating African groups from populations that colonized Eurasia and the New World. PMID:10388827
Borzilov, V A
1993-11-01
Requirements for a data bank on natural media, organized as a system of intercorrelated parameters for estimating system states, are determined. The problems of functional agreement between experimental and calculational methods in organizing ecological monitoring are analysed. Methods of forming an environmental specimen bank to estimate and forecast radioactive contamination and exposure dose are considered, exemplified by the peculiarities of the spatial distribution of radioactive contamination in fields. The temporal dynamics of contamination of atmospheric air, soil, and water is also analysed.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32,000 sites. The resulting estimates for the critical exponents, including β, are reported.
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated via directly calculating the statistics of RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracies of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among the optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund by the China Scholarship Council (CSC), and the Overseas Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
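A minimal sketch of the Bayesian-MCMC idea described above, under assumptions of my own (flat prior, random-walk proposal scale, chain length, and synthetic lognormal data standing in for RCS samples; this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RCS-like samples from a lognormal with known parameters.
true_mu, true_sigma = 0.5, 0.8
log_data = np.log(rng.lognormal(true_mu, true_sigma, size=500))

def log_posterior(mu, sigma):
    """Log posterior for (mu, sigma) under a flat prior (sigma > 0)."""
    if sigma <= 0:
        return -np.inf
    return (-log_data.size * np.log(sigma)
            - np.sum((log_data - mu) ** 2) / (2.0 * sigma ** 2))

# Random-walk Metropolis over (mu, sigma).
theta = np.array([0.0, 1.0])          # arbitrary starting point
lp = log_posterior(*theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_posterior(*proposal)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = proposal, lp_prop
    chain.append(theta)

posterior = np.array(chain[5000:])    # discard burn-in
mu_hat, sigma_hat = posterior.mean(axis=0)
```

The posterior means recover the generating parameters; with real RCS data, the same machinery applies with the likelihood swapped for the chosen fluctuation model.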
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Statistical analysis for the summary of statistics is reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for survival data, the highest values of goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer.
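A comparable model-selection step can be sketched with SciPy in place of EasyFit; the candidate families and synthetic survival times below are illustrative assumptions, not the study's data (note that for the Kolmogorov-Smirnov statistic itself, smaller values indicate a better fit):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical survival times in months; real data would come from SEER.
times = rng.gamma(shape=2.0, scale=24.0, size=400)

candidates = {"gamma": stats.gamma,
              "lognorm": stats.lognorm,
              "weibull_min": stats.weibull_min}

ks_stat = {}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)            # MLE fit, location pinned at 0
    ks_stat[name] = stats.kstest(times, name, args=params).statistic

best = min(ks_stat, key=ks_stat.get)            # smallest KS distance wins
```

The same loop extends to the Burr and Dagum families named in the conclusions (`stats.burr12`, `stats.mielke`), at the cost of slower maximum-likelihood fits.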
Kocher, David C; Apostoaei, A Iulian; Hoffman, F Owen; Trabalka, John R
2018-06-01
This paper presents an analysis to develop a subjective state-of-knowledge probability distribution of a dose and dose-rate effectiveness factor for use in estimating risks of solid cancers from exposure to low linear energy transfer radiation (photons or electrons) whenever linear dose responses from acute and chronic exposure are assumed. A dose and dose-rate effectiveness factor represents an assumption that the risk of a solid cancer per Gy at low acute doses or low dose rates of low linear energy transfer radiation, RL, differs from the risk per Gy at higher acute doses, RH; RL is estimated as RH divided by a dose and dose-rate effectiveness factor, where RH is estimated from analyses of dose responses in Japanese atomic-bomb survivors. A probability distribution to represent uncertainty in a dose and dose-rate effectiveness factor for solid cancers was developed from analyses of epidemiologic data on risks of incidence or mortality from all solid cancers as a group or all cancers excluding leukemias, including (1) analyses of possible nonlinearities in dose responses in atomic-bomb survivors, which give estimates of a low-dose effectiveness factor, and (2) comparisons of risks in radiation workers or members of the public from chronic exposure to low linear energy transfer radiation at low dose rates with risks in atomic-bomb survivors, which give estimates of a dose-rate effectiveness factor. Probability distributions of uncertain low-dose effectiveness factors and dose-rate effectiveness factors for solid cancer incidence and mortality were combined using assumptions about the relative weight that should be assigned to each estimate to represent its relevance to estimation of a dose and dose-rate effectiveness factor. The probability distribution of a dose and dose-rate effectiveness factor for solid cancers developed in this study has a median (50th percentile) and 90% subjective confidence interval of 1.3 (0.47, 3.6). 
The harmonic mean is 1.1, which implies that the arithmetic mean of an uncertain estimate of the risk of a solid cancer per Gy at low acute doses or low dose rates of low linear energy transfer radiation is only about 10% less than the mean risk per Gy at higher acute doses. Data were also evaluated to define a low acute dose or low dose rate of low linear energy transfer radiation, i.e., a dose or dose rate below which a dose and dose-rate effectiveness factor should be applied in estimating risks of solid cancers.
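The combination of component distributions into a single subjective distribution can be sketched as a weighted Monte Carlo mixture; the lognormal components and weights below are placeholders for illustration, not the paper's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical lognormal representations of the two component estimates
# (medians/spreads and relative weights are assumptions for illustration).
components = [
    {"mu": 0.1, "sigma": 0.5, "weight": 0.4},  # low-dose effectiveness factor
    {"mu": 0.4, "sigma": 0.6, "weight": 0.6},  # dose-rate effectiveness factor
]

draws = np.concatenate([
    rng.lognormal(c["mu"], c["sigma"], int(c["weight"] * n)) for c in components
])

median = np.median(draws)                   # 50th percentile of the mixture
ci90 = np.percentile(draws, [5, 95])        # 90% subjective confidence interval
harmonic_mean = 1.0 / np.mean(1.0 / draws)  # relevant when RH is divided by DDREF
```

The harmonic mean is the natural summary here because, as the paper notes, risk at low doses is obtained by dividing RH by the uncertain factor.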
Estimating the breeding population of long-billed curlew in the United States
Stanley, T.R.; Skagen, S.K.
2007-01-01
Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.
Spatial and Temporal Influences on Carbon Storage in Hydric Soils of the Conterminous United States
NASA Astrophysics Data System (ADS)
Sundquist, E. T.; Ackerman, K.; Bliss, N.; Griffin, R.; Waltman, S.; Windham-Myers, L.
2016-12-01
Defined features of hydric soils persist over extensive areas of the conterminous United States (CUS) long after their hydric formation conditions have been altered by historical changes in land and water management. These legacy hydric features may represent previous wetland environments in which soil carbon storage was significantly higher before the influence of human activities. We hypothesize that historical alterations of hydric soil carbon storage can be approximated using carefully selected estimates of carbon storage in currently identified hydric soils. Using the Soil Survey Geographic (SSURGO) database, we evaluate carbon storage in identified hydric soil components that are subject to discrete ranges of current or recent conditions of flooding, ponding, and other indicators of hydric and non-hydric soil associations. We check our evaluations and, where necessary, adjust them using independently published soil data. We compare estimates of soil carbon storage under various hydric and non-hydric conditions within proximal landscapes and similar biophysical settings and ecosystems. By combining these setting- and ecosystem-constrained comparisons with the spatial distribution and attributes of wetlands in the National Wetlands Inventory, we impute carbon storage estimates for soils that occur in current wetlands and for hydric soils that are not associated with current wetlands. Using historical data on land use and water control structures, we map the spatial and temporal distribution of past changes in land and water management that have affected hydric soils. We combine these maps with our imputed carbon storage estimates to calculate ranges of values for historical and present-day carbon storage in hydric soils throughout the CUS. These estimates may provide useful constraints for projections of potential carbon storage in hydric soils under future conditions.
North Carolina, 2011 forest inventory and analysis factsheet
Mark J. Brown; Barry D. New
2013-01-01
Forest Inventory and Analysis (FIA) factsheets are produced periodically to keep the public updated on the extent and condition of forest lands in each State. Estimates in the factsheets are based upon data collected from thousands of sample plots distributed across the landscape in a systematic manner. In North Carolina, this process is a collaborative effort between...
75 FR 33891 - Proposed Collection; Comment Request for REG 209446-82 (TD 8852)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... to take into account their pro rata share of separately stated items of the S corporation and non... pro rata share, whether or not distributed, of the S corporation's items of income, loss, deduction... forms of information technology; and (e) estimates of capital or start-up costs and costs of operation...
78 FR 48771 - Proposed Collection; Comment Request for REG 209446-82 (TD 8852)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-09
... corporation to take into account their pro rata share of separately stated items of the S corporation and non... pro rata share, whether or not distributed, of the S corporation's items of income, loss, deduction... forms of information technology; and (e) estimates of capital or start-up costs and costs of operation...
75 FR 38868 - Proposed Collection; Comment Request for REG 209446-82 (TD 8852)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... to take into account their pro rata share of separately stated items of the S corporation and non... pro rata share, whether or not distributed, of the S corporation's items of income, loss, deduction... forms of information technology; and (e) estimates of capital or start-up costs and costs of operation...
An outdoor recreation use model with applications to evaluating survey estimators
Stanley J. Zarnoch; Donald B.K. English; Susan M. Kocis
2005-01-01
An outdoor recreation use simulator (ORUS) has been developed to simulate recreation survey data currently being obtained by the U.S. Department of Agriculture Forest Service, National Visitor Use Monitoring (NVUM) program's survey of the national forests of the United States. Statistical distributions represent the various behaviors of recreationists during their...
Sampling and estimation procedures for the vegetation diversity and structure indicator
Bethany K. Schulz; William A. Bechtold; Stanley J. Zarnoch
2009-01-01
The Vegetation Diversity and Structure Indicator (VEG) is an extensive inventory of vascular plants in the forests of the United States. The VEG indicator provides baseline data to assess trends in forest vascular plant species richness and composition, and the relative abundance and spatial distribution of those species, including invasive and introduced species. The...
Statistical population-based estimates of water ingestion play a vital role in many types of exposure and risk analysis. A significant large-scale analysis of water ingestion by the population of the United States was recently completed and is documented in the report titled ...
Florida, 2011-forest inventory and analysis factsheet
Mark J. Brown; Jarek Nowak
2013-01-01
Forest Inventory and Analysis (FIA) factsheets are produced periodically to keep the public up to date on the extent and condition of the forest lands in each State. The forest-related estimates in the factsheets are based upon data collected from thousands of sample plots distributed across the landscape in a systematic manner. The total number of these plots is...
NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States
NASA Astrophysics Data System (ADS)
Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.
2013-12-01
NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now serves more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how they were developed or of the uncertainties associated with them. This presentation describes novel tools and techniques developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention is given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, to selected statistics obtained in determining and parameterizing the probability distribution functions, and to the potential implications for engineering design of recently published estimates.
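The L-moment statistics at the core of the regional frequency analysis can be sketched as follows; the probability-weighted-moment estimator is standard, while the Gumbel annual-maximum sample is a hypothetical stand-in for real station data:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments.

    lambda1 is the mean; lambda2 is a robust scale measure used to
    parameterize candidate distributions in regional frequency analysis.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n   # estimator of E[X * F(X)]
    return b0, 2.0 * b1 - b0

# Hypothetical annual maximum series (mm), Gumbel-distributed.
rng = np.random.default_rng(1)
amax = rng.gumbel(loc=50.0, scale=10.0, size=2000)
l1, l2 = sample_l_moments(amax)
# For a Gumbel(loc, scale): lambda1 = loc + 0.5772*scale, lambda2 = scale*ln 2.
```

Because L-moments are linear in the ordered data, they are far less sensitive to outliers than conventional moments, which is why they are favored for annual maximum series.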
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation.
Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
NASA Astrophysics Data System (ADS)
Rios, J. Fernando; Ye, Ming; Wang, Liying; Lee, Paul Z.; Davis, Hal; Hicks, Rick
2013-03-01
Onsite wastewater treatment systems (OWTS), or septic systems, can be a significant source of nitrates in groundwater and surface water. The adverse effects that nitrates have on human and environmental health have given rise to the need to estimate the actual or potential level of nitrate contamination. With the goal of reducing data collection and preparation costs, and decreasing the time required to produce an estimate compared to complex nitrate modeling tools, we developed the ArcGIS-based Nitrate Load Estimation Toolkit (ArcNLET) software. Leveraging the power of geographic information systems (GIS), ArcNLET is easy-to-use software capable of simulating nitrate transport in groundwater and estimating long-term nitrate loads from groundwater to surface water bodies. Data requirements are reduced by using simplified models of groundwater flow and nitrate transport which consider nitrate attenuation mechanisms (subsurface dispersion and denitrification) as well as spatial variability in the hydraulic parameters and septic tank distribution. ArcNLET provides a spatial distribution of nitrate plumes from multiple septic systems and a load estimate to water bodies. ArcNLET's conceptual model is divided into three sub-models: a groundwater flow model, a nitrate transport and fate model, and a load estimation model, which are implemented as an extension to ArcGIS. The groundwater flow model uses a topographic map to generate a steady-state approximation of the water table. In a validation study, this approximation was found to correlate well with a water table produced by a calibrated numerical model, although the degree to which the water table resembles the topography can vary greatly across the modeling domain. The transport model uses a semi-analytical solution to estimate the distribution of nitrate within groundwater, which is then used to estimate a nitrate load using a mass balance argument.
The estimates given by ArcNLET are suitable for a screening-level analysis.
Impact of data assimilation on Eulerian versus Lagrangian estimates of upper ocean transport
NASA Astrophysics Data System (ADS)
Sperrevik, Ann Kristin; Röhrs, Johannes; Christensen, Kai Håkon
2017-07-01
Using four-dimensional variational analysis, we produce an estimate of the state of a coastal region in Northern Norway during the late winter and spring in 1984. We use satellite sea surface temperature and in situ observations from a series of intensive field campaigns, and obtain a more realistic distribution of water masses both in the horizontal and the vertical than a pure downscaling approach can achieve. Although the distributions of Eulerian surface current speeds are similar, we find that they are more variable and less dependent on model bathymetry in our reanalysis compared to a hindcast produced using the same modeling system. Lagrangian drift currents, on the other hand, are significantly changed, with overall higher kinetic energy levels in the reanalysis than in the hindcast, particularly in the superinertial frequency band.
Boltzmann sampling for an XY model using a non-degenerate optical parametric oscillator network
NASA Astrophysics Data System (ADS)
Takeda, Y.; Tamate, S.; Yamamoto, Y.; Takesue, H.; Inagaki, T.; Utsunomiya, S.
2018-01-01
We present an experimental scheme for implementing multiple spins of a classical XY model using a non-degenerate optical parametric oscillator (NOPO) network. We built an NOPO network to simulate a one-dimensional XY Hamiltonian with 5000 spins and externally controllable effective temperatures. The XY spin variables in our scheme are mapped onto the phases of multiple NOPO pulses in a single ring cavity, and interactions between XY spins are implemented by mutual injections between NOPOs. We show that the steady-state distribution of optical phases of such NOPO pulses is equivalent to the Boltzmann distribution of the corresponding XY model. Estimated effective temperatures converged to the set values, and the estimated temperatures and the mean energy exhibited good agreement with numerical simulations of the Langevin dynamics of the NOPO phases.
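The Boltzmann distribution of a one-dimensional XY chain, which the NOPO network is shown to reproduce, can be checked numerically for a small classical system. The sketch below uses plain Metropolis sampling (not the paper's NOPO Langevin dynamics) on an open chain, for which the exact bond correlation is the Bessel ratio I1(βJ)/I0(βJ); all parameter values are illustrative:

```python
import numpy as np
from scipy.special import iv

rng = np.random.default_rng(1)
N, J, T = 1000, 1.0, 0.5                   # spins, coupling, effective temperature
beta = 1.0 / T
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases

def local_e(vals, idx):
    """Energy of the sites in idx (at angles vals) against their fixed neighbors."""
    e = np.zeros(idx.size)
    has_left, has_right = idx > 0, idx < N - 1
    e[has_left] -= J * np.cos(vals[has_left] - theta[idx[has_left] - 1])
    e[has_right] -= J * np.cos(vals[has_right] - theta[idx[has_right] + 1])
    return e

# checkerboard Metropolis: even and odd sites do not interact among themselves
sublattices = (np.arange(0, N, 2), np.arange(1, N, 2))
samples = []
for sweep in range(4000):
    for idx in sublattices:
        old = theta[idx]
        new = old + rng.uniform(-1.0, 1.0, idx.size)
        dE = local_e(new, idx) - local_e(old, idx)
        accept = rng.random(idx.size) < np.exp(-beta * dE)
        theta[idx] = np.where(accept, new, old)
    if sweep >= 1000:                      # discard burn-in sweeps
        samples.append(np.cos(np.diff(theta)).mean())

estimated = np.mean(samples)
exact = iv(1, beta * J) / iv(0, beta * J)  # exact open-chain bond correlation
print(estimated, exact)                    # the two values should agree closely
```

The agreement follows because, for the open chain, the angle differences between neighbors are independent with density proportional to exp(βJ cos φ).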
New service interface for River Forecasting Center derived quantitative precipitation estimates
Blodgett, David L.
2013-01-01
For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult for scientists outside the climatic sciences to work with. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-01-01
Data are summarized on retail sales compiled from sales tax records. Contained in this report are retail sales estimates for the 95 counties in the State of Tennessee and 294 cities, towns or parts of towns, which are shown in various degrees of detail depending on disclosure restrictions. Number of firms is determined by the total number of reports submitted. Sales and percent distribution of sales are shown for the State of Tennessee and counties by Standard Metropolitan Statistical Area (SMSA) designation and by various county groupings based on the size of the largest city. A list of counties by SMSA designation and by size class of largest city is given in the Appendix. The number of firms and estimated retail sales are also shown for 10 business groups.
The USNA MIDN Microdosimeter Instrument
NASA Technical Reports Server (NTRS)
Pisacane, V. L.; Ziegler, J. F.; Nelson, M. E.; Dolecek, Q.; Heyne, J.; Veade, T.; Rosenfeld, A. B.; Cucinotta, F. A.; Zaider, M.; Dicello, J. F.
2006-01-01
This paper describes the MIcroDosimetry iNstrument (MIDN) mission now under development at the United States Naval Academy. The instrument is manifested to fly on the MidSTAR-1 spacecraft, which is the second spacecraft to be developed and launched by the Academy's faculty and midshipmen. Launch is scheduled for 1 September 2006 on an ATLAS-5 launch vehicle. MIDN is a rugged, portable, low power, low mass, solid-state microdosimeter designed to measure in real time the distributions of energy deposited by radiation in microscopic volumes. The MIDN microdosimeter sensor is a reverse-biased silicon p-n junction array in a Silicon-On-Insulator (SOI) configuration. Microdosimetric frequency distributions as a function of lineal energy determine the radiation quality factors in support of radiation risk estimation to humans.
Daily tornado frequency distributions in the United States
NASA Astrophysics Data System (ADS)
Elsner, J. B.; Jagger, T. H.; Widen, H. M.; Chavas, D. R.
2014-01-01
The authors examine daily tornado counts in the United States over the period 1994-2012 and find strong evidence for a power-law relationship in the frequency distribution. The scaling exponent is estimated at 1.64 (0.019 s.e.), giving a per-tornado-day probability of 0.014% (return period of 71 years) that a tornado day produces 145 tornadoes, as was observed on 27 April 2011. They also find that the total number of tornadoes by damage category on days with at least one violent tornado follows an exponential rule. On average, the daily number of tornadoes in the next lowest damage category is approximately twice the number in the current category. These findings are important and timely for tornado hazard models and for seasonal and sub-seasonal forecasts of tornado activity.
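Scaling exponents of the kind reported above are commonly fit by maximum likelihood. A minimal sketch of the continuous Clauset-style estimator on synthetic power-law data (the paper analyzes discrete daily counts, so this is an illustration of the estimator, not a reproduction of the paper's method):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_true, x_min, n = 1.64, 1.0, 100_000

# inverse-CDF sampling from p(x) ~ x^(-alpha) for x >= x_min
u = rng.random(n)
x = x_min * u ** (-1.0 / (alpha_true - 1.0))

# continuous maximum-likelihood estimator of the scaling exponent
alpha_hat = 1.0 + n / np.log(x / x_min).sum()
se = (alpha_hat - 1.0) / np.sqrt(n)   # approximate standard error
print(alpha_hat, se)                  # alpha_hat recovers alpha_true closely
```

The estimator is preferred over least-squares fits of log-log histograms, whose exponent estimates are known to be biased.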
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) implementations are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., when considering finite-size-key effects. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated using real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods.
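The multivariate Student's t-distribution used above for heavy-tailed noise can be sampled through the standard normal/chi-square mixture representation. A minimal sketch assuming NumPy (the function and parameter values are illustrative, not from the paper):

```python
import numpy as np

def sample_mvt(mean, cov, df, n, rng):
    """Multivariate Student's t via the normal / chi-square mixture:
    x = mean + z * sqrt(df / w), with z ~ N(0, cov) and w ~ chi2(df)."""
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mean))) @ L.T
    w = rng.chisquare(df, size=(n, 1))
    return mean + z * np.sqrt(df / w)

rng = np.random.default_rng(3)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
x = sample_mvt(np.zeros(2), cov, df=5.0, n=200_000, rng=rng)
# the covariance of a Student's t is cov * df / (df - 2), here cov * 5/3
print(np.cov(x.T))
```

The heavier tails relative to a Gaussian with the same scale matrix are what give t-based noise models their robustness to outlying measurements.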
On the probability distribution of daily streamflow in the United States
Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.
2017-01-01
Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for period-of-record FDCs, which represent the long-term hydrologic regime at a site, and for median annual FDCs, which represent the behavior of flows in a typical year.
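An empirical FDC of the kind being modeled here is straightforward to compute: sort the flows in descending order and assign Weibull plotting positions i/(n+1) as exceedance probabilities. A sketch with hypothetical daily flows (the function name and data are illustrative):

```python
import numpy as np

def flow_duration_curve(flows):
    """Empirical FDC: exceedance probability for each observed flow.

    Uses Weibull plotting positions i/(n+1) on flows sorted high to low.
    """
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1)
    return p, q

daily_q = [12.0, 3.5, 0.8, 45.0, 7.2, 1.1, 0.3]   # hypothetical flows
p, q = flow_duration_curve(daily_q)
print(q[0], p[0])   # the largest flow has the lowest exceedance probability
```

Fitting a parametric family such as the kappa distribution then amounts to matching this empirical curve with the distribution's quantile function.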
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital, and neglecting the randomness of its distribution may underestimate risk. This study introduces two distributional models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a serious defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's newer data. To overcome this flaw, kernel density estimation is introduced, and a comparison of simulation results from histograms, Beta distribution estimation, and kernel density estimation shows that a Gaussian kernel density estimate better captures the bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test confirms that the Gaussian kernel density estimate fits the observed recovery rates of loans and bonds, making the kernel density approach the better choice for delineating bimodal recovery rates in credit risk management. PMID:23874558
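The Beta-versus-kernel-density contrast described above is easy to see on synthetic bimodal data: a Gaussian KDE recovers both modes, while a single fitted Beta density cannot have two interior modes. A minimal sketch assuming SciPy (the mixture parameters are illustrative, not Moody's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# hypothetical bimodal recovery rates: one mode near 0.12, another near 0.80
rates = np.concatenate([rng.beta(2, 15, 600), rng.beta(12, 3, 400)])

# Gaussian kernel density estimate adapts to both modes
kde = stats.gaussian_kde(rates)
grid = np.linspace(0.0, 1.0, 201)
density = kde(grid)

# a single fitted Beta density is unimodal (or U-shaped) and therefore
# cannot reproduce two interior modes
a, b, loc, scale = stats.beta.fit(rates, floc=0, fscale=1)
print(grid[np.argmax(density)], (a, b))
```

Evaluating the KDE near each mode and at the valley between them shows the bimodal shape directly, which is the behavior the Beta fit necessarily smooths away.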
Bayesian structural inference for hidden processes.
Strelioff, Christopher C; Crutchfield, James P
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
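The Shannon entropy rate that BSI estimates has a closed form once a chain's transition matrix is known: h = -Σᵢ πᵢ Σⱼ Tᵢⱼ log₂ Tᵢⱼ, with π the stationary distribution. A sketch for a two-state chain (the golden-mean-like example is illustrative, not drawn from the paper):

```python
import numpy as np

def entropy_rate(T):
    """Shannon entropy rate (bits per symbol) of a stationary Markov chain."""
    # stationary distribution: left eigenvector of T for eigenvalue 1
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    logT = np.zeros_like(T)
    mask = T > 0
    logT[mask] = np.log2(T[mask])
    return -np.sum(pi[:, None] * T * logT)

# two-state chain: state 0 stays or leaves with probability 1/2 each;
# state 1 always returns to state 0
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])
h = entropy_rate(T)
print(h)   # 2/3 bit per symbol for this chain
```

The inference problem the paper addresses is the converse: recovering a model of this kind, and hence quantities like h, from an observed symbol series rather than a known matrix.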
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding, a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Tristram O.; Singh, Nagendra; Marland, Gregg
Carbon dioxide is taken up by agricultural crops and released soon after during the consumption of agricultural commodities. The global net impact of this process on carbon flux to the atmosphere is negligible, but the impact on the spatial distribution of carbon dioxide uptake and release across regions and continents is significant. To estimate the consumption and release of carbon by humans over the landscape, we developed a carbon budget for humans in the United States. The budget was derived from food commodity intake data for the US and from algorithms representing the metabolic processing of carbon by humans. Data on consumption, respiration, and waste of carbon by humans were distributed over the US using geospatial population data with a resolution of approximately 450 x 450 m. The average adult in the US contains about 21 kg C and consumes about 67 kg C yr-1, which is balanced by the annual release of about 59 kg C as expired CO2, 7 kg C as feces and urine, and less than 1 kg C as flatus, sweat, and aromatic compounds. In 2000, an estimated 17.2 Tg C were consumed by the US population and 15.2 Tg C were expired to the atmosphere as CO2. Historically, the carbon stock in the US human population increased from 0.06 Tg in 1790 to 5.37 Tg in 2006. Displacement and release of total harvested carbon per capita in the US is nearly 12% of per capita fossil fuel emissions. Humans are using, storing, and transporting carbon about the Earth's surface. Inclusion of these carbon dynamics in regional carbon budgets can improve our understanding of carbon sources and sinks.
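The per-capita figures quoted above can be checked with simple arithmetic. A sketch (the 2000 US population figure is an outside assumption, not stated in the abstract):

```python
# Figures quoted in the abstract, kg C per average US adult per year
intake = 67.0
released = {"expired CO2": 59.0,
            "feces and urine": 7.0,
            "other (flatus, sweat, aromatics)": 1.0}  # "less than 1 kg" taken as ~1
balance = intake - sum(released.values())
print(balance)   # ~0: annual intake balances annual release

# cross-check the national total: 17.2 Tg C consumed in 2000, spread over the
# US population (~281 million per the 2000 census, an outside figure), gives a
# per-person value consistent with the adult intake above (the whole
# population includes children, so it sits somewhat below 67 kg)
per_person_kg = 17.2e12 / 281e6 / 1000.0   # Tg -> g, then g -> kg per person
print(per_person_kg)
```

Such closure checks are a quick way to confirm that the individual-level metabolism figures and the national totals in a budget of this kind are mutually consistent.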