Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
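The role that representation error plays in the analysis step can be made concrete with a minimal sketch of a perturbed-observation ensemble Kalman filter, where the unresolved-scale covariance R_repr (assumed supplied, e.g. by a climatology or by superparameterization) is simply added to the instrument error covariance. All names are illustrative; this is a sketch, not the authors' code.

import numpy as np

def enkf_analysis(X, y, H, R_instr, R_repr, rng=np.random.default_rng(0)):
    """Perturbed-observation EnKF analysis step.

    X       : (n, m) ensemble of m state vectors
    y       : (p,) observation vector
    H       : (p, n) observation operator
    R_instr : (p, p) instrument error covariance
    R_repr  : (p, p) representation error covariance (unresolved scales)
    """
    n, m = X.shape
    R = R_instr + R_repr                          # total observation error
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                    # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (m - 1)               # P^f H^T
    S = (H @ A) @ (H @ A).T / (m - 1) + R         # innovation covariance
    K = Pf_Ht @ np.linalg.solve(S, np.eye(y.size))  # Kalman gain
    # perturbed observations, drawn with the *total* error covariance
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=m).T
    return X + K @ (Y - H @ X)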
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
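The abstract's hierarchical model is not reproduced here, but the underlying idea, regularizing a high-dimensional sample covariance to curb overfitting, can be illustrated with a simple shrinkage estimator toward a diagonal target (a sketch, not the authors' method):

import numpy as np

def shrinkage_covariance(X, alpha):
    """Shrink the sample covariance toward a diagonal target.

    X     : (n_samples, p) data matrix
    alpha : shrinkage weight in [0, 1]; larger = more regularization
    """
    S = np.cov(X, rowvar=False)          # sample covariance (p, p)
    target = np.diag(np.diag(S))         # structured, low-variance target
    return (1.0 - alpha) * S + alpha * target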
2015-09-30
Large Scale Density Estimation of Blue and Fin Whales ... Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre ... to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without requiring a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
172. McDonald, MA, Hildebrand, JA, and Mesnick, S (2009). Worldwide decline in tonal frequencies of blue whale songs. Endangered Species Research 9 ... Large Scale Density Estimation of Blue and Fin Whales ... estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density utilizing
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2015-09-30
Large Scale Density Estimation of Blue and Fin Whales ... sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over ... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse
CHARACTERIZATION OF SMALL ESTUARIES AS A COMPONENT OF A REGIONAL-SCALE MONITORING PROGRAM
Large-scale environmental monitoring programs, such as EPA's Environmental Monitoring and Assessment Program (EMAP), by nature focus on estimating the ecological condition of large geographic areas. Generally missing is the ability to provide estimates of condition of individual ...
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large-scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for nonporous targets. Scaling estimates for large-scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence on impact velocity is recognized, even for constant impactor energy. Second, the present energy exponent for low-porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
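A toy version of the central operation, redrawing the Gaussian large-scale Fourier modes of a periodic field while leaving small scales untouched, might look as follows. The paper's full method also propagates the nonlinear coupling to small scales, which this sketch omits, and a careful implementation must enforce Hermitian symmetry so the field stays exactly real.

import numpy as np

def resample_large_modes(delta, k_cut, power, rng=np.random.default_rng(1)):
    """Replace Fourier modes with 0 < |k| < k_cut by new Gaussian draws.

    delta : (N, N, N) real density contrast field in a periodic box
    k_cut : cutoff in units of the fundamental frequency
    power : callable P(k) giving the target power at wavenumber |k|
    """
    N = delta.shape[0]
    dk = np.fft.fftn(delta)
    f = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
    kx, ky, kz = np.meshgrid(f, f, f, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    mask = (kmag > 0) & (kmag < k_cut)
    # fresh Gaussian modes with the target power spectrum
    amp = np.sqrt(power(kmag[mask]) / 2.0)
    dk[mask] = amp * (rng.standard_normal(mask.sum())
                      + 1j * rng.standard_normal(mask.sum()))
    # np.real discards the small imaginary residue left by ignoring
    # Hermitian symmetry in this sketch
    return np.real(np.fft.ifftn(dk))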
USDA-ARS's Scientific Manuscript database
Large-scale crop monitoring and yield estimation are important for both scientific research and practical applications. Satellite remote sensing provides an effective means for regional and global cropland monitoring, particularly in data-sparse regions that lack reliable ground observations and rep...
USDA-ARS's Scientific Manuscript database
NASA’s SMAP satellite, launched in November of 2014, produces estimates of average volumetric soil moisture at 3, 9, and 36-kilometer scales. The calibration and validation process of these estimates requires the generation of an identically-scaled soil moisture product from existing in-situ networ...
NASA Astrophysics Data System (ADS)
Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.
2011-12-01
Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, ..., 2100 UTC) for rainfall estimation, in preparation for INSAT-3D. Once the temperatures of all the pixels in a grid box are known, they are binned into a three-hourly 24-class histogram of brightness temperatures from the IR (10.5-12.5 μm) images for each 1.0° × 1.0° latitude/longitude box. The daily, monthly, and seasonal rainfall totals have been estimated from these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the monthly and seasonal estimates have been validated against Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, both qualitatively and quantitatively. The results also suggest that this simple IR-based technique can be used to estimate rainfall over tropical areas at larger temporal scales for climatological applications.
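The GOES precipitation index itself is simple enough to state in a few lines. This sketch uses the standard GPI constants (a 3 mm/h conditional rain rate assigned to cloud colder than 235 K), which the abstract does not spell out, so treat them as assumptions here:

import numpy as np

def gpi_rainfall(tb_pixels, hours=3.0, tb_thresh=235.0, rate_mm_per_h=3.0):
    """GOES Precipitation Index for one 1.0 x 1.0 degree box.

    tb_pixels : array of IR brightness temperatures (K) in the box
    Returns accumulated rainfall (mm) over `hours`.
    """
    cold_fraction = np.mean(np.asarray(tb_pixels) < tb_thresh)
    return rate_mm_per_h * cold_fraction * hours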
Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations
NASA Technical Reports Server (NTRS)
Kaiser, N.; Davis, M.
1985-01-01
It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are at present the main tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on the hydrological characteristics of the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well; those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; it therefore does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods, with uncertainty estimation.
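Schematically, the scale-extrapolation amounts to computing the area-weighted specific discharge of the selected sub-basins and scaling it up by the large basin's area; the resemblance criterion used to select the sub-basins is left abstract in this sketch:

import numpy as np

def extrapolate_discharge(subbasin_q, subbasin_area, basin_area):
    """Scale-extrapolation from selected sub-basins to a large basin.

    subbasin_q    : annual discharge of each selected sub-basin (m^3/yr)
    subbasin_area : area of each selected sub-basin (m^2)
    basin_area    : area of the large basin (m^2)
    """
    specific_q = np.sum(subbasin_q) / np.sum(subbasin_area)  # m/yr
    return specific_q * basin_area                           # m^3/yr

# Repeating this over several alternative sub-basin sets gives a spread
# of estimates whose range brackets the uncertainty, as in the abstract.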
Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping
NASA Astrophysics Data System (ADS)
Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.
2017-12-01
Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would extend to regional and national levels and cover long time periods. Because of the large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain in developing operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (about 16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive, combined with other satellite, climate, and weather data, is creating unprecedented opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.
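The SSEBop model scales a reference ET by a fraction derived from land surface temperature between a cold (fully transpiring) and a hot (dry, bare) boundary. A per-pixel sketch following the published form of the model, with all inputs assumed available as arrays and the cap value treated as an assumption:

import numpy as np

def ssebop_et(ts, tc, dt, et_ref, k=1.0):
    """Operational Simplified Surface Energy Balance (SSEBop) sketch.

    ts     : land surface temperature (K), per pixel
    tc     : cold boundary temperature (K), per pixel
    dt     : predefined hot/cold temperature difference (K)
    et_ref : reference evapotranspiration (mm)
    k      : scaling coefficient relating reference to maximum ET
    """
    # hot boundary is tc + dt; ET fraction falls linearly between them
    etf = np.clip((tc + dt - ts) / dt, 0.0, 1.05)
    return k * etf * et_ref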
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data for the Indian Ocean over a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second removes an along-track once-per-revolution sine wave (wavelength about 40,000 km). Results show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
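Both correction techniques amount to fitting and removing a smooth along-track function from the sea-surface heights by least squares. A sketch with along-track distance s in km and illustrative function names:

import numpy as np

def remove_polynomial(s, h, deg=1):
    """Fit and remove an along-track polynomial (degree 1 over ~5000 km)."""
    s, h = np.asarray(s, float), np.asarray(h, float)
    coeffs = np.polyfit(s, h, deg)
    return h - np.polyval(coeffs, s)

def remove_once_per_rev(s, h, wavelength_km=40000.0):
    """Fit and remove a once-per-revolution sine wave (~40,000 km)."""
    s, h = np.asarray(s, float), np.asarray(h, float)
    w = 2.0 * np.pi / wavelength_km
    A = np.column_stack([np.sin(w * s), np.cos(w * s), np.ones_like(s)])
    x, *_ = np.linalg.lstsq(A, h, rcond=None)
    return h - A @ x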
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrana, Alexandra; Johnson, Matthew C.; Harris, Mary-Jean, E-mail: aterrana@perimeterinstitute.ca, E-mail: mharris8@perimeterinstitute.ca, E-mail: mjohnson@perimeterinstitute.ca
Due to cosmic variance we cannot learn any more about large-scale inhomogeneities from the primary cosmic microwave background (CMB) alone. More information on large scales is essential for resolving large angular scale anomalies in the CMB. Here we consider cross correlating the large-scale kinetic Sunyaev Zel'dovich (kSZ) effect and probes of large-scale structure, a technique known as kSZ tomography. The statistically anisotropic component of the cross correlation encodes the CMB dipole as seen by free electrons throughout the observable Universe, providing information about long wavelength inhomogeneities. We compute the large angular scale power asymmetry, constructing the appropriate transfer functions, and estimate the cosmic variance limited signal to noise for a variety of redshift bin configurations. The signal to noise is significant over a large range of power multipoles and numbers of bins. We present a simple mode counting argument indicating that kSZ tomography can be used to estimate more modes than the primary CMB on comparable scales. A basic forecast indicates that a first detection could be made with next-generation CMB experiments and galaxy surveys. This paper motivates a more systematic investigation of how close to the cosmic variance limit it will be possible to get with future observations.
ERIC Educational Resources Information Center
Shin, Hye Sook
2009-01-01
Using data from a nationwide, large-scale experimental study of the effects of a connected classroom technology on student learning in algebra (Owens et al., 2004), this dissertation focuses on challenges that can arise in estimating treatment effects in educational field experiments when samples are highly heterogeneous in terms of various…
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
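The core trick, replacing the matrix inversion in the linear MMSE estimator with a low-degree matrix polynomial so that only matrix-vector products are needed, can be sketched with coefficients taken from a truncated Neumann series; the paper instead optimizes the coefficients for minimum MSE, so this is an illustrative variant, not the PEACH estimator itself:

import numpy as np

def polynomial_channel_estimate(y, R_h, R_n, L=4):
    """Approximate LMMSE estimate R_h (R_h + R_n)^{-1} y without inversion.

    y   : received pilot signal (p,)
    R_h : channel covariance (p, p)
    R_n : noise/interference covariance (p, p)
    L   : polynomial degree
    """
    M = R_h + R_n
    # alpha < 2 / lambda_max guarantees convergence; trace(M) >= lambda_max
    alpha = 2.0 / np.trace(M)
    acc = alpha * y                      # l = 0 term of alpha*sum (I-aM)^l y
    term = y.copy()
    for _ in range(L):
        term = term - alpha * (M @ term)   # next power (I - alpha*M)^l y
        acc = acc + alpha * term           # only matrix-vector products
    return R_h @ acc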
ERIC Educational Resources Information Center
Johnson, Matthew S.; Jenkins, Frank
2005-01-01
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
SiGN: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for estimating biomass concentration in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard
2018-01-01
Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species, including large carnivores, such estimates are often difficult to obtain. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species' entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world's jaguar population at 173,000 (95% CI: 138,000-208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km²) and low human densities (< 1 person/km²) coinciding with high primary productivity in the core area of the jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
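At its simplest, combining the density and occurrence models in a hierarchical estimate reduces to summing density times occupancy probability times cell area over the gridded range. A schematic reduction of that idea, not the authors' full model:

import numpy as np

def total_population(density, p_occurrence, cell_area_km2):
    """Expected total abundance over a gridded species range.

    density      : predicted individuals per km^2 in each grid cell
    p_occurrence : probability the species occupies each cell
    cell_area_km2: area of each cell (scalar or per-cell array)
    """
    return float(np.sum(density * p_occurrence * cell_area_km2))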
Decentralization, stabilization, and estimation of large-scale linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1976-01-01
In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
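The flavor of such algorithms can be sketched as a block iteration on the global normal equations in which each sub-system updates its own parameter block using only neighbor information. This is a plain Richardson-style iteration under assumed data structures; the paper's convergence-rate optimization via scaling and preconditioning is omitted:

import numpy as np

def distributed_wls(H_blocks, b_blocks, neighbors, gamma, iters=200):
    """Block iteration for the partitioned normal equations H x = b.

    H_blocks  : dict of dicts; H_blocks[k][j] is node k's block of A^T W A
    b_blocks  : dict; b_blocks[k] is node k's block of A^T W y
    neighbors : dict; neighbors[k] lists j with H_blocks[k][j] nonzero
                (must include k itself)
    gamma     : step size; converges for SPD H when small enough
    """
    x = {k: np.zeros_like(b) for k, b in b_blocks.items()}
    for _ in range(iters):
        # each node uses only its own data and its neighbors' estimates
        r = {k: b_blocks[k]
                - sum(H_blocks[k][j] @ x[j] for j in neighbors[k])
             for k in x}
        x = {k: x[k] + gamma * r[k] for k in x}
    return x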
NASA Astrophysics Data System (ADS)
Flint, A. L.; Flint, L. E.
2010-12-01
The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimates of changes due to future climate projections. Unfortunately, many regions in the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data), and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available only at very coarse spatial scales (50 km) through world-scale databases and finer-scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was interpreted from geomorphologic interpretation and maps of quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of the literature and data from drillers' logs, and were adjusted during calibration of the model to streamflow measurements where available. Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown, along with details of processing input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily obtained from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Estimation of regional-scale groundwater flow properties in the Bengal Basin of India and Bangladesh
Michael, H.A.; Voss, C.I.
2009-01-01
Quantitative evaluation of management strategies for long-term supply of safe groundwater for drinking from the Bengal Basin aquifer (India and Bangladesh) requires estimation of the large-scale hydrogeologic properties that control flow. The Basin consists of a stratified, heterogeneous sequence of sediments with aquitards that may separate aquifers locally, but evidence does not support the existence of regional confining units. Considered at a large scale, the Basin may be aptly described as a single aquifer with higher horizontal than vertical hydraulic conductivity. Though data are sparse, estimation of regional-scale aquifer properties is possible from three existing data types: hydraulic heads, 14C concentrations, and driller logs. Estimation is carried out with inverse groundwater modeling using measured heads, by model calibration using estimated water ages based on 14C, and by statistical analysis of driller logs. Similar estimates of hydraulic conductivities result from all three data types; a resulting typical value of vertical anisotropy (ratio of horizontal to vertical conductivity) is 10^4. The vertical anisotropy estimate is supported by simulation of flow through geostatistical fields consistent with driller log data. The high estimated value of vertical anisotropy in hydraulic conductivity indicates that even disconnected aquitards, if numerous, can strongly control the equivalent hydraulic parameters of an aquifer system. © US Government 2009.
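The closing observation, that numerous aquitards produce large equivalent anisotropy, is illustrated by the classical layered-medium formulas: horizontal flow averages conductivity arithmetically, vertical flow harmonically. A small sketch with made-up layer values:

import numpy as np

def equivalent_anisotropy(k_layers, thickness):
    """Equivalent horizontal/vertical conductivity ratio of a layered column."""
    k = np.asarray(k_layers, dtype=float)
    b = np.asarray(thickness, dtype=float)
    kh = np.sum(k * b) / np.sum(b)       # arithmetic (parallel) mean
    kv = np.sum(b) / np.sum(b / k)       # harmonic (series) mean
    return kh / kv

# e.g. 2 m sand beds (10 m/d) interbedded with 0.1 m clay (1e-4 m/d):
# equivalent_anisotropy([10, 1e-4] * 50, [2.0, 0.1] * 50) gives ~4.5e3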
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple zeroth-order starting estimate from each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
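A minimal version of the proposed correction: treat the coarse microwave measurement as the area average of the fine-grid field plus noise, and apply a standard Kalman update to the fine-grid model estimate. All symbols are illustrative:

import numpy as np

def disaggregate_update(x_fine, P, y_coarse, r):
    """Correct fine-grid estimates with one coarse areal measurement.

    x_fine   : (n,) model estimate on the fine grid
    P        : (n, n) error covariance of x_fine
    y_coarse : scalar measurement = area mean of the fine grid + noise
    r        : measurement error variance
    """
    n = x_fine.size
    h = np.full(n, 1.0 / n)                  # averaging observation operator
    s = h @ P @ h + r                        # innovation variance (scalar)
    k = (P @ h) / s                          # Kalman gain (n,)
    x_new = x_fine + k * (y_coarse - h @ x_fine)
    P_new = P - np.outer(k, h @ P)
    return x_new, P_new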
Estimating the Effectiveness of Special Education Using Large-Scale Assessment Data
ERIC Educational Resources Information Center
Ewing, Katherine Anne
2009-01-01
The inclusion of students with disabilities in large scale assessment and accountability programs has provided new opportunities to examine the impact of special education services on student achievement. Hanushek, Kain, and Rivkin (1998, 2002) evaluated the effectiveness of special education programs by examining students' gains on a large-scale…
Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.
2017-01-01
Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob; Gonder, Jeff
New technologies, such as connected and automated vehicles, have attracted more and more researchers aiming to improve the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before the vehicle departs. It benefits the current transportation system with a fuel saving opportunity by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework has the capability to quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel savings potential, with a cumulative estimated fuel savings of 12%.
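The framework's core comparison is between the estimated fuel use of the actual route and that of the navigation-procured route. A schematic link-sum fuel model keyed by speed, functional class, and grade; the rate table is a stand-in for the paper's calibrated model:

def route_fuel_liters(links, rate_table):
    """Sum per-link fuel use for a route.

    links      : iterable of dicts with 'length_km', 'speed_kph',
                 'functional_class', 'grade_pct'
    rate_table : callable (speed, fclass, grade) -> liters per km
    """
    return sum(link["length_km"]
               * rate_table(link["speed_kph"],
                            link["functional_class"],
                            link["grade_pct"])
               for link in links)

# savings potential = fuel(actual route) - min(fuel over candidate routes)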
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analyses of solar wind fluctuations are often performed in the context of intermittent turbulence and center on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed. Natural systems such as the solar wind thus unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite-variance processes, to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and real-world time series, and we find that, in particular for heavy-tailed processes, for realizable N one is far from this ~1/N limiting behavior. We propose a semi-empirical estimate of the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some real-world time series from the solar wind. With fewer data points, a stationary time series becomes indistinguishable from a nonstationary process, and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
How well can regional fluxes be derived from smaller-scale estimates?
NASA Technical Reports Server (NTRS)
Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.
1992-01-01
Regional surface fluxes are essential lower boundary conditions for large scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary with length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas was done by assigning properties to surface classes and combining estimated or calculated fluxes using an area weighted average. It is not clear that a simple area weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with aircraft borne infrared thermometer, and flight level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known even in simple geometries.
Estimating hourly PM1 concentrations from Himawari-8 aerosol optical depth in China.
Zang, Lin; Mao, Feiyue; Guo, Jianping; Gong, Wei; Wang, Wei; Pan, Zengxin
2018-06-11
Particulate matter with diameter less than 1 μm (PM1) has been found to be closely associated with air quality, climate change, and adverse human health effects. However, a large gap remains in our knowledge of the large-scale distribution and variability of PM1, which is expected to be bridged with advanced remote-sensing techniques. In this study, a hybrid model called principal component analysis-general regression neural network (PCA-GRNN) is developed to estimate hourly PM1 concentrations from Himawari-8 aerosol optical depth in combination with coincident ground-based PM1 measurements in China. Results indicate that the hourly estimated PM1 concentrations from satellite agree well with the measured values at the national scale, with R² of 0.65, root-mean-square error (RMSE) of 22.0 μg/m³ and mean absolute error (MAE) of 13.8 μg/m³. On daily and monthly time scales, R² increases to 0.70 and 0.81, respectively. Spatially, highly polluted regions of PM1 are largely located in the North China Plain and Northeast China, in accordance with the distribution of industrialisation and urbanisation. In terms of diurnal variability, PM1 concentration tends to peak at rush hours during the daytime. PM1 exhibits distinct seasonality, with winter having the largest concentration (31.5 ± 3.5 μg/m³), largely due to peak combustion emissions. We further attempt to estimate PM2.5 and PM10 with the proposed method and find that the accuracies of the proposed model for PM1 and PM2.5 estimation are significantly higher than for PM10. Our findings suggest that geostationary data are promising for estimating fine-particle concentrations at large spatial scales. Copyright © 2018 Elsevier Ltd. All rights reserved.
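A GRNN is essentially Nadaraya-Watson kernel regression, here applied to PCA-reduced predictors. A compact sketch of the hybrid with the kernel bandwidth sigma as the one tunable parameter; names and structure are illustrative, not the authors' code:

import numpy as np

def pca_fit(X, n_components):
    """Return the mean and principal axes used to project predictors."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def grnn_predict(X_train, y_train, X_query, sigma):
    """General regression neural network = Gaussian-kernel regression."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# usage: mu, V = pca_fit(predictors, 5); Z = (predictors - mu) @ V.T
# then grnn_predict on the reduced coordinates Z to estimate hourly PM1.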
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
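If the leading finite-time and finite-size corrections are assumed to scale as 1/T and 1/N (an assumption made for this sketch; the paper derives the appropriate scalings), the infinite limit can be read off as the intercept of a joint linear fit over estimates measured at several (T, N) pairs:

import numpy as np

def extrapolate_ldf(estimates, T_values, N_values):
    """Extract the infinite-time, infinite-size limit of a large deviation
    function estimator, assuming f(T, N) ~ f_inf + a/T + b/N."""
    A = np.column_stack([np.ones(len(estimates)),
                         1.0 / np.asarray(T_values, dtype=float),
                         1.0 / np.asarray(N_values, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(estimates, dtype=float),
                                 rcond=None)
    f_inf, a, b = coeffs
    return f_inf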
NASA Astrophysics Data System (ADS)
Abreu, P.; Aglietta, M.; Ahn, E. J.; et al. (Pierre Auger Collaboration)
2011-11-01
We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with a zenith angle smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays up to the ~ 2% level at large zenith angles. We present a method to account for this modulation of the reconstructed energy. We analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium- and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese hamster ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
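For illustration, a minimal serial sketch of the scatter-search loop that saCeSS builds on. The toy objective (Rosenbrock) stands in for the lack-of-fit of a kinetic model, and all names and settings here are assumptions for this sketch, not the authors' implementation; in saCeSS many such searches run in parallel and asynchronously exchange their best reference-set members.

```python
# Minimal scatter-search-style loop (illustrative only; saCeSS adds
# asynchronous cooperation between parallel processes and self-tuning).
import numpy as np

def objective(x):
    # Rosenbrock function as a stand-in for a model-fit cost.
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def scatter_search(n_dim=5, pop=20, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    refset = rng.uniform(-2.0, 2.0, size=(pop, n_dim))   # reference set
    costs = np.array([objective(x) for x in refset])
    for _ in range(iters):
        i, j = rng.choice(pop, size=2, replace=False)
        # Combine two reference solutions along their connecting line:
        child = refset[i] + rng.uniform(-0.5, 1.5) * (refset[j] - refset[i])
        c = objective(child)
        worst = np.argmax(costs)
        if c < costs[worst]:                             # greedy replacement
            refset[worst], costs[worst] = child, c
    best = np.argmin(costs)
    return refset[best], costs[best]

x_best, f_best = scatter_search()
print(x_best, f_best)
```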
Machtans, Craig S.; Thogmartin, Wayne E.
2014-01-01
The publication of a U.S. estimate of bird–window collisions by Loss et al. is an example of the somewhat contentious approach of using extrapolations to obtain large-scale estimates from small-scale studies. We review the approach by Loss et al. and other authors who have published papers on human-induced avian mortality and describe the drawbacks and advantages of publishing what could be considered imperfect science. The main drawback is the inherent and somewhat unquantifiable bias of using small-scale studies to scale up to a national estimate. The direct benefits include the development of new methodologies for creating the estimates, an explicit treatment of known biases with acknowledged uncertainty in the final estimate, and the novel results. Other overarching benefits are that these types of papers are catalysts for improving all aspects of the science of estimates and for policies that must respond to the new information.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta = Omega^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
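The beta estimate described above amounts to regressing the (negative, Hubble-normalized) velocity divergence on the galaxy density contrast. A minimal sketch with synthetic fields, assuming linear theory and unit bias:

```python
# Linear-theory beta estimate: -div(v)/H0 = beta * delta_gal.
# Fields below are synthetic stand-ins for smoothed survey data.
import numpy as np

rng = np.random.default_rng(1)
H0 = 100.0                                        # km/s/Mpc (h = 1 units)
beta_true = 0.5

delta_mass = rng.normal(0.0, 0.3, size=1000)      # mass density contrast
div_v = -H0 * beta_true * delta_mass              # linear-theory divergence
delta_gal = delta_mass + rng.normal(0.0, 0.05, 1000)  # b = 1 tracer + noise

beta_est = np.sum(-div_v / H0 * delta_gal) / np.sum(delta_gal**2)
print(beta_est)   # recovers ~0.5
```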
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2010-01-01
One of the major objectives of large-scale educational surveys is reporting trends in academic achievement. For this purpose, a substantial number of items are carried from one assessment cycle to the next. The linking process that places academic abilities measured in different assessments on a common scale is usually based on a concurrent…
Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method
2010-01-25
Spatial and temporal soil moisture dynamics are critically needed to...scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large scale variability is a crucial...dynamics is essential in the hydrological and meteorological modeling, improves our understanding of land surface–atmosphere interactions.
Hydraulic head applications of flow logs in the study of heterogeneous aquifers
Paillet, Frederick L.
2001-01-01
Permeability profiles derived from high-resolution flow logs in heterogeneous aquifers provide a limited sample of the most permeable beds or fractures determining the hydraulic properties of those aquifers. This paper demonstrates that flow logs can also be used to infer the large-scale properties of aquifers surrounding boreholes. The analysis is based on the interpretation of the hydraulic head values estimated from the flow log analysis. Pairs of quasi-steady flow profiles obtained under ambient conditions and while either pumping or injecting are used to estimate the hydraulic head in each water-producing zone. Although the analysis yields localized estimates of transmissivity for a few water-producing zones, the hydraulic head estimates apply to the far-field aquifers to which these zones are connected. The hydraulic head data are combined with information from other sources to identify the large-scale structure of heterogeneous aquifers. More complicated cross-borehole flow experiments are used to characterize the pattern of connection between large-scale aquifer units inferred from the hydraulic head estimates. The interpretation of hydraulic heads in situ under steady and transient conditions is illustrated by several case studies, including an example with heterogeneous permeable beds in an unconsolidated aquifer, and four examples with heterogeneous distributions of bedding planes and/or fractures in bedrock aquifers.
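As a sketch of the head-estimation step, assume the linear inflow model q_i = T_i (h_i - h_well) for each producing zone; subtracting the ambient and pumped balance equations eliminates the unknown far-field head. All numbers below are hypothetical:

```python
# Solve for zone transmissivity and far-field head from a pair of
# quasi-steady flow-log profiles (ambient and pumped). Illustrative only.
import numpy as np

h_well_ambient = 10.00   # water level in well, ambient (m)
h_well_pumped  =  8.50   # drawn-down water level while pumping (m)

q_ambient = np.array([0.20, -0.05, 0.10])   # inflow per zone (L/s)
q_pumped  = np.array([0.90,  0.40, 0.55])

# Subtracting the two balance equations eliminates the unknown head h_i:
T = (q_pumped - q_ambient) / (h_well_ambient - h_well_pumped)
h = h_well_ambient + q_ambient / T

print(T)  # zone "transmissivity" (L/s per m of drawdown)
print(h)  # far-field head of each producing zone (m)
```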
Sensitivity analysis of key components in large-scale hydroeconomic models
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.
2008-12-01
This paper explores the likely impact of different estimation methods on key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.
NASA Technical Reports Server (NTRS)
Poulton, C. E.
1972-01-01
A multiple sampling technique was developed whereby spacecraft photographs supported by aircraft photographs could be used to quantify plant communities. Large-scale (1:600 to 1:2,400) color infrared aerial photographs were required to identify shrub and herbaceous species. These photos were used to successfully estimate herbaceous standing-crop biomass. Microdensitometry was used to discriminate among specific plant communities and individual plant species. Large-scale infrared photography was also used to estimate mule deer deaths and the population density of northern pocket gophers.
Weighing trees with lasers: advances, challenges and opportunities
Boni Vicari, M.; Burt, A.; Calders, K.; Lewis, S. L.; Raumonen, P.; Wilkes, P.
2018-01-01
Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods. PMID:29503726
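For context, a minimal sketch of the kind of allometric scaling equation (ASE) that TLS-derived tree models can recalibrate, fit in log-log space; the diameter/biomass pairs are invented placeholders:

```python
# Sketch of a power-law ASE, AGB = a * D^b, fit by log-log regression.
import numpy as np

D   = np.array([12.0, 18.0, 25.0, 33.0, 41.0, 55.0, 70.0])     # dbh (cm)
AGB = np.array([55.0, 160., 390., 820., 1500., 3400., 6200.])  # biomass (kg)

b, log_a = np.polyfit(np.log(D), np.log(AGB), 1)
a = np.exp(log_a)
print(a, b)             # fitted coefficients
print(a * 30.0**b)      # predicted AGB for a 30 cm tree (kg)
```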
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
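A minimal sketch of the extrapolation idea, assuming the leading finite-size and finite-time corrections are of order 1/N and 1/t (the estimates below are synthetic):

```python
# Fit large-deviation estimates psi(N, t) from cloning runs to
# psi_inf + a/N + b/t and read off the infinite-size, infinite-time limit.
import numpy as np

N = np.array([50, 100, 200, 50, 100, 200, 50, 100, 200])   # clone numbers
t = np.array([10, 10, 10, 20, 20, 20, 40, 40, 40])         # run times
psi_true, a, b = -0.25, 1.3, 2.1
psi = psi_true + a / N + b / t + np.random.default_rng(0).normal(0, 1e-3, 9)

# Linear least squares in the regressors (1, 1/N, 1/t):
X = np.column_stack([np.ones_like(psi), 1.0 / N, 1.0 / t])
coef, *_ = np.linalg.lstsq(X, psi, rcond=None)
print(coef[0])   # extrapolated estimate, close to -0.25
```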
Contribution of aboveground plant respiration to carbon cycling in a Bornean tropical rainforest
NASA Astrophysics Data System (ADS)
Katayama, Ayumi; Tanaka, Kenzo; Ichie, Tomoaki; Kume, Tomonori; Matsumoto, Kazuho; Ohashi, Mizue; Kumagai, Tomo'omi
2014-05-01
Bornean tropical rainforests differ from Amazonian tropical rainforests in having larger aboveground biomass, caused by a higher stand density of large trees. Larger biomass may cause different carbon cycling and allocation patterns. However, there are fewer studies on carbon allocation and its components in Bornean tropical rainforests than in Amazonian forests, especially for aboveground plant respiration. In this study, we measured woody tissue respiration and leaf respiration, and estimated them at ecosystem scale in a Bornean tropical rainforest. We then examined carbon allocation using data on soil respiration and aboveground net primary production obtained from our previous studies. The woody tissue respiration rate was positively correlated with diameter at breast height (dbh) and stem growth rate. Using these relationships and biomass data, we estimated woody tissue respiration at ecosystem scale, though different scaling methods yielded different estimates (4.52 - 9.33 MgC ha-1 yr-1). Woody tissue respiration based on surface area (8.88 MgC ha-1 yr-1) was larger than in Amazonia because of the large aboveground biomass (563.0 Mg ha-1). The leaf respiration rate was positively correlated with height. Using this relationship and leaf area density data at each 5-m height interval, leaf respiration at ecosystem scale was estimated (9.46 MgC ha-1 yr-1), similar to Amazonian values because of comparable LAI (5.8 m2 m-2). Gross primary production estimated from biometric measurements (44.81 MgC ha-1 yr-1) was much higher than in Amazonia, and more carbon was allocated to woody tissue respiration and total belowground carbon flux. Large trees with dbh > 60 cm accounted for about half of aboveground biomass and aboveground biomass increment. Soil respiration was also related to the position of large trees, resulting in high soil respiration rates at this study site. The photosynthetic ability of the top canopy of large trees was high, and leaves of the large trees accounted for 30% of the total, which can lead to high GPP. These results suggest that large trees play a considerable role in carbon cycling and create a distinctive carbon allocation pattern in the Bornean tropical rainforest.
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and the proportion of urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and the associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
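For a flavor of the threshold-model likelihood, here is a minimal non-hierarchical sketch: a logistic occurrence model with a hinge at an unknown change point c on a proportion-scale covariate. The data, the hinge form, and the Nelder-Mead fit are all illustrative assumptions, not the authors' HBTM:

```python
# Logistic threshold (change point) model for occurrence data, fit by
# maximum likelihood. Synthetic data; sketch only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 400)                      # proportion urban land use
c_true, b0_true, b1_true = 0.15, 1.0, -12.0
eta = b0_true + b1_true * np.clip(x - c_true, 0, None)  # hinge at c
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def nll(par):
    b0, b1, c = par
    eta = b0 + b1 * np.clip(x - c, 0, None)
    # Negative Bernoulli log-likelihood (numerically stable form):
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

fit = minimize(nll, x0=[0.0, -5.0, 0.3], method="Nelder-Mead")
print(fit.x)    # estimates of (b0, b1, change point)
```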
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig
It is argued by extrapolation of general relativity and quantum mechanics that a classical inertial frame corresponds to a statistically defined observable that rotationally fluctuates due to Planck scale indeterminacy. Physical effects of exotic nonlocal rotational correlations on large scale field states are estimated. Their entanglement with the strong interaction vacuum is estimated to produce a universal, statistical centrifugal acceleration that resembles the observed cosmological constant.
USDA-ARS?s Scientific Manuscript database
The Cosmic-ray Soil Moisture Observing System (COSMOS) is a new and innovative method for estimating surface and near surface soil moisture at large (~700 m) scales. This system accounts for liquid water within its measurement volume. Many of the sites used in the early validation of the system had...
USDA-ARS?s Scientific Manuscript database
Accurate estimation of surface energy fluxes at field scale over large areas has the potential to improve agricultural water management in arid and semiarid watersheds. Remote sensing may be the only viable approach for mapping fluxes over heterogeneous landscapes. The Two-Source Energy Balance mode...
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated; ADM1 serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and an 'informative' experiment, both heuristically designed. The model structure of ADM1 has been modified by replacing parameters with parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimation difficult in practice.
Upper Washita River experimental watersheds: Multiyear stability of soil water content profiles
USDA-ARS?s Scientific Manuscript database
Scaling in situ soil water content time series data to a large spatial domain is a key element of watershed environmental monitoring and modeling. The primary method of estimating and monitoring large-scale soil water content distributions is via in situ networks. It is critical to establish the s...
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is transformed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The whole optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all the variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
Extending large-scale forest inventories to assess urban forests.
Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter
2012-03-01
Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of the abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be obtained from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on data from the Italian NFI.
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of single-column model simulations and large-eddy simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
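The underlying estimate is a direct application of Gauss's theorem: the area-mean horizontal divergence equals the line integral of the outward wind component around the ring of sondes, and vertical velocity then follows from mass continuity. A sketch with synthetic winds on an assumed circular flight pattern:

```python
# Area-mean divergence from perimeter winds: D = (1/A) * contour integral
# of v_n dl, which reduces to 2*mean(v_n)/R on a circle. Synthetic data.
import numpy as np

R = 100e3                                              # circle radius (m)
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)  # sonde azimuths

# Synthetic winds sampled at the circle, at one level (field with
# divergence 4/R by construction):
u = 5.0 + 2.0 * np.cos(theta)    # m/s
v = -1.0 + 2.0 * np.sin(theta)

# Normal (outward) wind component at each sonde:
vn = u * np.cos(theta) + v * np.sin(theta)

D = 2.0 * np.mean(vn) / R
print(D)                         # ~4e-5 s^-1 for this synthetic field
```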
Perry, Jonathan M G; Cooke, Siobhán B; Runestad Connour, Jacqueline A; Burgess, M Loring; Ruff, Christopher B
2018-02-01
Body mass is an important component of any paleobiological reconstruction. Reliable skeletal dimensions for making estimates are desirable but extant primate reference samples with known body masses are rare. We estimated body mass in a sample of extinct platyrrhines and Fayum anthropoids based on four measurements of the articular surfaces of the humerus and femur. Estimates were based on a large extant reference sample of wild-collected individuals with associated body masses, including previously published and new data from extant platyrrhines, cercopithecoids, and hominoids. In general, scaling of joint dimensions is positively allometric relative to expectations of geometric isometry, but negatively allometric relative to expectations of maintaining equivalent joint surface areas. Body mass prediction equations based on articular breadths are reasonably precise, with %SEEs of 17-25%. The breadth of the distal femoral articulation yields the most reliable estimates of body mass because it scales similarly in all major anthropoid taxa. Other joints scale differently in different taxa; therefore, locomotor style and phylogenetic affinity must be considered when calculating body mass estimates from the proximal femur, proximal humerus, and distal humerus. The body mass prediction equations were applied to 36 Old World and New World fossil anthropoid specimens representing 11 taxa, plus two Haitian specimens of uncertain taxonomic affinity. Among the extinct platyrrhines studied, only Cebupithecia is similar to large, extant platyrrhines in having large humeral (especially distal) joints. Our body mass estimates differ from each other and from published estimates based on teeth in ways that reflect known differences in relative sizes of the joints and teeth. We prefer body mass estimators that are biomechanically linked to weight-bearing, and especially those that are relatively insensitive to differences in locomotor style and phylogenetic history. Whenever possible, extant reference samples should be chosen to match target fossils in joint proportionality.
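As a sketch of how such prediction equations are typically built: a log-log regression of body mass on an articular breadth, with the percent standard error of estimate (%SEE). The data pairs below are invented:

```python
# Body-mass prediction equation from a joint dimension, with %SEE.
import numpy as np

breadth = np.array([18., 22., 27., 31., 38., 44., 52.])     # distal femur (mm)
mass    = np.array([2.1, 3.4, 6.0, 8.2, 14.5, 21.0, 33.0])  # body mass (kg)

slope, intercept = np.polyfit(np.log(breadth), np.log(mass), 1)
pred = intercept + slope * np.log(breadth)
see_log = np.sqrt(np.sum((np.log(mass) - pred)**2) / (len(mass) - 2))
pct_see = 100.0 * (np.exp(see_log) - 1.0)
print(slope, pct_see)   # scaling exponent and %SEE

# Predict mass for a fossil with a 35 mm articular breadth:
print(np.exp(intercept + slope * np.log(35.0)))
```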
Laboratory Studies of Carbon Emission from Biomass Burning for use in Remote Sensing
NASA Technical Reports Server (NTRS)
Wald, Andrew E.; Kaufman, Yoram J.
1998-01-01
Biomass burning is a significant source of many trace gases in the atmosphere. Up to 25% of the total anthropogenic carbon dioxide added to the atmosphere annually is from biomass burning. However, this gaseous emission from fires is not directly detectable from satellite. Infrared radiance from the fires is. In order to see if infrared radiance can be used as a tracer for these emitted gases, we made laboratory measurements to determine the correlation of emitted carbon dioxide, carbon monoxide and total burned biomass with emitted infrared radiance. If the measured correlations among these quantities hold in the field, then satellite-observed infrared radiance can be used to estimate gaseous emission and total burned biomass on a global, daily basis. To this end, several types of biomass fuels were burned under controlled conditions in a large-scale combustion laboratory. Simultaneous measurements of emitted spectral infrared radiance, emitted carbon dioxide, carbon monoxide, and total mass loss were made. In addition, measurements of fuel moisture content and fuel elemental abundance were made. We found that for a given fire, the quantity of carbon burned can be estimated from 11 µm radiance measurements only within a factor of five. This variation arises from three sources: 1) errors in our measurements, 2) the subpixel nature of the fires, and 3) inherent differences in combustion of different fuel types. Despite this large range, these measurements can still be used for large-scale satellite estimates of biomass burned. This is because of the very large possible spread of fire sizes that will be subpixel as seen by the Moderate Resolution Imaging Spectroradiometer (MODIS). Due to this large spread, even relatively low-precision correlations can still be useful for large-scale estimates of emitted carbon. Furthermore, such estimates using the MODIS 3.9 µm channel should be even more accurate than our estimates based on 11 µm radiance.
Bouwman, Aniek C; Hayes, Ben J; Calus, Mario P L
2017-10-30
Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of allele counts results in less shrinkage towards the mean for low minor allele frequency (MAF) variants. Scaling may become relevant for estimating ASE as more low-MAF variants will be used in genomic evaluations. We show the impact of scaling on estimates of ASE using real data and a theoretical framework, and in terms of power, model fit and predictive performance. In a dairy cattle dataset with 630 K SNP genotypes, the correlation between DGV for stature from a random regression model using centered allele counts (RRc) and centered and scaled allele counts (RRcs) was 0.9988, whereas the overall correlation between ASE using RRc and RRcs was 0.27. The main difference in ASE between both methods was found for SNPs with a MAF lower than 0.01. Both the ratio (ASE from RRcs/ASE from RRc) and the regression coefficient (regression of ASE from RRcs on ASE from RRc) were much higher than 1 for low-MAF SNPs. Derived equations showed that scenarios with a high heritability, a large number of individuals and a small number of variants have lower ratios between ASE from RRc and RRcs. We also investigated the optimal scaling parameter [from -1 (RRcs) to 0 (RRc) in steps of 0.1] in the bovine stature dataset. We found that the log-likelihood was maximized with a scaling parameter of -0.8, while the mean squared error of prediction was minimized with a scaling parameter of -1, i.e., RRcs. Large differences in estimated ASE were observed for low-MAF SNPs when allele counts were scaled or not scaled, because there is less shrinkage towards the mean for scaled allele counts. We derived a theoretical framework that shows that the difference in ASE due to shrinkage is heavily influenced by the power of the data. Increasing the power results in smaller differences in ASE whether allele counts are scaled or not.
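A small synthetic sketch of the centering-versus-scaling contrast using ridge (SNP-BLUP-style) estimates; the simulation settings and the ridge parameter are assumptions for illustration only:

```python
# Compare allele substitution effects (ASE) from centered allele counts
# (RRc) and centered-and-scaled counts (RRcs). Synthetic genotypes.
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 200
# Include a block of deliberately low-MAF SNPs:
p = np.concatenate([rng.uniform(0.002, 0.01, 20), rng.uniform(0.05, 0.5, 180)])
X = rng.binomial(2, p, size=(n, m)).astype(float)
beta = rng.normal(0, 0.1, m)
y = X @ beta + rng.normal(0, 1.0, n)

def ridge(Z, lam):
    # Ridge solution on the working scale of Z.
    return np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ (y - y.mean()))

Xc = X - 2 * p                                   # centered (RRc)
Xcs = Xc / np.sqrt(2 * p * (1 - p))              # centered + scaled (RRcs)

ase_c = ridge(Xc, lam=m)                                  # per-allele ASE
ase_cs = ridge(Xcs, lam=m) / np.sqrt(2 * p * (1 - p))     # rescaled back

low = p < 0.01
print(np.corrcoef(ase_c, ase_cs)[0, 1])          # overall agreement
print(np.median(np.abs(ase_cs[low] / ase_c[low])))  # inflated at low MAF
```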
Scaling an in situ network for high resolution modeling during SMAPVEX15
USDA-ARS?s Scientific Manuscript database
Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in si...
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980; Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws that were among the first to include the gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to provide bounds on crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recently, Holsapple and Schmidt (1987) included results for non-porous materials, providing a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented. These include results for various target media and the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
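A sketch of a gravity-regime pi-group scaling estimate of the kind this literature develops, pi_V = K * pi_2^(-beta); the constants below are illustrative assumptions, not the paper's fitted values:

```python
# Gravity-regime crater scaling: scaled crater volume pi_V as a power law
# in the gravity-scaled size pi_2 = g*a/v^2. Constants are illustrative.
import numpy as np

K, beta = 0.24, 0.65      # assumed material constants (sand-like target)
rho_t = 1700.0            # target density (kg/m^3)
g = 9.81                  # surface gravity (m/s^2)

def crater_volume(a, v, rho_i=3000.0):
    """Impactor radius a (m) and speed v (m/s) -> crater volume (m^3)."""
    m = rho_i * 4.0 / 3.0 * np.pi * a**3   # impactor mass
    pi2 = g * a / v**2                     # gravity-scaled size
    piV = K * pi2**(-beta)                 # scaled crater volume
    return piV * m / rho_t

print(crater_volume(a=500.0, v=20e3))      # volume for a 1-km impactor
```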
Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.
2016-09-28
Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.
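The method is volumetric at heart, G = A * h * phi * rho * E, with the storage efficiency factor E carrying most of the screening-stage uncertainty. A sketch with hypothetical inputs and a Monte Carlo treatment of E:

```python
# Volumetric prospective CO2 storage estimate with uncertain efficiency.
# All parameter values are hypothetical screening-stage inputs.
import numpy as np

rng = np.random.default_rng(4)
A = 2.0e9        # formation area (m^2)
h = 50.0         # net thickness (m)
phi = 0.15       # porosity
rho = 700.0      # CO2 density at reservoir conditions (kg/m^3)

E = rng.triangular(0.01, 0.03, 0.06, size=100_000)  # storage efficiency
G = A * h * phi * rho * E / 1e12                    # resource in Gt

print(np.percentile(G, [10, 50, 90]))   # P10/P50/P90 estimates
```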
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2015-05-01
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and concerns about privacy issues in using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
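The friendship-paradox bias that the tail-scope method exploits is easy to demonstrate: degrees of randomly chosen neighbors oversample hubs relative to degrees of uniformly chosen nodes. A pure-Python toy (preferential-attachment network, illustrative only):

```python
# Uniform node sampling vs. neighbor (edge-end) sampling of degrees.
import random
import numpy as np

random.seed(5)
# Toy network with a heavy-ish degree tail via preferential attachment:
edges, targets = [], [0, 1]
for new in range(2, 5000):
    for dst in random.sample(targets, 2):
        edges.append((new, dst))
        targets.extend([new, dst])

deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1

nodes = list(deg)
uniform = [deg[random.choice(nodes)] for _ in range(2000)]
neighbor = [deg[random.choice(random.choice(edges))] for _ in range(2000)]

print(np.mean(uniform), np.max(uniform))    # misses the tail
print(np.mean(neighbor), np.max(neighbor))  # neighbor sampling finds hubs
```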
Retrieving cosmological signal using cosmic flows
NASA Astrophysics Data System (ADS)
Bouillot, V.; Alimi, J.-M.
2011-12-01
To understand the origin of the anomalously high bulk flow at large scales, we use very large simulations of various cosmological models. To disentangle cosmological from environmental effects, we select samples with bulk flow profiles similar to the observational data of Watkins et al. (2009), which exhibit a maximum in the bulk flow at 53 h^{-1} Mpc. The estimation of the cosmological parameters Ω_M and σ_8, done on those samples, is correct when based on the rms mass fluctuation, whereas this estimation gives completely false values when done on bulk flow measurements, hence showing a dependence of velocity fields on larger scales. By drawing a clear link between velocity fields at 53 h^{-1} Mpc and asymmetric patterns of the density field at 85 h^{-1} Mpc, we show that the bulk flow can depend largely on the environment. The cosmological signal is retrieved by studying the convergence of the bulk flow towards the linear prediction at very large scales (˜ 150 h^{-1} Mpc).
Katrin Premke; Katrin Attermeyer; Jurgen Augustin; Alvaro Cabezas; Peter Casper; Detlef Deumlich; Jorg Gelbrecht; Horst H. Gerke; Arthur Gessler; Hans-Peter Grossart; Sabine Hilt; Michael Hupfer; Thomas Kalettka; Zachary Kayler; Gunnar Lischeid; Michael Sommer; Dominik Zak
2016-01-01
Landscapes can be viewed as spatially heterogeneous areas encompassing terrestrial and aquatic domains. To date, most landscape carbon (C) fluxes have been estimated by accounting for terrestrial ecosystems, while aquatic ecosystems have been largely neglected. However, a robust assessment of C fluxes on the landscape scale requires the estimation of fluxes within and...
Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment
ERIC Educational Resources Information Center
Cui, Ying; Mousavi, Amin
2015-01-01
The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
ERIC Educational Resources Information Center
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.
2017-01-01
Competence data from low-stakes educational large-scale assessment studies allow for evaluating relationships between competencies and other variables. The impact of item-level nonresponse has not been investigated with regard to statistics that determine the size of these relationships (e.g., correlations, regression coefficients). Classical…
Effects of Design Properties on Parameter Estimation in Large-Scale Assessments
ERIC Educational Resources Information Center
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas
2015-01-01
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions in which the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
Online estimation of the wavefront outer scale profile from adaptive optics telemetry
NASA Astrophysics Data System (ADS)
Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.
2017-02-01
We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters (turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are later fitted to a linear combination of the simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (I) its insensitivity to large values of L0(h), as the telescope becomes blind to outer scales larger than its diameter; (II) the maximum number of observable layers given the limited number of independent inputs that the cross-correlation functions provide; and (III) the minimum length of data required for a satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system that comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during 3 yr of campaigns show an interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos
NASA Technical Reports Server (NTRS)
Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.
1994-01-01
Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe, with the spectrum normalized to the anisotropies detected by the Cosmic Background Explorer (COBE), are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
2018-01-01
We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
Estimating carbon fluxes on small rotationally grazed pastures
USDA-ARS?s Scientific Manuscript database
Satellite-based Normalized Difference Vegetation Index (NDVI) data have been extensively used for estimating gross primary productivity (GPP) and yield of grazing lands throughout the world. Large-scale estimates of GPP are a necessary component of efforts to monitor the soil carbon balance of grazi...
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
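A minimal sketch of a two-scale, multi-method occupancy likelihood for a single visit per method: psi is unit-level use, theta is local presence, and p_m is method-specific detection. The data and the simple one-visit design are assumptions for illustration, not the authors' full model:

```python
# Two-scale occupancy with two detection methods, fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(6)
n_units, psi_t, theta_t = 300, 0.7, 0.6
p_t = np.array([0.5, 0.3])                          # per-method detection

z = rng.binomial(1, psi_t, n_units)                 # unit-level use
a = rng.binomial(1, theta_t, n_units) * z           # local presence
y = rng.binomial(1, p_t, size=(n_units, 2)) * a[:, None]  # detections

def nll(par):
    psi, theta = expit(par[0]), expit(par[1])
    p = expit(par[2:4])
    like_pres = np.prod(p**y * (1 - p)**(1 - y), axis=1)  # Pr(history|present)
    none = (y.sum(axis=1) == 0)
    # No-detection histories can also arise from local absence or non-use:
    lik = psi * theta * like_pres + none * (psi * (1 - theta) + 1 - psi)
    return -np.sum(np.log(lik))

fit = minimize(nll, np.zeros(4), method="Nelder-Mead")
print(expit(fit.x))   # estimates of (psi, theta, p_method1, p_method2)
```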
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boun...
Measuring coral reef decline through meta-analyses
Côté, I.M; Gill, J.A; Gardner, T.A; Watkinson, A.R
2005-01-01
Coral reef ecosystems are in decline worldwide, owing to a variety of anthropogenic and natural causes. One of the most obvious signals of reef degradation is a reduction in live coral cover. Past and current rates of loss of coral are known for many individual reefs; however, until recently, no large-scale estimate was available. In this paper, we show how meta-analysis can be used to integrate existing small-scale estimates of change in coral and macroalgal cover, derived from in situ surveys of reefs, to generate a robust assessment of long-term patterns of large-scale ecological change. Using a large dataset from Caribbean reefs, we examine the possible biases inherent in meta-analytical studies and the sensitivity of the method to patchiness in data availability. Despite the fact that our meta-analysis included studies that used a variety of sampling methods, the regional estimate of change in coral cover we obtained is similar to that generated by a standardized survey programme that was implemented in 1991 in the Caribbean. We argue that for habitat types that are regularly and reasonably well surveyed in the course of ecological or conservation research, meta-analysis offers a cost-effective and rapid method for generating robust estimates of past and current states. PMID:15814352
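At its core, such a meta-analysis pools per-study estimates by inverse-variance weighting. A minimal fixed-effect sketch with invented study values:

```python
# Fixed-effect inverse-variance pooling of per-study estimates of annual
# change in coral cover. Study values are placeholders.
import numpy as np

change = np.array([-2.1, -0.8, -3.5, -1.2, -0.4])  # % cover change / yr
se     = np.array([ 0.9,  0.4,  1.5,  0.6,  0.5])  # standard errors

w = 1.0 / se**2                        # inverse-variance weights
pooled = np.sum(w * change) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(pooled, pooled_se)               # regional estimate and its SE
```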
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
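A minimal sketch of estimating a spectral scaling exponent beta by fitting the log-log power spectrum of a series; a 1-D random walk (spectrum roughly proportional to k^-2) stands in for the satellite T or q fields:

```python
# Power-law scaling exponent from a log-log fit to the power spectrum.
import numpy as np

rng = np.random.default_rng(7)
n = 1024
x = np.cumsum(rng.normal(size=n))     # random walk: spectrum ~ k^-2

P = np.abs(np.fft.rfft(x - x.mean()))**2
k = np.arange(1, len(P))              # skip the zero-frequency bin
beta, _ = np.polyfit(np.log(k), np.log(P[1:]), 1)
print(beta)                           # roughly -2 for this series
```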
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
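The water-balance residual approach reduces to a one-line computation at annual scale, where net storage change is often assumed negligible; the numbers here are hypothetical:

```python
# Basin-scale ET as the annual water-balance residual: ET = P - Q - dS.
P = 820.0     # basin-average precipitation (mm/yr)
Q = 230.0     # discharge at the outlet, expressed as depth (mm/yr)
dS = 0.0      # assumed negligible net storage change at annual scale

ET = P - Q - dS
print(ET)     # basin-scale evapotranspiration estimate (mm/yr)
```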
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial-scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, and comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large-scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in the parameter estimation is recovered if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to larger wave numbers in the nonlinear regime can be included.
Horvitz-Thompson survey sample methods for estimating large-scale animal abundance
Samuel, M.D.; Garton, E.O.
1994-01-01
Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully and alternative methods for addressing these objectives should always be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey-to-survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey objectives and optimize decisions related to survey bias and variance. Finally, managers and researchers involved in the survey design process must realize that obtaining the best survey results requires an interactive and recursive process of survey design, execution, analysis and redesign. Survey refinements will be possible as further knowledge is gained on the actual abundance and distribution of the population and on the most efficient techniques for detecting animals.
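A minimal sketch of the core Horvitz-Thompson calculation referred to above, assuming known plot inclusion and detection probabilities (the paper's point is precisely that estimating these components well is the hard part); the counts and probabilities are illustrative:

```python
import numpy as np

def horvitz_thompson_abundance(counts, inclusion_prob, detection_prob):
    """Horvitz-Thompson abundance estimate: each plot count is inflated by
    the probability the plot was sampled and the probability an animal
    present on a sampled plot was detected."""
    counts = np.asarray(counts, dtype=float)
    return np.sum(counts / (inclusion_prob * detection_prob))

# Illustrative survey: 5 sampled plots, 20% of plots sampled, 80% detection
print(horvitz_thompson_abundance([12, 3, 0, 7, 5], 0.2, 0.8))  # -> 168.75
```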
NASA Astrophysics Data System (ADS)
Kovanen, Dori J.; Slaymaker, Olav
2008-07-01
Active debris flow fans in the North Cascade Foothills of Washington State constitute a natural hazard of importance to land managers, private property owners and personal security. In the absence of measurements of the sediment fluxes involved in debris flow events, a morphological-evolutionary systems approach, emphasizing stratigraphy, dating, fan morphology and debris flow basin morphometry, was used. Using the stratigraphic framework and 47 radiocarbon dates, frequency of occurrence and relative magnitudes of debris flow events have been estimated for three spatial scales of debris flow systems: the within-fan site scale (84 observations); the fan meso-scale (six observations) and the lumped fan, regional or macro-scale (one fan average and adjacent lake sediments). In order to characterize the morphometric framework, plots of basin area v. fan area, basin area v. fan gradient and the Melton ruggedness number v. fan gradient for the 12 debris flow basins were compared with those documented for semi-arid and paraglacial fans. Basin area to fan area ratios were generally consistent with the estimated level of debris flow activity during the Holocene as reported below. Terrain analysis of three of the most active debris flow basins revealed the variety of modes of slope failure and sediment production in the region. Micro-scale debris flow event systems indicated a range of recurrence intervals for large debris flows from 106-3645 years. The spatial variation of these rates across the fans was generally consistent with previously mapped hazard zones. At the fan meso-scale, the range of recurrence intervals for large debris flows was 273-1566 years and at the regional scale, the estimated recurrence interval of large debris flows was 874 years (with undetermined error bands) during the past 7290 years. Dated lake sediments from the adjacent Lake Whatcom gave recurrence intervals for large sediment producing events ranging from 481-557 years over the past 3900 years and clearly discernible sedimentation events in the lacustrine sediments had a recurrence interval of 67-78 years over that same period.
Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.
Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis
2015-04-01
Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues that prevent causal inference. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling up HIV prevention in low and middle income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
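The paper's system-GMM and fixed-effects IV estimators are needed precisely because a naive regression of cost on scale is biased by endogeneity; purely to illustrate the underlying specification (not the unbiased estimator), a log-log fit of average cost on scale might look like the sketch below, with hypothetical data.

```python
import numpy as np

# Hypothetical NGO-level data: people reached and total delivery cost
scale = np.array([500, 1200, 3000, 8000, 20000], dtype=float)
total_cost = np.array([30e3, 60e3, 120e3, 260e3, 520e3])

avg_cost = total_cost / scale
# ln(AC) = a + b * ln(scale); b < 0 indicates economies of scale
b, a = np.polyfit(np.log(scale), np.log(avg_cost), 1)
print(f"scale elasticity of average cost: {b:.2f}")  # negative here
```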
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction of path effects, site response, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking techniques to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays (the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field), we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods across completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
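As a minimal sketch of the single event-pair approach discussed above, assuming Brune (omega-squared) source spectra so that the spectral ratio has a closed form, the moment ratio and both corner frequencies can be fit by nonlinear least squares; the frequency band, synthetic "data", and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectral_ratio(f, moment_ratio, fc_large, fc_small):
    """Spectral ratio of two Brune (omega-squared) source spectra; path and
    site terms cancel when both events are recorded at the same station."""
    return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

# Illustrative fit to a measured ratio (frequencies in Hz)
freqs = np.logspace(-0.5, 1.5, 50)
ratio = brune_spectral_ratio(freqs, 100.0, 2.0, 15.0)  # synthetic "data"
popt, _ = curve_fit(brune_spectral_ratio, freqs, ratio, p0=[50.0, 1.0, 10.0])
print(popt)  # recovers the moment ratio and the two corner frequencies
```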
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are already available in the training phase. There are many examples in practice where this strategy has yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state of the art.
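A minimal sketch of a nearest neighbor density ratio estimator of the kind discussed above (the paper's estimator and its cross-validated neighborhood-size selection are more involved): the k-NN density estimate p(x) ≈ k / (n * V(r_k)) is formed in both samples, and the ball volumes cancel up to the ratio of k-th neighbor radii. The data below are synthetic.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_ratio(x_train, x_test, k=10):
    """Importance weight p_test(x)/p_train(x) at each training point via
    k-NN density estimates; ball volumes cancel, leaving the radius ratio."""
    n_tr, d = x_train.shape
    n_te = x_test.shape[0]
    # distance to the k-th neighbour within the training set (self excluded)
    nn_tr = NearestNeighbors(n_neighbors=k + 1).fit(x_train)
    r_tr = nn_tr.kneighbors(x_train)[0][:, -1]
    # distance to the k-th test-set neighbour of each training point
    nn_te = NearestNeighbors(n_neighbors=k).fit(x_test)
    r_te = nn_te.kneighbors(x_train)[0][:, -1]
    return (n_tr / n_te) * (r_tr / r_te) ** d

rng = np.random.default_rng(1)
x_labeled = rng.normal(0.0, 1.0, (2000, 2))    # biased labeled sample
x_unlabeled = rng.normal(0.5, 1.0, (2000, 2))  # target (unlabeled) sample
weights = knn_density_ratio(x_labeled, x_unlabeled)
print(weights[:5].round(2))                    # re-weighting factors for training
```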
Miller, Lee M; Kleidon, Axel
2016-11-29
Wind turbines generate electricity by removing kinetic energy from the atmosphere. Large numbers of wind turbines are likely to reduce wind speeds, which lowers estimates of electricity generation from what would be presumed from unaffected conditions. Here, we test how well wind power limits that account for this effect can be estimated without explicitly simulating atmospheric dynamics. We first use simulations with an atmospheric general circulation model (GCM) that explicitly simulates the effects of wind turbines to derive wind power limits (GCM estimate), and compare them to a simple approach derived from the climatological conditions without turbines [vertical kinetic energy (VKE) estimate]. On land, we find strong agreement between the VKE and GCM estimates with respect to electricity generation rates (0.32 and 0.37 We m−2) and wind speed reductions by 42 and 44%. Over ocean, the GCM estimate is about twice the VKE estimate (0.59 and 0.29 We m−2) and yet with comparable wind speed reductions (50 and 42%). We then show that this bias can be corrected by modifying the downward momentum flux to the surface. Thus, large-scale limits to wind power use can be derived from climatological conditions without explicitly simulating atmospheric dynamics. Consistent with the GCM simulations, the approach estimates that only comparatively few land areas are suitable to generate more than 1 We m−2 of electricity and that larger deployment scales are likely to reduce the expected electricity generation rate of each turbine. We conclude that these atmospheric effects are relevant for planning the future expansion of wind power.
Coefficient Alpha and Reliability of Scale Scores
ERIC Educational Resources Information Center
Almehrizi, Rashid S.
2013-01-01
The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (α; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…
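For reference, the raw-score formula for coefficient alpha that the abstract refers to is straightforward to compute; the sketch below uses simulated item scores.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(size=(200, 6))  # 6 roughly parallel items
print(round(cronbach_alpha(items), 2))       # ~0.86 for this simulation
```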
Cognitive Performance Decrement in U.S. Army Aircrews.
1985-08-31
[Fragmentary OCR of the report's front matter: acknowledgements of technical insight, patience, and understanding of the challenges associated with large-scale data collection, plus table-of-contents entries for an appendix "Scales for Helicopter Task Taxonomy" (p. 133) and Appendix F, "Literature Review on Time Estimation" (p. 137, with subsections F.1 Purpose and F.2).] The Glickman study indicates that the time estimation methodology employed by them did a minimal job of discriminating tasks. However, the current…
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
A model of forest floor carbon mass for United States forest types
James E. Smith; Linda S. Heath
2002-01-01
Includes a large set of published values of forest floor mass and develops large-scale estimates of carbon mass according to region and forest type. Estimates of average forest floor carbon mass per hectare of forest, applied to a 1997 summary forest inventory, sum to 4.5 Gt of carbon stored in forests of the 48 contiguous United States.
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died from thymic leukemia. This failure results from the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
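Under the location-scale model the quantile functions satisfy Q1(t) = μ + σ·Q2(t), so minimizing an average squared distance between the two empirical quantile processes reduces, in the uncensored case sketched here (unlike the paper's right-censored setting), to a least-squares fit:

```python
import numpy as np

def min_distance_location_scale(sample1, sample2, n_grid=99):
    """Estimate (mu, sigma) in Q1(t) = mu + sigma * Q2(t) by minimizing the
    average squared distance between empirical quantile functions over a
    grid of probabilities (equivalent to an OLS fit of Q1 on Q2)."""
    t = np.linspace(0.01, 0.99, n_grid)
    q1 = np.quantile(sample1, t)
    q2 = np.quantile(sample2, t)
    sigma, mu = np.polyfit(q2, q1, 1)  # slope = scale, intercept = location
    return mu, sigma

rng = np.random.default_rng(3)
x2 = rng.standard_normal(500)
x1 = 1.5 + 2.0 * rng.standard_normal(500)   # true location 1.5, scale 2.0
print(min_distance_location_scale(x1, x2))  # roughly (1.5, 2.0)
```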
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread; however, large-scale studies involving tens of thousands of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study, enrolling 11,002 community-dwelling adults in Japan and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites; however, its reliability for large-scale measurement is unclear. The aim of this study is to examine the reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in the future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples was completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients, were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, the coefficient of variation was less than 20% for 64 metabolites and less than 30% for 80 of the 94 metabolites. The inter-batch coefficient of variation was less than 20% for 81 metabolites. The estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of the Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. Compared to published data from other large cohort measurement platforms, reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
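A minimal sketch of the batch QC bookkeeping described above (per-metabolite coefficients of variation from repeated QC injections); the table, metabolite names, and values are hypothetical.

```python
import pandas as pd

# Hypothetical QC table: one row per QC injection, one column per metabolite
qc = pd.DataFrame({
    "batch":   [1, 1, 1, 2, 2, 2],
    "glycine": [210, 205, 215, 220, 212, 218],
    "alanine": [330, 342, 336, 351, 345, 348],
})

metabolites = qc.columns.drop("batch")
# Overall CV (%) per metabolite across all QC injections
overall_cv = 100 * qc[metabolites].std() / qc[metabolites].mean()
# Intra-batch CV (%): CV within each batch, averaged across batches
intra_cv = (100 * qc.groupby("batch")[metabolites].std()
            / qc.groupby("batch")[metabolites].mean()).mean()
print(overall_cv.round(1))
print(intra_cv.round(1))
```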
A Spatial Method to Calculate Small-Scale Fisheries Extent
NASA Astrophysics Data System (ADS)
Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.
2016-02-01
Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and number of fishing vessels in the oceans. The methods that currently exist are built around electronic tracking and logbook systems and generally focus on industrial fisheries. The spatial extent of fishing therefore remains elusive for many small-scale fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimated across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate that this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
Plant biomarkers in aerosols record isotopic discrimination of terrestrial photosynthesis.
Conte, Maureen H; Weber, John C
2002-06-06
Carbon uptake by the oceans and by the terrestrial biosphere can be partitioned using changes in the ¹²C/¹³C isotopic ratio (δ¹³C) of atmospheric carbon dioxide, because terrestrial photosynthesis strongly discriminates against ¹³CO₂, whereas ocean uptake does not. This approach depends on accurate estimates of the carbon isotopic discrimination of terrestrial photosynthesis (Δ; ref. 5) at large regional scales, yet terrestrial ecosystem heterogeneity makes such estimates problematic. Here we show that ablated plant wax compounds in continental air masses can be used to estimate Δ over large spatial scales and at less than monthly temporal resolution. We measured plant waxes in continental air masses advected to Bermuda, which are mainly of North American origin, and used the wax isotopic composition to estimate Δ simply. Our estimates indicate a large (5–6 per thousand) seasonal variation in Δ of the temperate North American biosphere, with maximum discrimination occurring in late spring, coincident with the onset of production. We suggest that the observed seasonality arises from several factors, including seasonal shifts in the proportions of production by C₃ and C₄ plants, and environmentally controlled adjustments in the photosynthetic discrimination of C₃-plant-dominated ecosystems.
Mapping canopy gap fraction and leaf area index at continent-scale from satellite lidar
NASA Astrophysics Data System (ADS)
Mahoney, C.; Hopkinson, C.; Held, A. A.
2015-12-01
Information on canopy cover is essential for understanding spatial and temporal variability in vegetation biomass, local meteorological processes and hydrological transfers within vegetated environments. Gap fraction (GF), an index of canopy cover, is often derived over large areas (100s of km2) via airborne laser scanning (ALS), and such estimates are reasonably well understood. However, obtaining country-wide estimates is challenging due to the lack of spatially distributed point cloud data. The Geoscience Laser Altimeter System (GLAS) removes these spatial limitations; however, its large footprint and continuous-waveform measurements make derivation of GF challenging. ALS data from 3 Australian sites are used as a basis to scale up GF estimates to GLAS footprint data by the use of a physically-based Weibull function. Spaceborne estimates of GF are employed in conjunction with supplementary predictor variables in the predictive Random Forest algorithm to yield country-wide estimates at a 250 m spatial resolution, accompanied by uncertainties at the pixel level. Preliminary estimates of effective Leaf Area Index (eLAI) are also presented by converting GF via the Beer-Lambert law, where an extinction coefficient of 0.5 is employed; deemed acceptable at such spatial scales. Such wide-scale quantification of GF and eLAI is key to the assessment and modification of current forest management strategies across Australia. This work also assists Australia's Terrestrial Ecosystem Research Network (TERN), a key asset to policy makers with regard to the management of the national ecosystem, in fulfilling its government-issued mandates.
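The Beer-Lambert conversion mentioned above is a one-liner; the sketch below assumes the extinction coefficient of 0.5 stated in the text.

```python
import numpy as np

def elai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert the Beer-Lambert law GF = exp(-k * eLAI) to get effective LAI;
    k = 0.5 is the extinction coefficient assumed in the text."""
    return -np.log(gap_fraction) / k

print(elai_from_gap_fraction(np.array([0.6, 0.3, 0.1])))  # ~[1.02, 2.41, 4.61]
```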
A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes
2013-01-01
Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680
NASA Astrophysics Data System (ADS)
Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.
2015-11-01
Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼2.84 × 10⁹ tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters, and further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented the viable range of large predators is greatly increased, and stunted biomass spectra characterizing large ocean regions in simulations without movement are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
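A minimal sketch of the classical Good-Toulmin series that the authors' rational-function approximation stabilizes: given f_j, the number of species observed exactly j times, the expected number of new species when the sample grows by a factor t is the sum over j of (-1)^(j+1) t^j f_j. The plain series is only reliable for t <= 1, which is precisely the motivation for their approximation; the counts below are illustrative.

```python
import numpy as np

def good_toulmin_new_species(counts, t):
    """Good-Toulmin estimate of the number of NEW species observed if the
    sample were enlarged by a factor t (plain series; stable for t <= 1)."""
    counts = np.asarray(counts)
    max_j = counts.max()
    f = np.bincount(counts, minlength=max_j + 1)[1:]  # f[j-1] = #species seen j times
    j = np.arange(1, max_j + 1)
    return np.sum((-1.0) ** (j + 1) * t ** j * f)

# Illustrative counts-per-species from a sequencing library
counts = np.array([1] * 50 + [2] * 20 + [3] * 5)
print(good_toulmin_new_species(counts, t=1.0))  # 50 - 20 + 5 = 35 new species
```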
Lifetime evaluation of large format CMOS mixed signal infrared devices
NASA Astrophysics Data System (ADS)
Linder, A.; Glines, Eddie
2015-09-01
New large-scale foundry processes continue to produce reliable products, and industry best practices are used to screen these large-scale devices for failure mechanisms and to validate their long lifetimes. Failure-in-Time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large-format device lifetimes; this analysis is a helpful tool when zero-failure life tests are typical. The reliability of a device is estimated by applying the failure rate to the use conditions. JEDEC publications remain the industry-accepted methods.
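As a minimal sketch of the zero-failure situation mentioned above, an upper bound on the failure rate can be taken from the standard chi-squared (equivalently, exponential) bound; the device count, test duration, acceleration factor, and confidence level below are hypothetical.

```python
import math

def zero_failure_fit(devices, hours, confidence=0.60, accel_factor=1.0):
    """Upper-bound failure rate (in FIT, failures per 1e9 device-hours) from
    a zero-failure life test, using lambda <= -ln(1 - C) / T at confidence C,
    where T is the (accelerated) total device-hours."""
    device_hours = devices * hours * accel_factor
    lam = -math.log(1.0 - confidence) / device_hours
    return lam * 1e9

# Hypothetical: 77 devices, 1000 h burn-in, 100x thermal acceleration, 0 fails
print(round(zero_failure_fit(77, 1000, 0.60, accel_factor=100.0), 1))  # ~119 FIT
```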
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
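The eSS metaheuristic itself is considerably more elaborate; the sketch below illustrates only the cooperation mechanism, several parallel search threads periodically publishing and pulling a shared best solution, using a toy random search on a benchmark cost in place of a systems-biology model.

```python
import threading
import numpy as np

def rastrigin(x):  # benchmark cost standing in for a model-calibration objective
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

best = {"x": None, "f": np.inf}
lock = threading.Lock()

def search_thread(seed, dim=10, iters=5000, share_every=200):
    """One 'thread' of a cooperative strategy: local random perturbations,
    periodically pulling and publishing the best solution found by any thread."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    fx = rastrigin(x)
    for i in range(iters):
        if i % share_every == 0:            # cooperation: read the shared best
            with lock:
                if best["f"] < fx:
                    x, fx = best["x"].copy(), best["f"]
        cand = x + rng.normal(scale=0.3, size=dim)
        fc = rastrigin(cand)
        if fc < fx:
            x, fx = cand, fc
            with lock:                      # cooperation: publish improvements
                if fx < best["f"]:
                    best["x"], best["f"] = x.copy(), fx

threads = [threading.Thread(target=search_thread, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(round(best["f"], 3))
```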
Harrington, Rebecca M.; Kwiatek, Grzegorz; Moran, Seth C.
2015-01-01
We analyze a group of 6073 low-frequency earthquakes recorded during a week-long temporary deployment of broadband seismometers at distances of less than 3 km from the crater at Mount St. Helens in September of 2006. We estimate the seismic moment (M0) and spectral corner frequency (f0) using a spectral ratio approach for events with a high signal-to-noise ratio (SNR) that have a cross-correlation coefficient of 0.8 or greater with at least five other events. A cluster analysis of cross-correlation values indicates that the group of 421 events meeting the SNR and cross-correlation criteria forms eight event families that exhibit largely self-similar scaling. We estimate the M0 and f0 values of the 421 events and calculate their static stress drop and scaled energy (ER/M0) values. The estimated values suggest self-similar scaling within families, as well as between five of eight families (i.e., approximately constant static stress drop and scaled energy). We speculate that differences in scaled energy values for the two families with variable scaling may result from a lack of resolution in the velocity model. The observation of self-similar scaling is the first of its kind for such a large group of low-frequency volcanic tectonic events occurring during a single active dome extrusion eruption.
Susanne Winter; Andreas Böck; Ronald E. McRoberts
2012-01-01
Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...
Controls on carbon consumption during Alaskan wildland fires
Eric S. Kasischke; Elizabeth E. Hoy
2012-01-01
A method was developed to estimate carbon consumed during wildland fires in interior Alaska based on medium-spatial scale data (60 m cell size) generated on a daily basis. Carbon consumption estimates were developed for 41 fire events in the large fire year of 2004 and 34 fire events from the small fire years of 2006-2008. Total carbon consumed during the large fire...
Microfilament-Eruption Mechanism for Solar Spicules
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, Ronald L.
2017-01-01
Recent studies indicate that solar coronal jets result from eruption of small-scale filaments, or "minifilaments" (Sterling et al. 2015, Nature, 523, 437; Panesar et al. ApJL, 832L, 7). In many respects, these coronal jets appear to be small-scale versions of long-recognized large-scale solar eruptions that are often accompanied by eruption of a large-scale filament and that produce solar flares and coronal mass ejections (CMEs). In coronal jets, the jet-base bright point (JBP), which is often observed to accompany the jet and which sits on the magnetic neutral line from which the minifilament erupts, corresponds to the solar flare of larger-scale eruptions that occurs at the neutral line from which the large-scale filament erupts. Large-scale eruptions are relatively uncommon (approximately 1 per day) and occur with relatively large-scale erupting filaments (approximately 10⁵ kilometers long). Coronal jets are more common (approximately 100s per day), but occur from erupting minifilaments of smaller size (approximately 10⁴ kilometers long). It is known that solar spicules are much more frequent (many millions per day) than coronal jets. Just as coronal jets are small-scale versions of large-scale eruptions, here we suggest that solar spicules might in turn be small-scale versions of coronal jets; we postulate that the spicules are produced by eruptions of "microfilaments" of length comparable to the width of observed spicules (approximately 300 kilometers). A plot of the estimated number of the three respective phenomena (flares/CMEs, coronal jets, and spicules) occurring on the Sun at a given time, against the average sizes of erupting filaments, minifilaments, and the putative microfilaments, results in a size distribution that can be fitted with a power law within the estimated uncertainties. The counterparts of the flares of large-scale eruptions and the JBPs of jets might be weak, pervasive, transient brightenings observed in Hinode/Ca II images, and the production of spicules by microfilament eruptions might explain why spicules spin, as coronal jets do. The small-scale neutral lines from which the microfilaments would be expected to erupt would be difficult to detect reliably with current instrumentation, but might be apparent with instrumentation of the near future. A full report on this work appears in Sterling and Moore 2016, ApJL, 829, L9.
NASA Astrophysics Data System (ADS)
Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.
2017-12-01
Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (± 3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet, none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling have existed so far. Here we present the first spatially and depth-explicit RPE model that allows estimates of the RPE on a large scale (PrimeSCale). We combine available spatial data (SOC, C/N, gross primary production (GPP), active layer thickness (ALT) and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and the RPE to individual soil depth increments. The model permits taking into account reasonable limiting factors on additional SOC losses by the RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5-95% CI: 1.5-23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by the RPE are further reduced to 6.5 Pg C (5-95% CI: 1.0-16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high-RPE risk in Siberian lowland areas and in Alaska north of the Brooks Range. The small overall impact of the RPE is largely explained by the interaction between belowground plant C allocation and SOC depth distribution. Our findings thus highlight the importance of fine-scale interactions between plant and soil properties for large-scale carbon fluxes, and we provide a first model that bridges this gap and permits quantification of the RPE across a large area.
Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
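A minimal sketch of the binomial N-mixture likelihood underlying the analysis described above (Poisson abundance, binomial detection, latent N summed out up to a truncation K); the counts and starting values are hypothetical, and the paper's Bayesian treatment and covariate structure are not reproduced.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmix_negloglik(theta, y, K=100):
    """Negative log-likelihood of a basic N-mixture model:
    N_i ~ Poisson(lam), y_ij | N_i ~ Binomial(N_i, p), N summed out up to K."""
    lam = np.exp(theta[0])                  # unconstrained parameterization
    p = 1 / (1 + np.exp(-theta[1]))
    Ns = np.arange(0, K + 1)
    prior = poisson.pmf(Ns, lam)
    ll = 0.0
    for counts in y:                        # one row of repeated counts per site
        lik_N = prior * np.prod(binom.pmf(counts[:, None], Ns, p), axis=0)
        ll += np.log(lik_N.sum())
    return -ll

# Hypothetical data: 3 repeated counts at each of 4 sites
y = np.array([[3, 2, 4], [0, 1, 0], [5, 5, 3], [2, 2, 2]])
fit = minimize(nmix_negloglik, x0=[np.log(3.0), 0.0], args=(y,))
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(round(lam_hat, 2), round(p_hat, 2))   # abundance rate and detection prob.
```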
An evaluation of sex-age-kill (SAK) model performance
Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent
2009-01-01
The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregate spatial coverage of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
How much a galaxy knows about its large-scale environment?: An information theoretic perspective
NASA Astrophysics Data System (ADS)
Pandey, Biswajit; Sarkar, Suman
2017-05-01
The small-scale environment characterized by the local density is known to play a crucial role in determining galaxy properties, but the role of the large-scale environment in galaxy formation and evolution remains less clear. We propose an information theoretic framework to investigate the influence of the large-scale environment on galaxy properties and apply it to data from the Galaxy Zoo project, which provides visual morphological classifications of ∼1 million galaxies from the Sloan Digital Sky Survey. We find a non-zero mutual information between morphology and environment that decreases with increasing length-scale but persists across the entire range of length-scales probed. We estimate the conditional mutual information and the interaction information between morphology and environment by conditioning the environment on different length-scales and find a synergic interaction between them that operates up to length-scales of at least ∼30 h⁻¹ Mpc. Our analysis indicates that these interactions largely arise due to the mutual information shared between the environments on different length-scales.
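A minimal sketch of a plug-in mutual information estimate between a discrete morphology label and a binned environment variable (the paper's framework, with conditioning across length-scales, is more involved); the synthetic data below are assumptions.

```python
import numpy as np

def mutual_information_bits(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete variables,
    e.g. a morphology class versus a binned large-scale density."""
    xi = np.unique(x, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)              # contingency table
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2((pxy / (px * py))[nz])))

rng = np.random.default_rng(4)
env = rng.integers(0, 5, 10_000)                              # binned density
morph = ((env > 2) ^ (rng.random(10_000) < 0.2)).astype(int)  # noisy dependence
print(round(mutual_information_bits(morph, env), 3))          # > 0 when dependent
```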
Haffenden, Angela M; Goodale, Melvyn A
2002-12-01
Previous findings have suggested that visuomotor programming can make use of learned size information in experimental paradigms where movement kinematics are quite consistent from trial to trial. The present experiment was designed to test whether or not this conclusion could be generalized to a different manipulation of kinematic variability. As in previous work, an association was established between the size and colour of square blocks (e.g. red = large; yellow = small, or vice versa). Associating size and colour in this fashion has been shown to reliably alter the perceived size of two test blocks halfway in size between the large and small blocks: estimations of the test block matched in colour to the group of large blocks are smaller than estimations of the test block matched to the group of small blocks. Subjects grasped the blocks, and on other trials estimated the size of the blocks. These changes in perceived block size were incorporated into grip scaling only when movement kinematics were highly consistent from trial to trial; that is, when the blocks were presented in the same location on each trial. When the blocks were presented in different locations grip scaling remained true to the metrics of the test blocks despite the changes in perceptual estimates of block size. These results support previous findings suggesting that kinematic consistency facilitates the incorporation of learned perceptual information into grip scaling.
A large-scale, long-term study of scale drift: The micro view and the macro view
NASA Astrophysics Data System (ADS)
He, W.; Li, S.; Kingsbury, G. G.
2016-11-01
The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates since the items were first calibrated, and at the macro level by examining groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop-counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments at low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
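The modeling stage lends itself to a compact sketch: an extreme learning machine is a fixed random hidden layer followed by a ridge-regularized linear readout. The toy version below (network layout, hop-count noise and all parameter values are assumptions for illustration only) learns a map from hop-count vectors to coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_relm(H_hops, coords, n_hidden=200, lam=1e-2):
    """Regularized extreme learning machine: random hidden layer, ridge output.

    H_hops : (n_anchors, n_anchors) hop-count vectors between anchor nodes
    coords : (n_anchors, 2) known anchor positions
    """
    W = rng.normal(size=(H_hops.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)
    Phi = np.tanh(H_hops @ W + b)                     # hidden-layer features
    # Ridge-regularized readout: (Phi'Phi + lam*I) beta = Phi' coords
    beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ coords)
    return lambda H_new: np.tanh(H_new @ W + b) @ beta

# Toy anisotropic network: hop counts derived from distorted distances.
anchors = rng.uniform(0, 100, size=(30, 2))
dists = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=-1)
hops = np.ceil(dists * (1.0 + 0.3 * rng.random(dists.shape)) / 10.0)
locate = train_relm(hops, anchors)
print(np.abs(locate(hops) - anchors).mean())  # mean absolute training error
```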
Urbazaev, Mikhail; Thiel, Christian; Cremer, Felix; Dubayah, Ralph; Migliavacca, Mirco; Reichstein, Markus; Schmullius, Christiane
2018-02-21
Information on the spatial distribution of aboveground biomass (AGB) over large areas is needed for understanding and managing processes involved in the carbon cycle and supporting international policies for climate change mitigation and adaptation. Furthermore, these products provide important baseline data for the development of sustainable management strategies to local stakeholders. The use of remote sensing data can provide spatially explicit information on AGB from local to global scales. In this study, we mapped national Mexican forest AGB using satellite remote sensing data and a machine learning approach. We modelled AGB using two scenarios: (1) an extensive national forest inventory (NFI), and (2) airborne Light Detection and Ranging (LiDAR) as reference data. Finally, we propagated uncertainties from field measurements to LiDAR-derived AGB and to the national wall-to-wall forest AGB map. The estimated AGB maps (NFI- and LiDAR-calibrated) showed similar goodness-of-fit statistics (R², root mean square error (RMSE)) at three different scales compared to the independent validation data set. We observed different spatial patterns of AGB in tropical dense forests, where no or only limited NFI data were available, with higher AGB values in the LiDAR-calibrated map. We estimated much higher uncertainties in the AGB maps based on the two-stage up-scaling method (i.e., from field measurements to LiDAR and from LiDAR-based estimates to satellite imagery) compared to the traditional field-to-satellite up-scaling. By removing LiDAR-based AGB pixels with high uncertainties, it was possible to estimate national forest AGB with uncertainties similar to a calibration with NFI data only. Since LiDAR data can be acquired much faster and for much larger areas than field inventory data, LiDAR is attractive for repetitive large-scale AGB mapping. In this study, we showed that two-stage up-scaling methods for AGB estimation over large areas need to be analyzed and validated with great care. The uncertainties in the LiDAR-estimated AGB propagate further into the wall-to-wall map and can be up to 150%. Thus, when a two-stage up-scaling method is applied, it is crucial to characterize the uncertainties at all stages in order to generate robust results. Considering the findings above, LiDAR can be used as an extension to NFI, for example in areas that are difficult or impossible to access.
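The central uncertainty message can be illustrated with a small Monte Carlo: propagating an extra model stage inflates the final coefficient of variation. The error magnitudes below are invented placeholders, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed relative errors (illustrative only):
cv_field = 0.10   # field-measurement error on plot AGB
cv_lidar = 0.25   # field -> LiDAR model error
cv_sat   = 0.30   # LiDAR -> satellite model error
agb_true = 120.0  # Mg/ha, hypothetical pixel value

n = 100_000
# Two-stage chain: each stage multiplies in its own relative error.
stage = agb_true * (1 + cv_field * rng.standard_normal(n))
stage = stage    * (1 + cv_lidar * rng.standard_normal(n))
stage = stage    * (1 + cv_sat   * rng.standard_normal(n))

# Traditional chain: field error, then a single satellite model error.
direct = agb_true * (1 + cv_field * rng.standard_normal(n))
direct = direct   * (1 + cv_sat   * rng.standard_normal(n))

print(f"two-stage CV:    {stage.std() / stage.mean():.2f}")
print(f"single-stage CV: {direct.std() / direct.mean():.2f}")
```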
Estimation of shoreline position and change using airborne topographic lidar data
Stockdon, H.F.; Sallenger, A.H.; List, J.H.; Holman, R.A.
2002-01-01
A method has been developed for estimating shoreline position from airborne scanning laser data. This technique allows rapid estimation of objective, GPS-based shoreline positions over hundreds of kilometers of coast, essential for the assessment of large-scale coastal behavior. Shoreline position, defined as the cross-shore position of a vertical shoreline datum, is found by fitting a function to cross-shore profiles of laser altimetry data located in a vertical range around the datum and then evaluating the function at the specified datum. Error bars on horizontal position are directly calculated as the 95% confidence interval on the mean value based on the Student's t distribution of the errors of the regression. The technique was tested using lidar data collected with NASA's Airborne Topographic Mapper (ATM) in September 1997 on the Outer Banks of North Carolina. Estimated lidar-based shoreline position was compared to shoreline position as measured by a ground-based GPS vehicle survey system. The two methods agreed closely with a root mean square difference of 2.9 m. The mean 95% confidence interval for shoreline position was ±1.4 m. The technique has been applied to a study of shoreline change on Assateague Island, Maryland/Virginia, where three ATM data sets were used to assess the statistics of large-scale shoreline change caused by a major 'northeaster' winter storm. The accuracy of both the lidar system and the technique described provides measures of shoreline position and change that are ideal for studying storm-scale variability over large spatial scales.
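A bare-bones version of the profile-fitting step might look as follows: select laser returns in a vertical band around the datum, fit a linear cross-shore profile, evaluate it at the datum, and attach a t-based confidence interval. The error propagation here is simplified relative to the paper's regression treatment, and all numbers are synthetic:

```python
import numpy as np
from scipy import stats

def shoreline_position(x, z, z_datum, half_window=0.5, conf=0.95):
    """Fit z = a*x + b to lidar points within a vertical band around the
    datum, then evaluate the cross-shore position where z = z_datum.
    Returns (x_shoreline, half-width of the confidence interval)."""
    m = np.abs(z - z_datum) <= half_window
    x_w, z_w = x[m], z[m]
    a, b = np.polyfit(x_w, z_w, 1)
    x_sl = (z_datum - b) / a
    # CI from the t distribution of the regression residuals, mapped
    # through the slope (a simplified error propagation).
    resid = z_w - (a * x_w + b)
    se = resid.std(ddof=2) / (abs(a) * np.sqrt(len(x_w)))
    t = stats.t.ppf(0.5 + conf / 2, df=len(x_w) - 2)
    return x_sl, t * se

# Hypothetical cross-shore profile: a plane beach plus noise.
rng = np.random.default_rng(4)
x = np.linspace(0, 100, 400)
z = 0.05 * x - 2.0 + rng.normal(scale=0.15, size=x.size)
print(shoreline_position(x, z, z_datum=0.0))   # approx. (40.0, ~1 m)
```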
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
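The computational point, that regional aggregates require joint (spatially correlated) simulation rather than independent per-pixel draws, can be seen in a toy example; the covariance model and prevalence values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D "region" of pixels with an exponential spatial covariance.
n_pix = 200
coords = np.arange(n_pix, dtype=float)
mean_prev = np.full(n_pix, 0.2)           # posterior mean prevalence per pixel
cov = 0.05**2 * np.exp(-np.abs(coords[:, None] - coords[None, :]) / 30.0)

n_draws = 2000
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_pix))
joint = mean_prev + rng.standard_normal((n_draws, n_pix)) @ L.T
indep = mean_prev + rng.standard_normal((n_draws, n_pix)) * np.sqrt(np.diag(cov))

# Correlated errors barely average out; independent errors mostly do.
print(f"sd of regional mean, joint simulation:    {joint.mean(axis=1).std():.4f}")
print(f"sd of regional mean, independent pixels:  {indep.mean(axis=1).std():.4f}")
```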
Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience
NASA Technical Reports Server (NTRS)
Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander
2018-01-01
This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.
Fan-out Estimation in Spin-based Quantum Computer Scale-up.
Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R
2017-10-17
Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb-confined (e.g. donor-based spin qubits) or electrostatically confined (e.g. quantum-dot-based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10²-10⁵ physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher-dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.
Shear strength of clay and silt embankments.
DOT National Transportation Integrated Search
2009-09-01
Highway embankment is one of the most common large-scale geotechnical facilities constructed in Ohio. In the past, the design of these embankments was largely based on soil shear strength properties that had been estimated from previously published e...
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
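For orientation, the CLIME column subproblem, minimize ||β||₁ subject to ||Σ̂β − e_j||∞ ≤ λ, is itself a small linear program. The sketch below uses scipy's general-purpose LP solver rather than the package's parametric simplex, so it illustrates the formulation, not fastclime's algorithm or performance:

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """Solve min ||b||_1  s.t.  ||S b - e_j||_inf <= lam  as an LP.

    Split b = u - v with u, v >= 0; decision vector x = [u, v]."""
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    c = np.ones(2 * p)                 # objective: sum(u) + sum(v) = ||b||_1
    A = np.hstack([S, -S])             # so that S b = A x
    A_ub = np.vstack([A, -A])          # S b - e <= lam  and  e - S b <= lam
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p),
                  method="highs")
    x = res.x
    return x[:p] - x[p:]

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 20))
S = np.cov(X, rowvar=False)
Omega = np.column_stack([clime_column(S, j, lam=0.2) for j in range(S.shape[0])])
print(np.round(Omega[:3, :3], 2))      # should be near the identity here
```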
Progress and limitations on quantifying nutrient and carbon loading to coastal waters
NASA Astrophysics Data System (ADS)
Stets, E.; Oelsner, G. P.; Stackpoole, S. M.
2017-12-01
Riverine exports of nutrients and carbon to estuarine and coastal waters are important determinants of coastal ecosystem health and provide necessary insight into global biogeochemical cycles. Quantification of coastal solute loads typically relies upon modeling based on observations of concentration and discharge from selected rivers draining to the coast. Most large-scale river export models require unidirectional flow and thus are referenced to monitoring locations at the head of tide, which can be located far inland. As a result, the contributions of the coastal plain, tidal wetlands, and concentrated coastal development are often poorly represented in regional and continental-scale estimates of solute delivery to coastal waters. However, site-specific studies have found that these areas are disproportionately active in terms of nutrient and carbon export. Modeling efforts to upscale fluxes from these areas, while not common, also suggest an outsized importance for coastal flux estimates. This presentation will focus on illustrating how the under-representation of near-shore environments affects large-scale coastal flux estimates in the context of recent regional and continental-scale assessments. Alternative approaches to capturing the influence of near-coastal terrestrial inputs, including recent data aggregation efforts and modeling approaches, will be discussed.
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient in a well-equipped toolbox for an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated at large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
Daolan Zheng; Linda S. Heath; Mark J. Ducey
2008-01-01
We combined satellite (Landsat 7 and Moderate Resolution Imaging Spectrometer) and U.S. Department of Agriculture forest inventory and analysis (FIA) data to estimate forest aboveground biomass (AGB) across New England, USA. This is practical for large-scale carbon studies and may reduce uncertainty of AGB estimates. We estimate that total regional forest AGB was 1,867...
Estimating migratory game-bird productivity by integrating age ratio and banding data
Zimmerman, G.S.; Link, W.A.; Conroy, M.J.; Sauer, J.R.; Richkus, K.D.; Boomer, G. Scott
2010-01-01
Implications: Several national and international management strategies for migratory game birds in North America rely on measures of productivity from harvest survey parts collections, without a justification of the estimator or providing estimates of precision. We derive an estimator of productivity with realistic measures of uncertainty that can be directly incorporated into management plans or ecological studies across large spatial scales.
Stream Flow Prediction by Remote Sensing and Genetic Programming
NASA Technical Reports Server (NTRS)
Chang, Ni-Bin
2009-01-01
A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.
ERIC Educational Resources Information Center
Andrich, David; Marais, Ida; Humphry, Stephen Mark
2016-01-01
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Modeling grain-size dependent bias in estimating forest area: a regional application
Daolan Zheng; Linda S. Heath; Mark J. Ducey
2008-01-01
A better understanding of scaling-up effects on estimating important landscape characteristics (e.g. forest percentage) is critical for improving ecological applications over large areas. This study illustrated effects of changing grain sizes on regional forest estimates in Minnesota, Wisconsin, and Michigan of the USA using 30-m land-cover maps (1992 and 2001)...
NASA Astrophysics Data System (ADS)
Ray, R. K.; Syed, T. H.; Saha, Dipankar; Sarkar, B. C.; Patre, A. K.
2017-12-01
Extracted groundwater, 90% of which is used for irrigated agriculture, is central to the socio-economic development of India. A lack of regulation, or of implementation of regulations, alongside unrecorded extraction, often leads to overexploitation of large-scale common-pool resources like groundwater. Inevitably, management of groundwater extraction (draft) for irrigation is critical for the sustainability of aquifers and of society at large. However, existing assessments of groundwater draft, which are mostly available at large spatial scales, are inadequate for managing groundwater resources that are primarily exploited by stakeholders at much finer scales. This study presents an estimate, projection and analysis of fine-scale groundwater draft in the Seonath-Kharun interfluve of central India. Using field surveys of instantaneous discharge from irrigation wells and boreholes, annual groundwater draft for irrigation in this area is estimated to be 212 × 10⁶ m³, most of which (89%) is withdrawn during the non-monsoon season. However, the density of wells/boreholes, and the consequent extraction of groundwater, is controlled by the existing hydrogeological conditions. Based on trends in the number of abstraction structures (1982-2011), groundwater draft for the year 2020 is projected to be approximately 307 × 10⁶ m³; hence, groundwater draft for irrigation in the study area is predicted to increase by ∼44% within a span of 8 years. Central to the work presented here is the approach for estimation and prediction of groundwater draft at finer scales, which can be extended to critical groundwater zones of the country.
Lim, Chun Yi; Law, Mary; Khetani, Mary; Rosenbaum, Peter; Pollock, Nancy
2018-08-01
The aim was to estimate the psychometric properties of a culturally adapted version of the Young Children's Participation and Environment Measure (YC-PEM) for use among Singaporean families, in a prospective cohort study. Caregivers of 151 Singaporean children with (n = 83) and without (n = 68) developmental disabilities, between 0 and 7 years, completed the YC-PEM (Singapore) questionnaire with 3 participation scales (frequency, involvement, and change desired) and 1 environment scale for three settings: home, childcare/preschool, and community. Setting-specific estimates of internal consistency, test-retest reliability, and construct validity were obtained. Internal consistency estimates varied from .59 to .92 for the participation scales and .73 to .79 for the environment scale. Test-retest reliability estimates from the YC-PEM conducted on two occasions, 2-3 weeks apart, varied from .39 to .89 for the participation scales and from .65 to .80 for the environment scale. Moderate to large differences were found in participation and perceived environmental support between children with and without a disability. The YC-PEM (Singapore) scales have adequate psychometric properties, except for low internal consistency for the childcare/preschool participation frequency scale and low test-retest reliability for the home participation frequency scale. The YC-PEM (Singapore) may be used for population-level studies involving young children with and without developmental disabilities.
Poland, Jesse A; Nelson, Rebecca J
2011-02-01
The agronomic importance of developing durably resistant cultivars has led to substantial research in the field of quantitative disease resistance (QDR) and, in particular, mapping quantitative trait loci (QTL) for disease resistance. The assessment of QDR is typically conducted by visual estimation of disease severity, which raises concern over the accuracy and precision of visual estimates. Although previous studies have examined the factors affecting the accuracy and precision of visual disease assessment in relation to the true value of disease severity, the impact of this variability on the identification of disease resistance QTL has not been assessed. In this study, the effects of rater variability and rating scales on mapping QTL for northern leaf blight resistance in maize were evaluated in a recombinant inbred line population grown under field conditions. The population of 191 lines was evaluated by 22 different raters using a direct percentage estimate, a 0-to-9 ordinal rating scale, or both. It was found that more experienced raters had higher precision and that using a direct percentage estimation of diseased leaf area produced higher precision than using an ordinal scale. QTL mapping was then conducted using the disease estimates from each rater using stepwise general linear model selection (GLM) and inclusive composite interval mapping (ICIM). For GLM, the same QTL were largely found across raters, though some QTL were only identified by a subset of raters. The magnitudes of estimated allele effects at identified QTL varied drastically, sometimes by as much as threefold. ICIM produced highly consistent results across raters and for the different rating scales in identifying the location of QTL. We conclude that, despite variability between raters, the identification of QTL was largely consistent among raters, particularly when using ICIM. However, care should be taken in estimating QTL allele effects, because this was highly variable and rater dependent.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another 'equivalent' sample and setup). The stochastic nature can arise due to multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
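The multilevel Monte Carlo idea at the heart of the method can be shown in a few lines: write E[Q] as a telescoping sum of level corrections and spend most samples on the cheap coarse levels. The toy quantity of interest below (a quadrature with a random parameter) merely stands in for an expensive pore-scale solve:

```python
import numpy as np

rng = np.random.default_rng(7)

def Q_level(theta, level):
    """Level-l approximation of the quantity of interest: a midpoint-rule
    integral of a theta-dependent function on 2**(level+2) cells."""
    n = 2 ** (level + 2)
    x = (np.arange(n) + 0.5) / n
    return np.sin(np.pi * theta * x).mean()

def mlmc_estimate(n_samples_per_level):
    """Telescoping MLMC mean: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    with the SAME random theta used for both terms of each correction."""
    total = 0.0
    for level, n in enumerate(n_samples_per_level):
        thetas = rng.uniform(0.5, 1.5, size=n)
        fine = np.array([Q_level(t, level) for t in thetas])
        if level == 0:
            total += fine.mean()
        else:
            coarse = np.array([Q_level(t, level - 1) for t in thetas])
            total += (fine - coarse).mean()   # correction has small variance
    return total

# Many coarse samples, few fine ones: the usual MLMC cost allocation.
print(mlmc_estimate([4000, 1000, 250, 60]))
```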
NASA Astrophysics Data System (ADS)
Giese, M.; Reimann, T.; Bailly-Comte, V.; Maréchal, J.-C.; Sauter, M.; Geyer, T.
2018-03-01
Due to the duality in terms of (1) the groundwater flow field and (2) the discharge conditions, flow patterns of karst aquifer systems are complex. Estimated aquifer parameters may differ by several orders of magnitude from the local (borehole) to the regional (catchment) scale because of the large contrast in hydraulic parameters between matrix and conduit, and their heterogeneity and anisotropy. One approach to dealing with the scale-effect problem in the estimation of hydraulic parameters of karst aquifers is the application of large-scale experiments, such as long-term high-abstraction conduit pumping tests, which stimulate measurable groundwater drawdown in both the karst conduit system and the fractured matrix. The numerical discrete conduit-continuum modeling approach MODFLOW-2005 Conduit Flow Process Mode 1 (CFPM1) is employed to simulate laminar and nonlaminar conduit flow, induced by large-scale experiments, in combination with Darcian matrix flow. Effects of large-scale experiments were simulated for idealized settings. Subsequently, diagnostic plots and analyses of different fluxes are applied to interpret differences in the simulated conduit drawdown and general flow patterns. The main focus is on the question of the extent to which different conduit flow regimes affect the drawdown in conduit and matrix, depending on the hydraulic properties of the conduit system, i.e., conduit diameter and relative roughness. In this context, CFPM1 is applied to investigate the importance of considering turbulent conditions for the simulation of karst conduit flow. This work quantifies the relative error that results from assuming laminar conduit flow in the interpretation of a synthetic large-scale pumping test in karst.
NASA Astrophysics Data System (ADS)
Price, Aaron; Lee, H.
2010-01-01
Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitude. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students' overall performance patterns on the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude, and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.
Measures of large-scale structure in the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1991-01-01
Variations of the counts-in-cells with cell size are used here to define two statistical measures of large-scale clustering in three 6 deg slices of the CfA redshift survey. A percolation criterion is used to estimate the filling factor f, which measures the fraction of the total volume in the survey occupied by the large-scale structures. For the full 18 deg slice of the CfA redshift survey, f ≈ 0.25 ± 0.05. After removing groups with more than five members from two of the slices, variations of the counts in occupied cells with cell size have a power-law behavior with a slope β ≈ 2.2 on scales from 1-10 h⁻¹ Mpc. Application of both this statistic and the percolation analysis to simulations suggests that a network of two-dimensional structures is a better description of the geometry of the clustering in the CfA slices than a network of one-dimensional structures. Counts-in-cells are also used to estimate the average galaxy surface density in sheets like the Great Wall, at about 0.3 h² galaxies Mpc⁻².
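A minimal counts-in-cells computation, on synthetic clustered points rather than CfA galaxies, showing how the filling factor and occupied-cell counts vary with cell size:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical 2-D "slice": points clumped around random centers.
centers = rng.uniform(0, 100, size=(60, 2))
pts = (centers[rng.integers(0, 60, 20_000)]
       + rng.normal(scale=1.5, size=(20_000, 2)))

def counts_in_cells(points, box=100.0, cell=5.0):
    """Histogram points onto a grid of the given cell size; return counts."""
    nbins = int(box / cell)
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=nbins, range=[[0, box], [0, box]])
    return H.ravel()

for cell in (2.0, 5.0, 10.0, 20.0):
    c = counts_in_cells(pts, cell=cell)
    occupied = c[c > 0]
    print(f"cell={cell:5.1f}  filling factor={occupied.size / c.size:.2f}  "
          f"mean occupied count={occupied.mean():8.1f}")
```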
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of laser stripe center extraction, based on image evaluation of Gaussian-fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments on a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
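The Gaussian-fitting step can be sketched directly: fit a Gaussian to a column's intensity profile to obtain a sub-pixel center, and use the fit residual as a crude quality score. The scoring rule here is an assumption for illustration, not the paper's structural-similarity measure:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def stripe_center(column_intensity):
    """Sub-pixel laser stripe center of one image column via a Gaussian fit.
    Returns (center, residual-based quality score in [0, 1])."""
    x = np.arange(column_intensity.size, dtype=float)
    p0 = [column_intensity.max(), x[column_intensity.argmax()], 2.0,
          column_intensity.min()]
    popt, _ = curve_fit(gaussian, x, column_intensity, p0=p0, maxfev=5000)
    resid = column_intensity - gaussian(x, *popt)
    score = 1.0 - resid.std() / (column_intensity.std() + 1e-12)
    return popt[1], score

# Synthetic stripe profile with noise; true center at pixel 23.7.
rng = np.random.default_rng(9)
x = np.arange(48, dtype=float)
profile = gaussian(x, 200.0, 23.7, 2.5, 10.0) + rng.normal(scale=3.0, size=x.size)
print(stripe_center(profile))
```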
Cosmic string with a light massive neutrino
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Stebbins, Albert
1992-01-01
We have estimated the power spectra of density fluctuations produced by cosmic strings with neutrino hot dark matter (HDM). Normalizing at 8 h⁻¹ Mpc, we find that the spectrum has more power on small scales than HDM + inflation, less than cold dark matter (CDM) + inflation, and significantly less than CDM + strings. With HDM, large wakes give a significant contribution to the power on the galaxy scale and may give rise to large sheets of galaxies.
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high-spatial-resolution ground data sets for validation of satellite products. Recent large-scale field experiments, including FIFE, HAPEX-Sahel and BOREAS, provide data sets with large spatial coverage and long time coverage. It is the objective of this paper to characterize the difference between satellite estimates and ground observations. This study, and others along similar lines, will help us utilize satellite-retrieved data in large-scale modeling studies.
NASA Astrophysics Data System (ADS)
Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory resources and the very slow disk transfer rate. In this paper, we proposed a stream tiling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process was broken. Then, we realized a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulated an identical copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, the experiments demonstrated that our stream tiling estimation can efficiently alleviate the heavy pressure from I/O-bound work, and that the measured speedups after optimization greatly outperform directly parallelized versions on shared-memory systems with multi-core processors.
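A simplified skeleton of the streaming design: a bounded queue decouples the I/O stage from several identical computing units. The tile size, cell area and the slope-based area formula below are placeholders, and the paper's topological tile expansions are omitted:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

import numpy as np

TILE = 512          # tile edge length in cells (assumed)
CELL_AREA = 100.0   # planimetric area of one cell, m^2 (assumed)

def read_tiles(raster, q, n_workers):
    """I/O stage: stream tiles into a bounded queue (backpressure keeps
    memory flat even when the reader outpaces the computing units)."""
    for i in range(0, raster.shape[0], TILE):
        for j in range(0, raster.shape[1], TILE):
            q.put(raster[i:i + TILE, j:j + TILE])
    for _ in range(n_workers):
        q.put(None)                       # one stop sentinel per computing unit

def tile_area(tile):
    """Computing unit kernel: a stand-in surface-area estimate that scales
    planimetric area by a slope factor from finite differences."""
    gy, gx = np.gradient(tile.astype(float))
    return float((CELL_AREA * np.sqrt(1.0 + gx**2 + gy**2)).sum())

def stream_area(raster, n_workers=4):
    q = queue.Queue(maxsize=2 * n_workers)
    threading.Thread(target=read_tiles, args=(raster, q, n_workers),
                     daemon=True).start()

    def worker():
        total = 0.0
        while (tile := q.get()) is not None:
            total += tile_area(tile)
        return total

    with ThreadPoolExecutor(n_workers) as ex:
        futures = [ex.submit(worker) for _ in range(n_workers)]
        return sum(f.result() for f in futures)

dem = np.random.default_rng(10).normal(scale=2.0, size=(2048, 2048)).cumsum(axis=0)
print(f"estimated surface area: {stream_area(dem):.3e} m^2")
```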
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Subramanian, Kandaswamy
2013-02-01
The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M1,c = (k1/k2)², where k1 and k2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos based on the otherwise rapid turbulent diffusion of large-scale fields require re-thinking, since only the large-scale non-helical field is so diffused in a closed system. Boundary terms could however provide potential pathways for rapid change of the large-scale helical field. (2) Since M1,c ≪ 1 for k1 ≪ k2, the presence of long-lived ordered large-scale helical fields, as in extragalactic jets, does not guarantee that the magnetic field dominates the kinetic energy.
NASA Astrophysics Data System (ADS)
Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.
2015-12-01
The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid, to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods: estimating a large covariance matrix for understanding correlation structure, the inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting differentially expressed genes, proteins and genetic markers for complex diseases, and high-dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distributions, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
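As one concrete example of taming the overfitting of large sample covariance matrices, linear shrinkage (here the Ledoit-Wolf estimator, one standard option in this literature, not necessarily the paper's specific choice) can be compared with the raw empirical covariance in the n ≪ p regime:

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.default_rng(11)
p, n = 500, 100                      # many genes, few samples: n << p
X = rng.standard_normal((n, p))      # true covariance is the identity

S_emp = empirical_covariance(X)
S_lw = LedoitWolf().fit(X).covariance_

I = np.eye(p)
print(f"empirical   ||S - I||_F = {np.linalg.norm(S_emp - I):.1f}")
print(f"Ledoit-Wolf ||S - I||_F = {np.linalg.norm(S_lw - I):.1f}")
```

Shrinkage pulls the noisy sample eigenvalues toward a structured target, which is why it dominates the empirical estimator when samples are scarce relative to dimensions.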
NASA Astrophysics Data System (ADS)
Tuttle, S. E.; Salvucci, G.
2012-12-01
Soil moisture influences many hydrological processes in the water and energy cycles, such as runoff generation, groundwater recharge, and evapotranspiration, and thus is important for climate modeling, water resources management, agriculture, and civil engineering. Large-scale estimates of soil moisture are produced almost exclusively from remote sensing, while validation of remotely sensed soil moisture has relied heavily on ground truthing, which is at an inherently smaller scale. Here we present a complementary method to determine the information content in different soil moisture products using only large-scale precipitation data (i.e. without modeling). This study builds on the work of Salvucci [2001], Saleem and Salvucci [2002], and Sun et al. [2011], in which precipitation was conditionally averaged according to soil moisture level, resulting in moisture-outflow curves that estimate the dependence of drainage, runoff, and evapotranspiration on soil moisture (i.e. sigmoidal relations that reflect stressed evapotranspiration for dry soils, roughly constant flux equal to potential evaporation minus capillary rise for moderately dry soils, and rapid drainage for very wet soils). We postulate that high quality satellite estimates of soil moisture, using large-scale precipitation data, will yield similar sigmoidal moisture-outflow curves to those that have been observed at field sites, while poor quality estimates will yield flatter, less informative curves that explain less of the precipitation variability. Following this logic, gridded ¼ degree NLDAS precipitation data were compared to three AMSR-E derived soil moisture products (VUA-NASA, or LPRM [Owe et al., 2001], NSIDC [Njoku et al., 2003], and NSIDC-LSP [Jones & Kimball, 2011]) for a period of nine years (2001-2010) across the contiguous United States. Gaps in the daily soil moisture data were filled using a multiple regression model reliant on past and future soil moisture and precipitation, and soil moisture was then converted to a ranked wetness index, in order to reconcile the wide range and magnitude of the soil moisture products. Generalized linear models were employed to fit a polynomial model to precipitation, given wetness index. Various measures of fit (e.g. log likelihood) were used to judge the amount of information in each soil moisture product, as indicated by the amount of precipitation variability explained by the fitted model. Using these methods, regional patterns appear in soil moisture product performance.
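The conditioning step described above can be reduced to a few lines: rank-transform soil moisture into a wetness index and conditionally average precipitation per bin. The synthetic series below (gamma-distributed rain driving an exponentially decaying soil store) is only a stand-in for the NLDAS/AMSR-E data, and plain bin means replace the paper's polynomial GLM fit:

```python
import numpy as np

def moisture_outflow_curve(soil_moisture, precip, n_bins=20):
    """Conditionally average precipitation by soil-moisture rank bin."""
    rank = np.argsort(np.argsort(soil_moisture)) / soil_moisture.size  # wetness index
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.clip(np.digitize(rank, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([precip[which == b].mean() for b in range(n_bins)])
    return centers, means

# Hypothetical daily series: wet soils tend to follow rainy periods.
rng = np.random.default_rng(12)
precip = rng.gamma(shape=0.3, scale=8.0, size=3650)
soil = np.convolve(precip, np.exp(-np.arange(30) / 7.0), mode="full")[:precip.size]
centers, means = moisture_outflow_curve(soil, precip)
print(np.round(means, 2))   # mean precipitation per wetness-index bin
```

An informative soil moisture product should yield a clearly structured curve; a noisy product flattens it, which is the proposed quality metric.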
Joint Estimation of the Epoch of Reionization Power Spectrum and Foregrounds
NASA Astrophysics Data System (ADS)
Sims, Peter; Pober, Jonathan
2018-01-01
Bright astrophysical foregrounds present a significant impediment to the detection of redshifted 21-cm emission from the Epoch of Reionization on large spatial scales. In this talk I present a framework for the joint modeling of the power spectral contamination by astrophysical foregrounds and the power spectrum of the Epoch of Reionization. I show how informative priors on the power spectral contamination by astrophysical foregrounds at high redshifts, where emission from both the Epoch of Reionization and its foregrounds is present in the data, can be obtained through analysis of foreground-only emission at lower redshifts. Finally, I demonstrate how, by using such informative foreground priors, joint modeling can be employed to mitigate bias in estimates of the power spectrum of the Epoch of Reionization signal and, in particular, to enable recovery of more robust power spectral estimates on large spatial scales.
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar offers high resolution, strong anti-interference ability and good covertness, making it a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's minimum detectable power. Secondly, a large-scale LRCS measurement system is designed according to the demonstration and estimation. The system mainly consists of laser shaping, a beam emitting device, a laser echo receiving device and an integrated control device. Finally, using the designed LRCS measurement system, the scattering cross section of the target is simulated and tested. The simulation results are essentially the same as the test results, demonstrating the correctness of the system.
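The echo-power estimation step can be sketched with a simple monostatic link budget, treating the LRCS as re-radiating the intercepted power into 4π steradians; every numerical value below is an assumed placeholder, not a parameter of the system described above:

```python
import numpy as np

# Assumed parameters for a monostatic pulsed system (all illustrative):
P_t = 2.0e6        # peak transmit power, W
theta = 1.0e-3     # full transmit beam divergence, rad
D_r = 0.2          # receiver aperture diameter, m
eta = 0.5          # combined optics efficiency
T_atm = 0.8        # one-way atmospheric transmission
sigma = 1.0        # target laser radar cross section, m^2
R = 10.0e3         # range, m
P_min = 1.0e-9     # detector's minimum detectable power, W (assumed)

# Irradiance on the target, then isotropic re-radiation over 4*pi sr
# (the usual LRCS convention), collected by the receiver aperture.
footprint = np.pi * (theta * R / 2.0) ** 2
E_target = P_t * eta * T_atm / footprint
A_r = np.pi * (D_r / 2.0) ** 2
P_echo = E_target * sigma / (4.0 * np.pi * R ** 2) * A_r * T_atm

print(f"echo power {P_echo:.3e} W, "
      f"margin {P_echo / P_min:.0f}x over the detector floor")
```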
A new energy transfer model for turbulent free shear flow
NASA Technical Reports Server (NTRS)
Liou, William W.-W.
1992-01-01
A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer. Reasonably good agreement is achieved. Finally, it is shown, using the Liapunov function method, that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator of flow self-similarity.
REVIEWS OF TOPICAL PROBLEMS: The large-scale structure of the universe
NASA Astrophysics Data System (ADS)
Shandarin, S. F.; Doroshkevich, A. G.; Zel'dovich, Ya B.
1983-01-01
A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe, a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.
M. Zachariah Peery; Benjamin H. Becker; Steven R. Beissinger
2007-01-01
The ratio of hatch-year (HY) to after-hatch-year (AHY) individuals (HY:AHY ratio) can be a valuable metric for estimating avian productivity because it does not require monitoring individual breeding sites and can often be estimated across large geographic and temporal scales. However, rigorous estimation of age ratios requires that both young and adult age classes are...
ESTIMATING REGIONAL SPECIES RICHNESS USING A LIMITED NUMBER OF SURVEY UNITS
The accurate and precise estimation of species richness at large spatial scales using a limited number of survey units is of great significance for ecology and biodiversity conservation. We used the distribution data of native fish and resident breeding bird species compiled for ...
Sensitivity of CEAP cropland simulations to the parameterization of the APEX model
USDA-ARS?s Scientific Manuscript database
For large scale applications like the U.S. National Scale Conservation Effects Assessment Project (CEAP), soil hydraulic characteristics data are not readily available and therefore need to be estimated. Field soil water properties are commonly approximated using laboratory soil water retention meas...
Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki
2018-01-01
There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS
Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.
2014-01-01
Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people, and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtainable in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry's ability to determine which individuals did not receive a significant exposure, so they can be removed from the acute response system. Second, biodosimetry's capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry's ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of obtaining information, and accuracy) to determine which are most useful in meeting patients' needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of these properties for meeting the needs of the different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671
Probing Inflation Using Galaxy Clustering On Ultra-Large Scales
NASA Astrophysics Data System (ADS)
Dalal, Roohi; de Putter, Roland; Dore, Olivier
2018-01-01
A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -(5/12)(n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an optimal quadratic estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
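For context, the simplest possible gridded power spectrum estimator, a bare FFT with spherical binning, without the FKP weights, survey window or shot-noise corrections that a real f_NL^loc analysis needs, can be written as:

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=20):
    """Spherically averaged P(k) of an overdensity field on a cubic grid."""
    n = delta.shape[0]
    dk = np.fft.rfftn(delta) * (box_size / n) ** 3   # discrete -> continuum FT
    power = np.abs(dk) ** 2 / box_size ** 3          # P(k) = |delta_k|^2 / V
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
                   + kz[None, None, :]**2)
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, n_bins + 1)
    which = np.digitize(kmag, edges) - 1
    Pk = [power[(which == b) & (kmag > 0)].mean() for b in range(n_bins)]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(Pk)

rng = np.random.default_rng(13)
delta = rng.standard_normal((64, 64, 64))               # white-noise stand-in field
k_centers, Pk = power_spectrum(delta, box_size=1000.0)  # box in Mpc/h, assumed
print(Pk[:5])   # roughly flat for white noise, as expected
```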
Sakurai, Hidehiro; Masukawa, Hajime; Kitashima, Masaharu; Inoue, Kazuhito
2010-01-01
In order to decrease CO₂ emissions from the burning of fossil fuels, the development of new renewable energy sources sufficiently large in quantity is essential. To meet this need, we propose large-scale H₂ production on the sea surface utilizing cyanobacteria. Although many of the relevant technologies are in an early stage of development, this chapter briefly examines the feasibility of such H₂ production, in order to illustrate that under certain conditions large-scale photobiological H₂ production can be viable. Assuming that solar energy is converted to H₂ at 1.2% efficiency, the future cost of H₂ can be estimated to be about 11 (pipelines) and 26.4 (compression and marine transportation) cents kWh⁻¹, respectively.
Monitoring survival rates of Swainson's Thrush Catharus ustulatus at multiple spatial scales
Rosenberg, D.K.; DeSante, D.F.; McKelvey, K.S.; Hines, J.E.
1999-01-01
We estimated survival rates of Swainson's Thrush, a common, neotropical, migratory landbird, at multiple spatial scales, using data collected in the western USA from the Monitoring Avian Productivity and Survivorship Programme. We evaluated statistical power to detect spatially heterogeneous survival rates and exponentially declining survival rates among spatial scales with simulated populations parameterized from results of the Swainson's Thrush analyses. Models describing survival rates as constant across large spatial scales did not fit the data. The model we chose as most appropriate to describe survival rates of Swainson's Thrush allowed survival rates to vary among Physiographic Provinces, included a separate parameter for the probability that a newly captured bird is a resident individual in the study population, and constrained capture probability to be constant across all stations. Estimated annual survival rates under this model varied from 0.42 to 0.75 among Provinces. The coefficient of variation of survival estimates ranged from 5.8 to 20% among Physiographic Provinces. Statistical power to detect exponentially declining trends was fairly low for small spatial scales, although large annual declines (3% of previous year's rate) were likely to be detected when monitoring was conducted for long periods of time (e.g. 20 years). Although our simulations and field results are based on only four years of data from a limited number and distribution of stations, it is likely that they illustrate genuine difficulties inherent to broadscale efforts to monitor survival rates of territorial landbirds. In particular, our results suggest that more attention needs to be paid to sampling schemes of monitoring programmes, particularly regarding the trade-off between precision and potential bias of parameter estimates at varying spatial scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board the National Aeronautics and Space Administration's (NASA) Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a modified regression tree approach. The predictive model was trained and validated using eddy flux NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE well (r = 0.73, p < 0.001). We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day interval in 2005 using spatially explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE as determined from measurements and the literature. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets over large areas.
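A hedged sketch of the upscaling workflow described above, using scikit-learn's RandomForestRegressor as a stand-in for the paper's modified regression tree, with random placeholder arrays in place of the MODIS/AmeriFlux predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature matrix: each row is one tower-site x 8-day period with
# MODIS-derived predictors (e.g., EVI, LST, land-cover code); y is tower NEE.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))                  # placeholder for 2000-2004 data
y_train = rng.normal(size=500)                  # placeholder NEE values

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Wall-to-wall" prediction: apply the fitted model to every 1 km cell's
# MODIS predictors for one 8-day interval.
X_grid = rng.random((10_000, 3))                # placeholder gridded predictors
nee_map = model.predict(X_grid)
```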
NASA Astrophysics Data System (ADS)
Moosdorf, N.; Langlotz, S. T.
2016-02-01
Submarine groundwater discharge (SGD) has been recognized as a relevant field of coastal research in recent years. Its implications at the local scale have been documented by an increasing number of studies of individual locations with SGD, and these local studies often emphasize its large variability. At the other end of the scale spectrum, global studies try to estimate SGD-related fluxes of, e.g., carbon (Cole et al., 2007) and nitrogen (Beusen et al., 2013). These studies naturally use a coarse resolution, too coarse to represent the aforementioned local variability of SGD (Moosdorf et al., 2015). A way to transfer information on the local variability of SGD to large-scale flux estimates is therefore needed. Here we discuss the upscaling of local studies based on the definition and typology of coastal catchments. Coastal catchments are those stretches of coast that do not drain into major rivers but directly into the sea. Their attributes, e.g. climate, topography, land cover, or lithology, can be used to extrapolate from the local scale to larger scales. We present first results of a typology, compare coastal catchment attributes to SGD estimates from field studies, and discuss upscaling as well as the associated uncertainties. This study aims at bridging the gap between the scales and enabling an improved representation of local-scale variability at continental to global scales. With this, it can contribute to a recent initiative to model large-scale SGD fluxes (NExT SGD). References: Beusen, A.H.W., Slomp, C.P., Bouwman, A.F., 2013. Global land-ocean linkage: direct inputs of nitrogen to coastal waters via submarine groundwater discharge. Environmental Research Letters, 8(3): 6. Cole, J.J., Prairie, Y.T., Caraco, N.F., McDowell, W.H., Tranvik, L.J., Striegl, R.G., Duarte, C.M., Kortelainen, P., Downing, J.A., Middelburg, J.J., Melack, J., 2007. Plumbing the global carbon cycle: Integrating inland waters into the terrestrial carbon budget. Ecosystems, 10(1): 171-184. Moosdorf, N., Stieglitz, T., Waska, H., Durr, H.H., Hartmann, J., 2015. Submarine groundwater discharge from tropical islands: a review. Grundwasser, 20(1): 53-67.
AdS and dS Entropy from String Junctions or The Function of Junction Conjunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, Eva M
Flux compactifications of string theory exhibiting the possibility of discretely tuning the cosmological constant to small values have been constructed. The highly tuned vacua in this discretuum have curvature radii which scale as large powers of the flux quantum numbers, exponential in the number of cycles in the compactification. By the arguments of Susskind/Witten (in the AdS case) and Gibbons/Hawking (in the dS case), we expect correspondingly large entropies associated with these vacua. If they are to provide a dual description of these vacua on their Coulomb branch, branes traded for the flux need to account for this entropy at the appropriate energy scale. In this note, we argue that simple string junctions and webs ending on the branes can account for this large entropy, obtaining a rough estimate for junction entropy that agrees with the existing rough estimates for the spacing of the discretuum. In particular, the brane entropy can account for the (A)dS entropy far away from string scale correspondence limits.
Bybee, Paul J; Lee, Andrew H; Lamm, Ellen-Thérèse
2006-03-01
Allosaurus is one of the most common Mesozoic theropod dinosaurs. We present a histological analysis to assess its growth strategy and ontogenetic limb bone scaling. Based on an ontogenetic series of humeral, ulnar, femoral, and tibial sections of fibrolamellar bone, we estimate the ages of the largest individuals in the sample to be between 13 and 19 years. Growth curve reconstruction suggests that maximum growth occurred at 15 years, when body mass increased by 148 kg/year. Based on larger bones of Allosaurus, we estimate an upper age limit of 22-28 years, which is similar to preliminary data for other large theropods. Both Model I and Model II regression analyses suggest that, relative to the length of the femur, the humerus, ulna, and tibia increase in length more slowly than isometry predicts. That pattern of limb scaling in Allosaurus is similar to those in other large theropods such as the tyrannosaurids. Phylogenetic optimization suggests that large theropods independently evolved reduced humeral, ulnar, and tibial lengths by a phyletic reduction in longitudinal growth relative to the femur.
Estimation of Soil Moisture from Optical and Thermal Remote Sensing: A Review
Zhang, Dianjun; Zhou, Guoqing
2016-01-01
As an important parameter in numerous environmental studies, soil moisture (SM) influences the exchange of water and energy at the interface between the land surface and atmosphere. Accurate estimation of the spatio-temporal variations of SM is critical for numerous large-scale terrestrial studies. Although microwave remote sensing provides many algorithms and data products for obtaining SM at large scales, such as SMOS and SMAP, most have low resolution and are not applicable at small catchment or field scales. Estimation of SM from optical and thermal remote sensing has been studied for many years and significant progress has been made. In contrast to previous reviews, this paper presents a new, comprehensive and systematic review of using optical and thermal remote sensing for estimating SM. The physical basis and status of the estimation methods are analyzed and summarized in detail, and the most important and latest advances in SM estimation using temporal information are presented. SM estimation from optical remote sensing mainly depends on the relationship between SM and the surface reflectance or a vegetation index, while the thermal infrared methods use the relationship between SM and the surface temperature or variations of surface temperature/vegetation index. These approaches often have complex derivation processes and many approximations. Therefore, combinations of optical and thermal infrared remotely sensed data can provide more valuable information for SM estimation. Moreover, the advantages and weaknesses of different approaches are compared, and applicable conditions as well as key issues in current SM estimation algorithms are discussed. Finally, key problems and suggested solutions are proposed for future research. PMID:27548168
Mapping the universe in three dimensions
Haynes, Martha P.
1996-01-01
The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
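A minimal sketch of the least squares solution (LSS) endmember derivation followed by unconstrained SMA unmixing, run here on synthetic data; the band count, endmember count, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_end = 6, 3                           # coarse-resolution bands; 3 endmembers
E_true = rng.random((n_end, n_bands))           # "unknown" endmember spectra

# Training samples with known abundances (e.g., from finer-resolution reference maps)
A = rng.dirichlet(np.ones(n_end), size=50)      # 50 samples, fractions sum to 1
R = A @ E_true + 0.01 * rng.normal(size=(50, n_bands))

# LSS step: recover endmember spectra from known abundances, solving A @ E = R
E_hat, *_ = np.linalg.lstsq(A, R, rcond=None)

# Unconstrained SMA: unmix a new pixel spectrum r by least squares on r = E.T @ f
pixel = np.array([0.5, 0.3, 0.2]) @ E_true
f_hat, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None)
print(f_hat)                                    # approximately [0.5, 0.3, 0.2]
```

The fully constrained variant would add non-negativity and sum-to-one constraints on f; the unconstrained solve above matches the option the study found favorable with small samples.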
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fosalba, Pablo; Dore, Olivier
2007-11-15
Cross correlation between the cosmic microwave background (CMB) and large-scale structure is a powerful probe of dark energy and gravity on the largest physical scales. We introduce a novel estimator, the CMB-velocity correlation, that has most of its power on large scales and that, at low redshift, delivers up to a factor of 2 higher signal-to-noise ratio than the recently detected CMB-dark matter density correlation expected from the integrated Sachs-Wolfe effect. We propose to use a combination of peculiar velocities measured from supernovae type Ia and kinetic Sunyaev-Zeldovich cluster surveys to reveal this signal and forecast dark energy constraints that can be achieved with future surveys. We stress that low redshift peculiar velocity measurements should be exploited with complementary deeper large-scale structure surveys for precision cosmology.
Cosmic homogeneity: a spectroscopic and model-independent measurement
NASA Astrophysics Data System (ADS)
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
NASA Astrophysics Data System (ADS)
Rune Karlsen, Stein; Anderson, Helen B.; van der Wal, René; Bremset Hansen, Brage
2018-02-01
Efforts to estimate plant productivity using satellite data can be frustrated by the presence of cloud cover. We developed a new method to overcome this problem, focussing on the high-arctic archipelago of Svalbard where extensive cloud cover during the growing season can prevent plant productivity from being estimated over large areas. We used a field-based time-series (2000-2009) of live aboveground vascular plant biomass data and a recently processed cloud-free MODIS-Normalised Difference Vegetation Index (NDVI) data set (2000-2014) to estimate, on a pixel-by-pixel basis, the onset of plant growth. We then summed NDVI values from onset of spring to the average time of peak NDVI to give an estimate of annual plant productivity. This remotely sensed productivity measure was then compared, at two different spatial scales, with the peak plant biomass field data. At both the local scale, surrounding the field data site, and the larger regional scale, our NDVI measure was found to predict plant biomass (adjusted R^2 = 0.51 and 0.44, respectively). The commonly used ‘maximum NDVI’ plant productivity index showed no relationship with plant biomass, likely due to some years having very few cloud-free images available during the peak plant growing season. Thus, we propose this new summed NDVI from onset of spring to time of peak NDVI as a proxy of large-scale plant productivity for regions such as the Arctic where climatic conditions restrict the availability of cloud-free images.
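A toy illustration of the proposed proxy: sum NDVI from the onset of growth to the time of peak NDVI for one pixel's seasonal series. The onset rule below is a simple assumed threshold, not the paper's calibrated pixel-by-pixel onset estimate:

```python
import numpy as np

def summed_ndvi_productivity(ndvi, onset_threshold=0.3):
    """Sum NDVI from (assumed) onset of growth to the time of peak NDVI."""
    ndvi = np.asarray(ndvi, dtype=float)
    peak = int(np.argmax(ndvi))                       # time of peak NDVI
    above = np.nonzero(ndvi[:peak + 1] >= onset_threshold)[0]
    if above.size == 0:
        return 0.0                                    # no growth detected
    onset = int(above[0])                             # first threshold crossing
    return float(ndvi[onset:peak + 1].sum())

season = [0.10, 0.15, 0.35, 0.55, 0.70, 0.65, 0.40]   # one growing season
print(summed_ndvi_productivity(season))               # 0.35 + 0.55 + 0.70 = 1.6
```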
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
The Impact of Missing Background Data on Subpopulation Estimation
ERIC Educational Resources Information Center
Rutkowski, Leslie
2011-01-01
Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…
To the horizon and beyond: Weak lensing of the CMB and binary inspirals into horizonless objects
NASA Astrophysics Data System (ADS)
Kesden, Michael
This thesis examines two predictions of general relativity: weak lensing and gravitational waves. The cosmic microwave background (CMB) is gravitationally lensed by the large-scale structure between the observer and the last-scattering surface. This weak lensing induces non-Gaussian correlations that can be used to construct estimators for the deflection field. The error and bias of these estimators are derived and used to analyze the viability of lensing reconstruction for future CMB experiments. Weak lensing also affects the one-point probability distribution function of the CMB. The skewness and kurtosis induced by lensing and the Sunyaev-Zel'dovich (SZ) effect are calculated as functions of the angular smoothing scale of the map. While these functions offer the advantage of easy computability, only the skewness from lensing-SZ correlations can potentially be detected, even in the limit of the largest amplitude fluctuations allowed by observation. Lensing estimators are also essential to constrain inflation, the favored explanation for large-scale isotropy and the origin of primordial perturbations. B-mode polarization is considered to be a "smoking-gun" signature of inflation, and lensing estimators can be used to recover primordial B-modes from lensing-induced contamination. The ability of future CMB experiments to constrain inflation is assessed as a function of survey size and instrumental sensitivity. A final application of lensing estimators is to constrain a possible cutoff in primordial density perturbations on near-horizon scales. The paucity of independent modes on such scales limits the statistical certainty of such a constraint. Measurements of the deflection field can be used to constrain at the 3σ level the existence of a cutoff large enough to account for current CMB observations. A final chapter of this thesis considers an independent topic: the gravitational-wave (GW) signature of a binary inspiral into a horizonless object. If the supermassive objects at galactic centers lack the horizons of traditional black holes, inspiraling objects could emit GWs after passing within their surfaces. The GWs produced by such an inspiral are calculated, revealing distinctive features potentially observable by future GW observatories.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
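A hedged sketch of a one-parameter multiplicative cascade of the kind discussed above; the mean-one log-normal generator used here is one common single-parameter choice, not necessarily the exact generator estimated from the radar data:

```python
import numpy as np

def lognormal_cascade(levels=6, sigma=0.4, branching=2, rng=None):
    """Simulate a 2-d multiplicative random cascade.

    sigma is the single cascade parameter; the abstract's large-scale
    dependence would make sigma a function of the spatial-average rain rate.
    """
    rng = rng or np.random.default_rng()
    field = np.ones((1, 1))
    for _ in range(levels):
        # subdivide each cell into branching x branching subcells
        field = np.kron(field, np.ones((branching, branching)))
        # i.i.d. mean-one log-normal weights: E[W] = 1 requires mu = -sigma^2/2
        W = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=field.shape)
        field = field * W
    return field

rain = lognormal_cascade()
print(rain.shape, rain.mean())   # 64x64 field with ensemble mean near 1
```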
A new framework to increase the efficiency of large-scale solar power plants.
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Kleissl, Jan P.
2015-11-01
A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. This framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements are observed in several scenarios.
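A minimal kriging sketch in the spirit of the framework above, using scikit-learn Gaussian process regression on made-up sensor coordinates and a placeholder signal; the kernel family, length scale, and noise level are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical setup: predict an irradiance-related quantity at unobserved
# plant locations from scattered sensor/satellite samples (coordinates in km).
rng = np.random.default_rng(2)
X = rng.uniform(0, 50, size=(40, 2))                     # sensor locations
y = np.sin(X[:, 0] / 10) + 0.05 * rng.normal(size=40)    # placeholder signal

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[25.0, 25.0]])
mean, std = gp.predict(X_new, return_std=True)           # prediction + uncertainty
print(mean, std)
```

The appeal of the kriging formulation for this problem is exactly the `std` output: it yields a statistical description of power output, not just a point forecast.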
Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente
2016-08-01
In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical-site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The number of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population parameters. © 2016 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Sankey, T.; Donald, J.; McVay, J.
2015-12-01
High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated to the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.
NASA Astrophysics Data System (ADS)
Garatuza-Payan, J.; Yepez, E. A.; Watts, C.; Rodriguez, J. C.; Valdez-Torres, L. C.; Robles-Morua, A.
2013-05-01
Water security can be defined as the reliable supply, in quantity and quality, of water to help sustain future populations and maintain ecosystem health and productivity. Water security is rapidly declining in many parts of the world due to population growth, drought, climate change, salinity, pollution, land use change, over-allocation, and over-utilization, among other issues. Governmental offices (such as the Comision Nacional del Agua in Mexico, CONAGUA) require and conduct studies to estimate reliable water balances at regional or continental scales in order to provide reasonable assessments of the amount of water that can be supplied (from surface or ground water sources) for all human needs while maintaining natural vegetation, on an operational basis and, more importantly, under disturbances such as droughts. Large-scale estimates of evapotranspiration (ET), a critical component of the water cycle, are needed for a better comprehension of the hydrological cycle at large scales; in most water balances ET is left as the residual. For operational purposes, such water-balance estimates cannot rely on direct ET measurements, which do not exist at these scales; they should be simple and require as little ground information as possible, since such information is often scarce or entirely absent. Given this limitation, the use of remotely sensed data to estimate ET can supplement the lack of ground information, particularly in remote regions. In this study, a simple method based on the Makkink equation is used to estimate ET for large areas at high spatial resolution (1 km). The Makkink model used here is forced with three remotely sensed datasets. First, the model uses solar radiation estimates obtained from the Geostationary Operational Environmental Satellite (GOES). Second, it uses an Enhanced Vegetation Index (EVI) obtained from the Moderate-resolution Imaging Spectroradiometer (MODIS), normalized to give an estimate of vegetation amount and land use, which was applied as a crop-factor-like coefficient for all vegetation types (including agricultural fields). Finally, the model uses air temperature and humidity, both extracted from the North American Land Data Assimilation System (NLDAS) database. ET estimates were then compared to ground-truth data from four sites where long-term eddy covariance (EC) measurements of ET were conducted. This approach was developed and applied in northern Mexico. Emphasis was placed on trying to minimize the large uncertainties that still remain in the temporal evolution and spatial repartition of ET. Results show good agreement with ground data (r^2 greater than 0.7 for daily ET estimates) from the four sites evaluated, covering different vegetation types, hence reducing the spatial uncertainties. Estimates of total annual ET were used in a water balance assessing groundwater availability for eleven aquifers in the state of Chihuahua. Annual ET in a four-year analysis period ranged from 200 to 280 mm/year, representing 63 to 83% of total annual precipitation, which reflects the importance of this component in the water balance. A GIS tool kit is under development to support decision makers at CONAGUA.
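A minimal implementation of the Makkink reference ET referred to above, with the Tetens form for the slope of the saturation vapour pressure curve; the constants are standard FAO-style values, and the EVI-based vegetation factor the study applies on top is omitted:

```python
import numpy as np

def makkink_et(rs_mj_m2_day, t_air_c, gamma=0.066, coeff=0.65):
    """Makkink reference ET (mm/day) from solar radiation and air temperature.

    rs_mj_m2_day : incoming solar radiation (MJ m-2 day-1), e.g. from GOES
    t_air_c      : air temperature (deg C), e.g. from NLDAS
    gamma        : psychrometric constant (kPa/degC)
    """
    lam = 2.45                                    # latent heat of vaporization, MJ/kg
    # saturation vapour pressure (kPa) and its slope (kPa/degC), Tetens form
    es = 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    return coeff * delta / (delta + gamma) * rs_mj_m2_day / lam

# Example: a clear, warm day (25 MJ m-2 day-1 at 30 degC)
print(round(makkink_et(25.0, 30.0), 2))           # ~5.2 mm/day
```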
NASA Astrophysics Data System (ADS)
Westberg, D. J.; Soja, A. J.; Tchebakova, N.; Parfenova, E. I.; Kukavskaya, E.; de Groot, B.; McRae, D.; Conard, S. G.; Stackhouse, P. W., Jr.
2012-12-01
Estimating the amount of biomass burned during fire events is challenging, particularly in remote and diverse regions like those of the Former Soviet Union (FSU). Historically, a blanket value of 25 tons of carbon per hectare (tC/ha) has typically been assumed; however, depending on the ecosystem and fire severity, biomass burning emissions can range from 2 to 75 tC/ha. Ecosystems in the FSU span from the tundra through the taiga to the forest-steppe, steppe, and deserts, and include the extensive West Siberian lowlands, permafrost-underlain forests, and agricultural lands. Ignoring this landscape disparity results in inaccurate emissions estimates and incorrect assumptions in the transport of these emissions. In this work, we present emissions based on a hybrid ecosystem map and explicit estimates of fuel that consider the depth of burning based on the Canadian Forest Fire Weather Index System. Specifically, the ecosystem map is a fusion of satellite-based data, a detailed ecosystem map, and Alexeyev and Birdsey carbon storage data, which is used to build carbon databases that include the forest overstory and understory, litter, peatlands, and soil organic material for the FSU. We provide a range of potential carbon consumption estimates for low- to high-severity fires across the FSU that can be used with fire weather indices to more accurately estimate fire emissions. These data can be incorporated at ecoregion and administrative territory scales and are optimized for use in large-scale chemical transport models. Additionally, paired with future climate scenarios and ecoregion cover, these carbon consumption data can be used to estimate potential emissions.
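A small worked example of why the consumption value matters: total emitted carbon scales linearly with the per-hectare consumption assumed, so the 2-75 tC/ha range spans more than an order of magnitude for the same burned area. The table values below are placeholders within that range, not the paper's database entries:

```python
# Illustrative ecosystem/severity lookup (placeholder values, tC/ha)
consumption_tc_per_ha = {
    ("steppe", "low"): 2.0,
    ("taiga", "moderate"): 25.0,      # the historical blanket assumption
    ("peatland", "high"): 75.0,
}

def fire_emissions_tc(area_ha, ecosystem, severity):
    """Carbon emitted = burned area x per-hectare carbon consumption."""
    return area_ha * consumption_tc_per_ha[(ecosystem, severity)]

print(fire_emissions_tc(10_000, "taiga", "moderate"))   # 250,000 tC
print(fire_emissions_tc(10_000, "peatland", "high"))    # 750,000 tC
```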
NASA Technical Reports Server (NTRS)
Takayabu, Yukari N.; Shige, Shoichi; Tao, Wei-Kuo; Hirota, Nagio
2010-01-01
The global hydrological cycle is central to the Earth's climate system, with rainfall and the physics of its formation acting as the key links in the cycle. Two-thirds of global rainfall occurs in the Tropics. Associated with this rainfall is a vast amount of heat, known as latent heat, which arises mainly from the phase change of water vapor condensing into liquid droplets; three-fourths of the total heat energy available to the Earth's atmosphere comes from tropical rainfall. In addition, fresh water provided by tropical rainfall and its variability exerts a large impact upon the structure and motions of the upper ocean layer. Three-dimensional distributions of latent heating estimated from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM PR) utilizing the Spectral Latent Heating (SLH) algorithm are analyzed. Mass-weighted and vertically integrated latent heating averaged over the tropical oceans is estimated as approximately 72.6 W m^-2 (approximately 2.51 mm day^-1), and that over tropical land as approximately 73.7 W m^-2 (approximately 2.55 mm day^-1), for 30°N-30°S. It is shown that non-drizzle precipitation over tropical and subtropical oceans consists of two dominant modes of rainfall systems: deep systems and congestus. A rough estimate of the shallow-mode contribution to the total heating is about 46.7% for the tropical oceans on average, which is substantially larger than the 23.7% over tropical land. While cumulus congestus heating correlates linearly with SST, the deep mode is dynamically bounded by large-scale subsidence. It is notable that a substantial amount of rain, as much as 2.38 mm day^-1 on average, comes from congestus clouds under the large-scale subsiding circulation. It is also notable that even in regions with SST warmer than 28°C, large-scale subsidence effectively suppresses deep convection, leaving the heating to congestus clouds. Our results support the view that entrainment of mid-to-lower-tropospheric dry air, which accompanies the large-scale subsidence, is the major factor suppressing deep convection. Therefore, representation of realistic entrainment is very important for proper reproduction of precipitation distribution and the resultant large-scale circulation.
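A quick check of the heating figures quoted above: converting a rain rate to its latent-heat flux equivalent with L_v ≈ 2.5 x 10^6 J/kg reproduces the pairing of 2.51 mm/day with roughly 72.6 W m^-2:

```python
# Energy-flux equivalent of rainfall: latent heat released when a given depth
# of water per day condenses over 1 m^2.
LV = 2.5e6              # latent heat of condensation, J/kg
RHO_W = 1000.0          # density of water, kg/m^3
SECONDS_PER_DAY = 86400.0

def mm_per_day_to_w_per_m2(rain_mm_day):
    mass_flux = rain_mm_day * 1e-3 * RHO_W / SECONDS_PER_DAY   # kg m-2 s-1
    return mass_flux * LV                                       # W m-2

print(round(mm_per_day_to_w_per_m2(2.51), 1))   # ~72.6 W m-2 (tropical oceans)
print(round(mm_per_day_to_w_per_m2(2.55), 1))   # ~73.8 W m-2 (tropical land)
```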
[Review of estimation of oceanic primary productivity using remote sensing methods].
Xu, Hong Yun; Zhou, Wei Feng; Ji, Shi Jian
2016-09-01
Accurate estimation of oceanic primary productivity is of great significance in the assessment and management of fisheries resources, marine ecosystems, global change, and other fields. The traditional measurement and estimation of oceanic primary productivity has to rely on in situ sample data collected by vessels. Satellite remote sensing has the advantage of providing dynamic and eco-environmental parameters of the ocean surface at large scale in real time. Thus, satellite remote sensing has increasingly become an important means of estimating oceanic primary productivity at large spatio-temporal scales. Along with the development of ocean color sensors, models to estimate oceanic primary productivity by satellite remote sensing have been developed that can be summarized as chlorophyll-based, carbon-based, and phytoplankton-absorption-based approaches. The flexibility and complexity of the three kinds of models are presented in this paper. On this basis, the current research status of global estimation of oceanic primary productivity is analyzed and evaluated. In view of these, four research directions need to be strengthened in further study: 1) segmenting and studying global oceanic primary productivity estimation by region; 2) deepening research on the absorption coefficient of phytoplankton; 3) enhancing oceanic remote sensing technology; 4) improving the in situ measurement of primary productivity.
In large-scale studies, it is often neither feasible nor necessary to obtain the large samples of 400 particles advocated by many geomorphologists to adequately quantify streambed surface particle-size distributions. Synoptic surveys such as U.S. Environmental Protection Agency...
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and 2 cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCMs and CRMs. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRMs and SCMs. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models beyond the more traditional single best-estimate simulation.
NASA Technical Reports Server (NTRS)
Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna
2016-01-01
This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high-latitude forests (40°-55°N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), the normalized difference vegetation index (NDVI), and an observation-based estimate of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined from the SIF time series is shorter than the growing season determined solely from NDVI. This is mainly due to an extended period of high NDVI values, as compared to SIF, of about 46 days (+/-11 days), indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF respond differently to temperature changes throughout the growing season. We observed that both NDVI and SIF increased linearly with temperature throughout the spring. In the fall, however, although NDVI responded linearly to temperature increases, SIF and GPP did not, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing satellite measurements of both SIF and NDVI can help distinguish the information carried by seasonal variations in vegetation structure and greenness from that carried by physiology at large scales across the critical boreal regions.
On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Vu, Bruce; Kandula, Max
2003-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
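A hedged sketch of pilot-based sparse channel recovery; orthogonal matching pursuit (OMP) is used here as a simple stand-in for the Bayesian learning scheme the article proposes, with an assumed DFT pilot matrix and illustrative dimensions:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))   # best-matching column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
n_taps, n_pilots, k = 64, 16, 3                 # sparse channel, few pilots
h = np.zeros(n_taps, dtype=complex)
h[rng.choice(n_taps, k, replace=False)] = rng.normal(size=k) + 1j * rng.normal(size=k)

# Pilot measurements: rows of the unitary DFT matrix at the pilot subcarriers
F = np.fft.fft(np.eye(n_taps)) / np.sqrt(n_taps)
pilots = rng.choice(n_taps, n_pilots, replace=False)
A = F[pilots, :]
y = A @ h + 0.01 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))

h_hat = omp(A, y, sparsity=k)
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))   # small recovery error
```

The Bayesian formulation the article argues for would additionally return tap-wise uncertainty and avoid fixing the sparsity level in advance.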
The mean density and two-point correlation function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1988-01-01
The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.
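A minimal two-point correlation sketch using the natural (Peebles-Hauser) pair-count estimator; real survey analyses require the edge corrections, flux-limited weighting, and the careful density normalization discussed above, all omitted here:

```python
import numpy as np
from scipy.spatial.distance import pdist

def xi_peebles_hauser(data, randoms, bins):
    """Natural estimator xi = (DD/RR) * (Nr(Nr-1))/(Nd(Nd-1)) - 1."""
    dd, _ = np.histogram(pdist(data), bins=bins)       # data-data pair counts
    rr, _ = np.histogram(pdist(randoms), bins=bins)    # random-random pair counts
    ratio = (len(randoms) * (len(randoms) - 1)) / (len(data) * (len(data) - 1))
    return dd / np.maximum(rr, 1) * ratio - 1.0

rng = np.random.default_rng(4)
data = rng.random((500, 3)) * 100.0                    # placeholder positions (Mpc)
randoms = rng.random((2000, 3)) * 100.0
bins = np.linspace(1.0, 30.0, 15)
print(xi_peebles_hauser(data, randoms, bins))          # ~0 for an unclustered field
```

The sensitivity to the mean density noted in the abstract enters through the random catalog: any error in the assumed mean density biases RR and hence the amplitude of xi, most strongly at large separations.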
Yule, D.L.; Stockwell, J.D.; Black, J.A.; Cullis, K.I.; Cholwek, G.A.; Myers, J.T.
2008-01-01
Systematic underestimation of fish age can impede understanding of recruitment variability and adaptive strategies (like longevity) and can bias estimates of survivorship. We suspected that previous estimates of annual survival (S; range = 0.20-0.44) for Lake Superior ciscoes Coregonus artedi developed from scale ages were biased low. To test this hypothesis, we estimated the total instantaneous mortality rate of adult ciscoes from the Thunder Bay, Ontario, stock by use of cohort-based catch curves developed from commercial gill-net catches and otolith-aged fish. Mean S based on otolith ages was greater for adult females (0.80) than for adult males (0.75), but these differences were not significant. Applying the results of a study of agreement between scale and otolith ages, we modeled a scale age for each otolith-aged fish to reconstruct catch curves. Using modeled scale ages, estimates of S (0.42 for females, 0.36 for males) were comparable with those reported in past studies. We conducted a November 2005 acoustic and midwater trawl survey to estimate the abundance of ciscoes when the fish were being harvested for roe. Estimated exploitation rates were 0.085 for females and 0.025 for males, and the instantaneous rates of fishing mortality were 0.089 for females and 0.025 for males. The instantaneous rates of natural mortality were 0.131 and 0.265 for females and males, respectively. Using otolith ages, we found that strong year-classes at large during November 2005 were caught in high numbers as age-1 fish in previous annual bottom trawl surveys, whereas weak or absent year-classes were not. For decades, large-scale fisheries on the Great Lakes were allowed to operate because ciscoes were assumed to be short lived and to have regular recruitment. We postulate that the collapse of these fisheries was linked in part to a misunderstanding of cisco biology driven by scale-ageing error. © Copyright by the American Fisheries Society 2008.
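A minimal catch-curve calculation of the kind described above: regressing log catch against (otolith) age over fully recruited ages gives the total instantaneous mortality Z, and annual survival S = exp(-Z). The catch numbers below are hypothetical, chosen only to land near the survival values reported for otolith-aged fish:

```python
import numpy as np

ages = np.array([4, 5, 6, 7, 8, 9, 10])
catch = np.array([410, 330, 270, 200, 170, 130, 100])   # hypothetical gill-net catches

# slope of log(catch) vs age is -Z for a cohort with constant mortality
slope, intercept = np.polyfit(ages, np.log(catch), 1)
Z = -slope
S = np.exp(-Z)
print(round(Z, 3), round(S, 3))   # roughly Z ~ 0.23, S ~ 0.79
```

The bias mechanism in the abstract follows directly: if scale ageing compresses old ages, the catch curve appears to fall off faster, inflating Z and deflating S.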
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. Making a rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly provide the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of this factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data, and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scale.
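One published remote-sensing approach of the kind reviewed here maps NDVI to the cover-management (C) factor with an exponential form (van der Knijff et al.); a hedged sketch, with the commonly quoted empirical constants:

```python
import numpy as np

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """Approximate the USLE/RUSLE cover-management factor from NDVI.

    C = exp(-alpha * NDVI / (beta - NDVI)); alpha and beta are empirical
    constants, and other published NDVI-to-C relations exist.
    """
    ndvi = np.clip(ndvi, 0.0, 0.99)     # guard against division by ~0
    return np.exp(-alpha * ndvi / (beta - ndvi))

print(c_factor_from_ndvi(np.array([0.1, 0.4, 0.7])))   # denser cover -> smaller C
```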
The scaling of contact rates with population density for the infectious disease models.
Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip
2013-08-01
Contact rates and patterns among individuals in a geographic area drive transmission of directly transmitted pathogens, making it essential to understand and estimate contacts for simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population thresholds and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission, with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data on human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of organized social contact structure shifts the physical contacts toward a special case of the spatial contact model: the dynamics of kinetic gas molecule collision. In this case, an ideal gas model with van der Waals correction fits well; existing movement observation data and the contact rate between individuals are estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
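A toy version of the saturating behavior described above: linear (density-dependent) growth at low density and a constant (frequency-dependent) ceiling at high density. The Michaelis-Menten form and parameter values are illustrative, not the authors' fitted model:

```python
def contact_rate(density, c_max=20.0, k_half=500.0):
    """Saturating contact rate: ~c_max*density/k_half at low density,
    ~c_max at high density; k_half is the density at half-saturation."""
    return c_max * density / (k_half + density)

for rho in (10, 100, 1000, 10000):              # people per km^2
    print(rho, round(contact_rate(rho), 2))     # rises, then saturates near c_max
```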
Should fatty acid signature proportions sum to 1 for diet estimation?
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.
2016-01-01
Knowledge of predator diets, including how diets might change through time or differ among predators, provides essential insights into their ecology. Diet estimation therefore remains an active area of research within quantitative ecology. Quantitative fatty acid signature analysis (QFASA) is an increasingly common method of diet estimation. QFASA is based on a data library of prey signatures, which are vectors of proportions summarizing the fatty acid composition of lipids, and diet is estimated as the mixture of prey signatures that most closely approximates a predator’s signature. Diets are typically estimated using proportions from a subset of all fatty acids that are known to be solely or largely influenced by diet. Given the subset of fatty acids selected, the current practice is to scale their proportions to sum to 1.0. However, scaling signature proportions has the potential to distort the structural relationships within a prey library and between predators and prey. To investigate that possibility, we compared the practice of scaling proportions with two alternatives and found that the traditional scaling can meaningfully bias diet estimators under some conditions. Two aspects of the prey types that contributed to a predator’s diet influenced the magnitude of the bias: the degree to which the sums of unscaled proportions differed among prey types and the identifiability of prey types within the prey library. We caution investigators against the routine scaling of signature proportions in QFASA.
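A minimal QFASA-style estimate: choose the mixture of mean prey signatures closest to the predator signature, with diet proportions constrained to the simplex. The three-prey library, squared-error distance, and SLSQP solver are illustrative choices (QFASA implementations typically use a larger library and a different distance):

```python
import numpy as np
from scipy.optimize import minimize

prey = np.array([[0.5, 0.3, 0.2],
                 [0.2, 0.5, 0.3],
                 [0.3, 0.2, 0.5]])              # 3 prey types x 3 fatty acids
predator = 0.6 * prey[0] + 0.4 * prey[2]        # synthetic predator signature

def loss(pi):
    mix = pi @ prey                             # mixture of prey signatures
    return np.sum((mix - predator) ** 2)        # squared distance to predator

cons = ({"type": "eq", "fun": lambda pi: pi.sum() - 1.0},)
res = minimize(loss, x0=np.ones(3) / 3, bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 2))                       # ~[0.6, 0.0, 0.4]
```

The paper's question about scaling enters one step earlier: whether each row of `prey` (and the predator vector) should be renormalized over the chosen fatty-acid subset before this fit, which can distort the geometry the optimizer relies on.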
The imprint of surface fluxes and transport on variations in total column carbon dioxide
NASA Astrophysics Data System (ADS)
Keppel-Aleks, G.; Wennberg, P. O.; Washenfelder, R. A.; Wunch, D.; Schneider, T.; Toon, G. C.; Andres, R. J.; Blavier, J.-F.; Connor, B.; Davis, K. J.; Desai, A. R.; Messerschmidt, J.; Notholt, J.; Roehl, C. M.; Sherlock, V.; Stephens, B. B.; Vay, S. A.; Wofsy, S. C.
2011-07-01
New observations of the vertically integrated CO2 mixing ratio, ⟨CO2⟩, from ground-based remote sensing show that variations in ⟨CO2⟩ are primarily determined by large-scale flux patterns. They therefore provide fundamentally different information than observations made within the boundary layer, which reflect the combined influence of large-scale and local fluxes. Observations of both ⟨CO2⟩ and CO2 concentrations in the free troposphere show that large-scale spatial gradients induce synoptic-scale temporal variations in ⟨CO2⟩ in the Northern Hemisphere midlatitudes through horizontal advection. Rather than obscure the signature of surface fluxes on atmospheric CO2, these synoptic-scale variations provide useful information that can be used to reveal the meridional flux distribution. We estimate the meridional gradient in ⟨CO2⟩ from covariations in ⟨CO2⟩ and potential temperature, θ, a dynamical tracer, on synoptic timescales to evaluate surface flux estimates commonly used in carbon cycle models. We find that Carnegie Ames Stanford Approach (CASA) biospheric fluxes underestimate both the ⟨CO2⟩ seasonal cycle amplitude throughout the Northern Hemisphere midlatitudes and the meridional gradient during the growing season. Simulations using CASA net ecosystem exchange (NEE) with increased and phase-shifted boreal fluxes better reflect the observations. Our simulations suggest that boreal growing season NEE (between 45-65° N) is underestimated by ~40% in CASA. We describe the implications of this large seasonal exchange for inference of the net Northern Hemisphere terrestrial carbon sink.
The imprint of surface fluxes and transport on variations in total column carbon dioxide
NASA Astrophysics Data System (ADS)
Keppel-Aleks, G.; Wennberg, P. O.; Washenfelder, R. A.; Wunch, D.; Schneider, T.; Toon, G. C.; Andres, R. J.; Blavier, J.-F.; Connor, B.; Davis, K. J.; Desai, A. R.; Messerschmidt, J.; Notholt, J.; Roehl, C. M.; Sherlock, V.; Stephens, B. B.; Vay, S. A.; Wofsy, S. C.
2012-03-01
New observations of the vertically integrated CO2 mixing ratio, ⟨CO2⟩, from ground-based remote sensing show that variations in ⟨CO2⟩ are primarily determined by large-scale flux patterns. They therefore provide fundamentally different information than observations made within the boundary layer, which reflect the combined influence of large-scale and local fluxes. Observations of both ⟨CO2⟩ and CO2 concentrations in the free troposphere show that large-scale spatial gradients induce synoptic-scale temporal variations in ⟨CO2⟩ in the Northern Hemisphere midlatitudes through horizontal advection. Rather than obscure the signature of surface fluxes on atmospheric CO2, these synoptic-scale variations provide useful information that can be used to reveal the meridional flux distribution. We estimate the meridional gradient in ⟨CO2⟩ from covariations in ⟨CO2⟩ and potential temperature, θ, a dynamical tracer, on synoptic timescales to evaluate surface flux estimates commonly used in carbon cycle models. We find that simulations using Carnegie Ames Stanford Approach (CASA) biospheric fluxes underestimate both the ⟨CO2⟩ seasonal cycle amplitude throughout the Northern Hemisphere midlatitudes and the meridional gradient during the growing season. Simulations using CASA net ecosystem exchange (NEE) with increased and phase-shifted boreal fluxes better fit the observations. Our simulations suggest that climatological mean CASA fluxes underestimate boreal growing season NEE (between 45-65° N) by ~40%. We describe the implications of this large seasonal exchange for inference of the net Northern Hemisphere terrestrial carbon sink.
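A minimal sketch of the ⟨CO2⟩-θ covariation idea follows: regress synoptic ⟨CO2⟩ anomalies on potential temperature anomalies and convert the slope to a meridional gradient using a climatological meridional θ gradient. The data are synthetic and the θ gradient value is assumed; the paper's actual processing is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily anomalies at one station (detrended, deseasonalized).
theta_anom = rng.normal(0.0, 3.0, 200)                    # K
co2_anom = -0.4 * theta_anom + rng.normal(0.0, 0.3, 200)  # ppm

# Synoptic covariation: slope d<CO2>/dtheta by least squares.
slope = np.polyfit(theta_anom, co2_anom, 1)[0]            # ppm per K

# Convert to a meridional gradient with an assumed climatological
# meridional potential-temperature gradient.
dtheta_dlat = 0.75                                        # K per deg latitude
print("d<CO2>/dlat ~", slope * dtheta_dlat, "ppm per deg latitude")
```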
Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method
ERIC Educational Resources Information Center
Rutkowski, Leslie; Zhou, Yan
2015-01-01
Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…
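MC-SIMEX extends the simulation-extrapolation (SIMEX) idea to misclassified covariates; the sketch below shows the underlying continuous-error SIMEX recipe on a simple linear regression, since that is the easiest place to see the mechanics: re-add measurement error at increasing multiples, track the attenuated slope, and extrapolate back to zero error. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 2000, 1.0, 0.8
x = rng.normal(0.0, 1.0, n)                  # latent covariate
y = beta * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sigma_u, n)          # covariate observed with error

# SIMEX: inflate the measurement error by factors (1 + lam), record the
# attenuated slope, then extrapolate the trend back to lam = -1 (no error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([np.polyfit(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n),
                              y, 1)[0]
                   for _ in range(50)])      # Monte Carlo replicates per lam
          for lam in lambdas]

quad = np.polyfit(lambdas, slopes, 2)        # quadratic extrapolant
print("naive:", round(slopes[0], 3), "SIMEX:", round(np.polyval(quad, -1.0), 3))
```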
Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development
ERIC Educational Resources Information Center
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi
2017-01-01
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025
ERIC Educational Resources Information Center
Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.
2012-01-01
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…
ERIC Educational Resources Information Center
Cheek, Kim A.
2017-01-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude…
USDA-ARS?s Scientific Manuscript database
The combined use of water erosion models and geographic information systems (GIS) has facilitated soil loss estimation at the watershed scale. Tools such as the Geo-spatial interface for the Water Erosion Prediction Project (GeoWEPP) model provide a convenient spatially distributed soil loss estimat...
A general predictive model for estimating monthly ecosystem evapotranspiration
Ge Sun; Karrin Alstad; Jiquan Chen; Shiping Chen; Chelcy R. Ford; al. et.
2011-01-01
Accurately quantifying evapotranspiration (ET) is essential for modelling regional-scale ecosystem water balances. This study assembled an ET data set estimated from eddy flux and sapflow measurements for 13 ecosystems across a large climatic and management gradient from the United States, China, and Australia. Our objectives were to determine the relationships among...
Sample-Starved Large Scale Network Analysis
2016-05-05
As reported in our journal publication: G. Marjanovic and A. O. Hero, "l0 Sparse Inverse Covariance Estimation," IEEE Trans. on Signal Processing, vol. 63, no. 12, pp. 3218-3231, May 2015.
Accurate population genetic measurements require cryptic species identification in corals
NASA Astrophysics Data System (ADS)
Sheets, Elizabeth A.; Warner, Patricia A.; Palumbi, Stephen R.
2018-06-01
Correct identification of closely related species is important for reliable measures of gene flow. Incorrectly lumping individuals of different species together has been shown to over- or underestimate population differentiation, but examples highlighting when these different results are observed in empirical datasets are rare. Using 199 single nucleotide polymorphisms, we assigned 768 individuals in the Acropora hyacinthus and A. cytherea morphospecies complexes to each of eight previously identified cryptic genetic species and measured intraspecific genetic differentiation across three geographic scales (within reefs, among reefs within an archipelago, and among Pacific archipelagos). We then compared these calculations to estimated genetic differentiation at each scale with all cryptic genetic species mixed as if we could not tell them apart. At the reef scale, correct genetic species identification yielded lower F_ST estimates and fewer significant comparisons than when species were mixed, raising estimates of short-scale gene flow. In contrast, correct genetic species identification at large spatial scales yielded higher F_ST measurements than mixed-species comparisons, lowering estimates of long-term gene flow among archipelagos. A meta-analysis of published population genetic studies in corals found similar results: F_ST estimates at small spatial scales were lower and significance was found less often in studies that controlled for cryptic species. Our results and these prior datasets controlling for cryptic species suggest that genetic differentiation among local reefs may be lower than what has generally been reported in the literature. Not properly controlling for cryptic species structure can bias population genetic analyses in different directions across spatial scales, and this has important implications for conservation strategies that rely on these estimates.
Singh, Nadia D.; Aquadro, Charles F.; Clark, Andrew G.
2009-01-01
Accurate assessment of local recombination rate variation is crucial for understanding the recombination process and for determining the impact of natural selection on linked sites. In Drosophila, local recombination intensity has been estimated primarily by statistical approaches, estimating the local slope of the relationship between the physical and genetic maps. However, these estimates are limited in resolution, and as a result, the physical scale at which recombination intensity varies in Drosophila is largely unknown. While there is some evidence suggesting as much as a 40-fold variation in crossover rate at a local scale in D. pseudoobscura, little is known about the fine-scale structure of recombination rate variation in D. melanogaster. Here, we experimentally examine the fine-scale distribution of crossover events in a 1.2 Mb region on the D. melanogaster X chromosome using a classic genetic mapping approach. Our results show that crossover frequency is significantly heterogeneous within this region, varying ~3.5-fold. Simulations suggest that this degree of heterogeneity is sufficient to affect levels of standing nucleotide diversity, although the magnitude of this effect is small. We recover no statistical association between empirical estimates of nucleotide diversity and recombination intensity, which is likely due to the limited number of loci sampled in our population genetic dataset. However, codon bias is significantly negatively correlated with fine-scale recombination intensity estimates, as expected. Our results shed light on the relevant physical scale to consider in evolutionary analyses relating to recombination rate, and highlight the motivations to increase the resolution of the recombination map in Drosophila. PMID:19504037
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
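A minimal sketch of sampled betweenness estimation, using NetworkX's built-in k-source approximation (which draws the k seed vertices uniformly at random; the report's contribution is that smarter seed-selection strategies can reduce error at the same k):

```python
import networkx as nx

# Scale-free test graph; exact betweenness uses Brandes, O(|V||E|).
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)
exact = nx.betweenness_centrality(G)

# Sampled estimate: shortest-path trees from k seed vertices, O(k|E|).
approx = nx.betweenness_centrality(G, k=100, seed=1)

hub = max(exact, key=exact.get)
print(exact[hub], approx[hub])
```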
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use (a) dimensional analysis and (b) a pulse-based stochastic model for simulation of synthetic aquifer structures to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards establishing design criteria based on large-scale geologic maps.
Characterizing unknown systematics in large scale structure surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Nishant; Ho, Shirley; Myers, Adam D.
Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
Structural dynamics of tropical moist forest gaps
Maria O. Hunter; Michael Keller; Douglas Morton; Bruce Cook; Michael Lefsky; Mark Ducey; Scott Saleska; Raimundo Cosme de Oliveira; Juliana Schietti
2015-01-01
Gap phase dynamics are the dominant mode of forest turnover in tropical forests. However, gap processes are infrequently studied at the landscape scale. Airborne lidar data offer detailed information on three-dimensional forest structure, providing a means to characterize fine-scale (1 m) processes in tropical forests over large areas. Lidar-based estimates of forest...
NASA Technical Reports Server (NTRS)
Selkirk, Henry B.; Molod, Andrea M.
2014-01-01
Large-scale models such as GEOS-5 typically calculate grid-scale fractional cloudiness through a PDF parameterization of the sub-gridscale distribution of specific humidity. The GEOS-5 moisture routine uses a simple rectangular PDF varying in height that follows a tanh profile. While below 10 km this profile is informed by moisture information from the AIRS instrument, there is relatively little empirical basis for the profile above that level. ATTREX provides an opportunity to refine the profile using estimates of the horizontal variability of measurements of water vapor, total water and ice particles from the Global Hawk aircraft at or near the tropopause. These measurements will be compared with estimates of large-scale cloud fraction from CALIPSO and lidar retrievals from the CPL on the aircraft. We will use the variability measurements to perform studies of the sensitivity of the GEOS-5 cloud-fraction to various modifications to the PDF shape and to its vertical profile.
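A rectangular (uniform) sub-grid PDF makes the cloud-fraction calculation a one-liner: the grid-scale cloud fraction is the fraction of the distribution above saturation. The sketch below pairs that with an illustrative tanh-shaped profile for the PDF width; the shape and numbers are assumptions, not GEOS-5's actual coefficients.

```python
import numpy as np

def cloud_fraction_rect(q_mean, q_sat, half_width):
    """Cloud fraction for a rectangular sub-grid PDF of total water q,
    uniform on [q_mean - w, q_mean + w]: the fraction above saturation."""
    lo, hi = q_mean - half_width, q_mean + half_width
    if q_sat <= lo:
        return 1.0
    if q_sat >= hi:
        return 0.0
    return (hi - q_sat) / (hi - lo)

# Illustrative tanh-shaped profile for the PDF half-width versus height.
for z in np.linspace(0.0, 18.0, 7):                 # km
    w = 0.2 - 0.15 * np.tanh((z - 8.0) / 4.0)       # relative half-width
    print(round(z, 1), round(cloud_fraction_rect(1.0, 1.05, w), 3))
```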
Observation-based Estimate of Climate Sensitivity with a Scaling Climate Response Function
NASA Astrophysics Data System (ADS)
Hébert, Raphael; Lovejoy, Shaun
2016-04-01
To properly address the anthropogenic impacts upon the Earth system, an estimate of the climate sensitivity to radiative forcing is essential. Observation-based estimates of climate sensitivity are often limited in their ability to take into account the slower response of the climate system, imparted mainly by the large thermal inertia of the oceans; they are nevertheless essential as an alternative to estimates from global circulation models, and the multiplicity of approaches increases our confidence in estimates of climate sensitivity. It is straightforward to calculate the Effective Climate Sensitivity (EffCS) as the ratio of temperature change to the change in radiative forcing; the result is almost identical to the Transient Climate Response (TCR), but it underestimates the Equilibrium Climate Sensitivity (ECS). A study of global mean temperature is thus presented assuming a Scaling Climate Response Function to deterministic radiative forcing. This general form is justified because a scaling symmetry is respected by the dynamics and boundary conditions over a wide range of scales, and it allows for long-range dependencies while retaining only three parameters, which are estimated empirically. The range of memory is modulated by the scaling exponent H. We can calculate, analytically, a one-to-one relation between the scaling exponent H and the ratios of EffCS to TCR and of EffCS to ECS. The scaling exponent of the power law is estimated by a regression of temperature as a function of forcing. We consider for the analysis four different datasets of historical global mean temperature and 100 scenario runs of the Coupled Model Intercomparison Project Phase 5 distributed among the four Representative Concentration Pathway (RCP) scenarios. We find that the error function for the estimate on historical temperature is very wide, and thus many scaling exponents can be used without meaningful changes in the fit residuals of historical temperatures; the corresponding responses in the year 2100, on the other hand, are very broad, especially for a low-emission scenario such as RCP 2.6. CMIP5 scenario runs thus allow for a narrower estimate of H, which can then be used to estimate the ECS and TCR from the EffCS estimated from the historical data.
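The core of a scaling response function can be sketched as a convolution of the forcing history with a power-law kernel G(t) ∝ t^(H-1), whose exponent H sets the range of memory. The parameter values below are illustrative, not the paper's fits.

```python
import numpy as np

def temperature_response(forcing, H=0.4, sensitivity=0.5, dt=1.0):
    """Convolve a forcing history (W m^-2) with a power-law kernel
    G(t) ~ t^(H - 1): a minimal sketch of a scaling climate response
    function. H and the gain are illustrative, not the paper's fits."""
    t = np.arange(1, forcing.size + 1) * dt
    kernel = t ** (H - 1.0)
    kernel /= kernel.sum()              # normalize; 'sensitivity' sets gain
    return sensitivity * np.convolve(forcing, kernel)[: forcing.size]

forcing = np.linspace(0.0, 2.5, 150)    # slow CO2-like ramp over 150 yr
temp = temperature_response(forcing)
print(temp[-1], "K at end of ramp; response lags forcing (long memory)")
```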
Large-angle correlations in the cosmic microwave background
NASA Astrophysics Data System (ADS)
Efstathiou, George; Ma, Yin-Zhe; Hanson, Duncan
2010-10-01
It has been argued recently by Copi et al. (2009) that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, inflationary Lambda cold dark matter (ΛCDM) cosmology. We compare various estimators of the temperature correlation function, showing how they depend on assumptions of statistical isotropy and how they perform on the Wilkinson Microwave Anisotropy Probe (WMAP) 5-yr Internal Linear Combination (ILC) maps with and without a sky cut. We show that the low multipole harmonics that determine the large-scale features of the temperature correlation function can be reconstructed accurately from the data that lie outside the sky cuts. The reconstructions are only weakly dependent on the assumed statistical properties of the temperature field. The temperature correlation functions computed from these reconstructions are in good agreement with those computed from the ILC map over the whole sky. We conclude that the large-scale angular correlation function for our realization of the sky is well determined. A Bayesian analysis of the large-scale correlations is presented, which shows that the data cannot exclude the standard ΛCDM model. We discuss the differences between our results and those of Copi et al. Either there exists a violation of statistical isotropy as claimed by Copi et al., or these authors have overestimated the significance of the discrepancy because of a posteriori choices of estimator, statistic and sky cut.
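For reference, the angular correlation function discussed here follows from the power spectrum via a Legendre sum, C(θ) = Σ_l (2l+1)/(4π) C_l P_l(cos θ), usually starting at l = 2. A minimal sketch with a toy spectrum:

```python
import numpy as np
from scipy.special import eval_legendre

def angular_correlation(cl, theta_deg, lmin=2):
    """C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta), l >= lmin;
    monopole and dipole are conventionally excluded."""
    x = np.cos(np.radians(theta_deg))
    return sum((2 * l + 1) / (4 * np.pi) * cl[l] * eval_legendre(l, x)
               for l in range(lmin, len(cl)))

# Toy Sachs-Wolfe-like spectrum, C_l ~ 1/(l(l+1)), for illustration only.
lmax = 50
cl = np.zeros(lmax + 1)
ls = np.arange(2, lmax + 1)
cl[2:] = 1.0 / (ls * (ls + 1.0))
print(angular_correlation(cl, 90.0))
```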
Blaas, Harry; Kroeze, Carolien
2014-10-15
Biodiesel is increasingly considered as an alternative for fossil diesel. Biodiesel can be produced from rapeseed, palm, sunflower, soybean and algae. In this study, the consequences of large-scale production of biodiesel from micro-algae for eutrophication in four large European seas are analysed. To this end, scenarios for the year 2050 are analysed, assuming that in the 27 countries of the European Union fossil diesel will be replaced by biodiesel from algae. Estimates are made for the required fertiliser inputs to algae parks, and how this may increase concentrations of nitrogen and phosphorus in coastal waters, potentially leading to eutrophication. The Global NEWS (Nutrient Export from WaterSheds) model has been used to estimate the transport of nitrogen and phosphorus to the European coastal waters. The results indicate that the amount of nitrogen and phosphorus in the coastal waters may increase considerably in the future as a result of large-scale production of algae for the production of biodiesel, even in scenarios assuming effective waste water treatment and recycling of waste water in algae production. To ensure sustainable production of biodiesel from micro-algae, it is important to develop cultivation systems with low nutrient losses to the environment. Copyright © 2014 Elsevier B.V. All rights reserved.
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.
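The per-scale feature extraction step can be sketched with a continuous wavelet transform: decompose the SEMG into the candidate scales and compute an amplitude feature per scale, which is then related to measured force. The signal below is synthetic and the Mexican-hat wavelet is an assumption; the paper's selection logic (sensitivity analysis plus SCA) sits on top of features like these.

```python
import numpy as np
import pywt

# Synthetic stand-in for a force-varying SEMG recording.
fs = 1000
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
semg = rng.normal(0.0, 1.0, t.size) * (0.5 + 0.5 * np.sin(np.pi * t))

scales = np.arange(1, 11)                    # the ten candidate scales
coeffs, _ = pywt.cwt(semg, scales, 'mexh')   # CWT; one row per scale

# A simple per-scale amplitude feature; relating such features to the
# measured grip force is the selection problem the paper addresses.
rms_per_scale = np.sqrt(np.mean(coeffs ** 2, axis=1))
print(rms_per_scale.round(3))
```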
Basin-Scale Hydrologic Impacts of CO2 Storage: Regulatory and Capacity Implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birkholzer, J.T.; Zhou, Q.
Industrial-scale injection of CO2 into saline sedimentary basins will cause large-scale fluid pressurization and migration of native brines, which may affect valuable groundwater resources overlying the deep sequestration reservoirs. In this paper, we discuss how such basin-scale hydrologic impacts can (1) affect regulation of CO2 storage projects and (2) reduce current storage capacity estimates. Our assessment arises from a hypothetical future carbon sequestration scenario in the Illinois Basin, which involves twenty individual CO2 storage projects in a core injection area suitable for long-term storage. Each project is assumed to inject five million tonnes of CO2 per year for 50 years. A regional-scale three-dimensional simulation model was developed for the Illinois Basin that captures both the local-scale CO2-brine flow processes and the large-scale groundwater flow patterns in response to CO2 storage. The far-field pressure buildup predicted for this selected sequestration scenario suggests that (1) the area that needs to be characterized in a permitting process may comprise a very large region within the basin if reservoir pressurization is considered, and (2) permits cannot be granted on a single-site basis alone because the near- and far-field hydrologic response may be affected by interference between individual sites. Our results also support recent studies in that environmental concerns related to near-field and far-field pressure buildup may be a limiting factor on CO2 storage capacity. In other words, estimates of storage capacity, if solely based on the effective pore volume available for safe trapping of CO2, may have to be revised based on assessments of pressure perturbations and their potential impact on caprock integrity and groundwater resources, respectively. We finally discuss some of the challenges in making reliable predictions of large-scale hydrologic impacts related to CO2 sequestration projects.
NASA Astrophysics Data System (ADS)
Hanna, Steven R.; Young, George S.
2017-01-01
What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);
Incorporating linguistic knowledge for learning distributed word representations.
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, do not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experiment results demonstrate that our estimated word representation achieves better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.
Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J
2012-11-07
Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling: a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.
Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.
2011-01-01
Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically those of island construction, within a largely open water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS) and counts of benthic mayfly larvae. Constrained models forced trends in means, or in both means and sampling variances, to become, over time, increasingly similar to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12- (mayflies) or 14-year sampling periods and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies, while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move from the randomly set positions toward the original positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale size.
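The spring-model idea can be sketched in a few lines: each range measurement defines a virtual spring whose force moves a blind node until the measured and geometric distances agree. This is a minimal single-node illustration, without the three patches (local-minimum avoidance, bad-node rejection, node variation) described in the abstract.

```python
import numpy as np

def spring_step(pos, anchors, nbrs, dists, k=0.1):
    """One relaxation step: each virtual spring pulls/pushes a blind node
    until geometric and measured neighbor distances agree (Hooke's law)."""
    new = dict(pos)
    for i, neighbors in nbrs.items():
        if i in anchors:
            continue                            # anchors stay fixed
        force = np.zeros(2)
        for j, d_meas in zip(neighbors, dists[i]):
            vec = pos[j] - pos[i]
            d_geo = np.linalg.norm(vec) + 1e-9
            force += k * (d_geo - d_meas) * vec / d_geo
        new[i] = pos[i] + force
    return new

# Three anchor nodes, one blind node (id 3) with noiseless ranges.
pos = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 0.0]),
       2: np.array([0.0, 3.0]), 3: np.array([3.5, 0.5])}   # 3: random start
true3 = np.array([2.0, 2.0])
nbrs = {3: [0, 1, 2]}
dists = {3: [float(np.linalg.norm(true3 - pos[i])) for i in (0, 1, 2)]}

for _ in range(500):
    pos = spring_step(pos, anchors={0, 1, 2}, nbrs=nbrs, dists=dists)
print(pos[3])   # converges near the true position (2, 2)
```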
Liang, Yantao; Zhang, Yongyu; Wang, Nannan; Luo, Tingwei; Zhang, Yao; Rivkin, Richard B.
2017-01-01
Picophytoplankton are acknowledged to contribute significantly to primary production (PP) in the ocean, but a method to measure the PP of picophytoplankton (PPPico) at large scales is not yet well established. Although the traditional 14C method and new technologies based on the use of stable isotopes (e.g., 13C) can be employed to accurately measure in situ PPPico, their time-consuming and labor-intensive nature constrains their application in surveys at large spatiotemporal scales. To overcome this limitation, a modified carbon-based ocean productivity model (CbPM) is proposed for estimating PPPico, whose principle is based on the group-specific abundance, cellular carbon conversion factor (CCF), and temperature-derived growth rate of picophytoplankton. Comparative analysis showed that the PPPico estimated using the CbPM method is significantly and positively related (r2 = 0.53, P < 0.001, n = 171) to the measured 14C uptake. This significant relationship suggests that CbPM has the potential to estimate PPPico over large spatial and temporal scales. Currently the model's application may be limited by the use of an invariant cellular CCF and the relatively small data sets available to validate the model, which may introduce some uncertainties and biases. Model performance will be improved by the use of variable conversion factors and larger data sets representing diverse growth conditions. Finally, we apply the CbPM-based model to the data collected during four cruises in the Bohai Sea in 2005. Model-estimated PPPico ranged from 0.1 to 11.9, 29.9 to 432.8, 5.5 to 214.9, and 2.4 to 65.8 mg C m-2 d-1 during March, June, September, and December, respectively. This study sheds light on the estimation of global PPPico using a carbon-based production model. PMID:29051755
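The CbPM principle stated above (abundance × cellular carbon × temperature-derived growth rate) reduces to a short calculation. The Eppley-style growth-rate curve, its scaling, and the sample values below are placeholders, not the paper's calibrated formulation.

```python
import numpy as np

def growth_rate_from_temp(t_celsius):
    """Placeholder temperature-derived growth rate: an Eppley-style
    exponential scaled down to a plausible realized rate (assumption;
    the paper's calibrated formulation is not reproduced here)."""
    return 0.3 * 0.59 * np.exp(0.0633 * t_celsius)       # d^-1

def pp_pico(cells_per_l, fg_c_per_cell, t_celsius):
    """Group-specific production: abundance x cellular carbon x growth
    rate, in fg C L^-1 d^-1."""
    return cells_per_l * fg_c_per_cell * growth_rate_from_temp(t_celsius)

# Hypothetical Synechococcus sample: 5e7 cells/L, 250 fg C/cell, 20 degC.
pp = pp_pico(5e7, 250.0, 20.0)
print(pp * 1e-9, "mg C m^-3 d^-1")   # fg/L/d -> mg/m^3/d
```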
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.
2016-07-01
Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires much smaller forward simulation runs for each iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
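The moment compression step is simple to sketch: a full breakthrough curve collapses to its temporal moments, and the ratio m1/m0 gives a mean arrival time (the abstract notes that, under the experimental setting, the zeroth moment itself is equivalent to the mean travel time). The curve below is synthetic.

```python
import numpy as np

def temporal_moments(t, c):
    """Compress a breakthrough curve c(t) to its zeroth and first
    temporal moments; m1/m0 is the mean arrival (travel) time."""
    dt = t[1] - t[0]
    m0 = np.sum(c) * dt
    m1 = np.sum(t * c) * dt
    return m0, m1 / m0

t = np.linspace(0.0, 10.0, 600)            # hours, synthetic
c = np.exp(-((t - 3.0) ** 2) / 0.8)        # Gaussian-like breakthrough
m0, tbar = temporal_moments(t, c)
print(m0, tbar)                            # mean travel time ~3.0
```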
Variability of the Bering Sea Circulation in the Period 1992-2010
2012-06-09
massive sources of data (satellite altimetry, Argo drifters) may improve the accuracy of these estimates in the near future. Large-scale... Combining these data with in situ observations of temperature, salinity and subsurface currents allowed obtaining increasingly accurate estimates ... al. (2006) estimated the Kamchatka Current transport of 24 Sv (1 Sv = 10^6 m³/s), a value significantly higher than previous estimates of
Jennifer C. Jenkins; Richard A. Birdsey
2000-01-01
As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...
Chen, T M; Chen, Q P; Liu, R C; Szot, A; Chen, S L; Zhao, J; Zhou, S S
2017-02-01
Hundreds of small-scale influenza outbreaks in schools are reported in mainland China every year, leading to a heavy disease burden which seriously impacts the operation of affected schools. Knowing the transmissibility of each outbreak in its early stage has become a major concern for public health policy-makers and primary healthcare providers. In this study, we collected all the small-scale outbreaks in Changsha (a large city in south central China with ~7·04 million population) from January 2005 to December 2013. Four simple and popularly used models were employed to calculate the reproduction number (R) of these outbreaks. Given a generation interval of duration Tc = 2·7 days and standard deviation (s.d.) σ = 1·1 days, the mean R estimated by an epidemic model, a normal distribution and a delta distribution was 2·51 (s.d. = 0·73), 4·11 (s.d. = 2·20) and 5·88 (s.d. = 5·00), respectively. When Tc = 2·9 and σ = 1·4, the mean R estimated by the three models was 2·62 (s.d. = 0·78), 4·72 (s.d. = 2·82) and 6·86 (s.d. = 6·34), respectively. The mean R estimated by a gamma distribution was 4·32 (s.d. = 2·47). We found that the values of R in small-scale outbreaks in schools were higher than in large-scale outbreaks in a neighbourhood, city or province. The normal distribution, delta distribution, and gamma distribution models seem more likely to overestimate the R of influenza outbreaks than the epidemic model.
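The distribution-specific estimates above follow from the standard growth-rate relation R = 1/M(-r), where M is the moment generating function of the generation interval (Wallinga and Lipsitch, 2007). A sketch of the three closed forms, with an assumed exponential growth rate r:

```python
import numpy as np

# R = 1 / M(-r), with M the moment generating function of the
# generation interval (Wallinga & Lipsitch 2007), gives closed forms:

def R_delta(r, Tc):
    return np.exp(r * Tc)                               # fixed interval

def R_normal(r, Tc, sigma):
    return np.exp(r * Tc - 0.5 * (r * sigma) ** 2)      # normal interval

def R_gamma(r, Tc, sigma):
    return (1.0 + r * sigma ** 2 / Tc) ** (Tc ** 2 / sigma ** 2)

r = 0.3   # per day: assumed growth rate from an outbreak's early phase
print(R_delta(r, 2.7), R_normal(r, 2.7, 1.1), R_gamma(r, 2.7, 1.1))
```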
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
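For a first-cut check of the installed-power variable, momentum theory gives the ideal hover induced power from weight and disk area. This textbook relation is a rough stand-in for the report's empirical correlations; the weight, radius, and figure of merit below are assumed.

```python
import numpy as np

def hover_power_ideal(weight_n, rotor_radius_m, rho=1.225):
    """Ideal hover induced power from momentum theory:
    P = T^(3/2) / sqrt(2 rho A), with thrust T balancing weight.
    A textbook first estimate, not the report's empirical correlations."""
    area = np.pi * rotor_radius_m ** 2
    return weight_n ** 1.5 / np.sqrt(2.0 * rho * area)

# Assumed example: ~4-tonne helicopter, 7 m rotor radius.
p_ideal = hover_power_ideal(40000.0, 7.0)
print(round(p_ideal / 0.7 / 1000.0), "kW rotor power (assumed figure of merit 0.7)")
```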
Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean
NASA Astrophysics Data System (ADS)
Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.
2018-02-01
The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. For areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and the remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of these models' outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
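The Simple (Abtew) Method referenced above needs only solar radiation: ET = K · Rs / λ. A sketch with the commonly cited coefficient K = 0.53 follows; treat K as site-calibrated rather than universal.

```python
def abtew_simple_et(rs_mj_m2_d, k=0.53, latent_heat=2.45):
    """Simple (Abtew) Method: ET = K * Rs / lambda, with Rs in
    MJ m^-2 d^-1 and lambda = 2.45 MJ kg^-1, giving ET in mm d^-1.
    K = 0.53 is a commonly cited value; treat it as site-calibrated."""
    return k * rs_mj_m2_d / latent_heat

print(abtew_simple_et(22.0))   # ~4.8 mm/day for a sunny day
```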
Modelling Landscape-Level Numerical Responses of Predators to Prey: The Case of Cats and Rabbits
Cruz, Jennyffer; Glen, Alistair S.; Pech, Roger P.
2013-01-01
Predator-prey systems can extend over large geographical areas but empirical modelling of predator-prey dynamics has been largely limited to localised scales. This is due partly to difficulties in estimating predator and prey abundances over large areas. Collection of data at suitably large scales has been a major problem in previous studies of European rabbits (Oryctolagus cuniculus) and their predators. This applies in Western Europe, where conserving rabbits and predators such as Iberian lynx (Lynx pardinus) is important, and in other parts of the world where rabbits are an invasive species supporting populations of introduced, and sometimes native, predators. In pastoral regions of New Zealand, rabbits are the primary prey of feral cats (Felis catus) that threaten native fauna. We estimate the seasonal numerical response of cats to fluctuations in rabbit numbers in grassland–shrubland habitat across the Otago and Mackenzie regions of the South Island of New Zealand. We use spotlight counts over 1645 km of transects to estimate rabbit and cat abundances with a novel modelling approach that accounts simultaneously for environmental stochasticity, density dependence and varying detection probability. Our model suggests that cat abundance is related consistently to rabbit abundance in spring and summer, possibly through increased rabbit numbers improving the fecundity and juvenile survival of cats. Maintaining rabbits at low abundance should therefore suppress cat numbers, relieving predation pressure on native prey. Our approach provided estimates of the abundance of cats and rabbits over a large geographical area. This was made possible by repeated sampling within each season, which allows estimation of detection probabilities. A similar approach could be applied to predator-prey systems elsewhere, and could be adapted to any method of direct observation in which there is no double-counting of individuals. Reliable estimates of numerical responses are essential for managing both invasive and threatened predators and prey. PMID:24039978
Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales
Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.
2014-01-01
Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949
Scaling of surface energy fluxes using remotely sensed data
NASA Astrophysics Data System (ADS)
French, Andrew Nichols
Accurate estimates of evapotranspiration (ET) across multiple terrains would greatly ease challenges faced by hydrologists, climate modelers, and agronomists as they attempt to apply theoretical models to real-world situations. One ET estimation approach uses an energy balance model to interpret a combination of meteorological observations taken at the surface and data captured by remote sensors. However, results of this approach have not been accurate because of poor understanding of the relationship between surface energy flux and land cover heterogeneity, combined with limits in available resolution of remote sensors. The purpose of this study was to determine how land cover and image resolution affect ET estimates. Using remotely sensed data collected over El Reno, Oklahoma, during four days in June and July 1997, scale effects on the estimation of spatially distributed ET were investigated. Instantaneous estimates of latent and sensible heat flux were calculated using a two-source surface energy balance model driven by thermal infrared, visible-near infrared, and meteorological data. The heat flux estimates were verified by comparison to independent eddy-covariance observations. Outcomes of observations taken at coarser resolutions were simulated by aggregating remote sensor data and estimated surface energy balance components from the finest sensor resolution (12 meter) to hypothetical resolutions as coarse as one kilometer. Estimated surface energy flux components were found to be significantly dependent on observation scale. For example, average evaporative fraction varied from 0.79, using 12-m resolution data, to 0.93, using 1-km resolution data. Resolution effects upon flux estimates were related to a measure of landscape heterogeneity known as operational scale, reflecting the size of dominant landscape features. Energy flux estimates based on data at resolutions less than 100 m and much greater than 400 m showed a scale-dependent bias. But estimates derived from data taken at about 400-m resolution (the operational scale at El Reno) were susceptible to large error due to mixing of surface types. The El Reno experiments show that accurate instantaneous estimates of ET require precise image alignment and image resolutions finer than landscape operational scale. These findings are valuable for the design of sensors and experiments to quantify spatially-varying hydrologic processes.
Fire extinguishing tests -80 with methyl alcohol gasoline
NASA Astrophysics Data System (ADS)
Holmstedt, G.; Ryderman, A.; Carlsson, B.; Lennmalm, B.
1980-10-01
Large scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film forming foams (AFFF), two alcohol-resistant fluoroprotein foams, and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests to develop a reliable laboratory method was especially examined. The tests were performed with semidirect foam application in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF give the best results. Performance with small pools can hardly be correlated with results from large scale fires.
Is There Any Real Observational Contradiction To The LCDM Model?
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe
2011-01-01
In this talk, I question two apparent observational contradictions to LCDM cosmology: the lack of large-angle correlations in the cosmic microwave background, and the very large bulk flow of galaxy peculiar velocities. On the super-horizon scale, Copi et al. (2009) have argued that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, LCDM cosmology. I argue that the claimed discrepancy is due to a sub-optimal estimator of the low-l multipoles and to a posteriori statistics, which exaggerate the statistical significance. On Galactic scales, Watkins et al. (2008) show that the very large bulk flow prefers a very large density fluctuation, which seems to contradict the LCDM model. I show that these results are due to an underestimation of the small-scale velocity dispersion and an arbitrary way of combining catalogues. With an appropriate way of combining catalogue data, and with the small-scale velocity dispersion treated as a free parameter, the peculiar velocity field provides unconvincing evidence against LCDM cosmology.
Enhanced peculiar velocities in brane-induced gravity
NASA Astrophysics Data System (ADS)
Wyman, Mark; Khoury, Justin
2010-08-01
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10⁻¹¹ and 3.6×10⁻⁹ in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10⁴ times more probable than in ΛCDM cosmology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A
In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near real time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease susceptible regions.
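As a sketch of the kind of parameter estimation such a strategy implies (the abstract does not specify the model), one can fit a compartmental disease-spread model to a near-real-time case series by least squares; the SIR form, case counts, and noise level below are all assumptions for illustration.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def sir(y, t, beta, gamma):
    """Standard SIR dynamics; beta = transmission rate, gamma = recovery rate."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def predicted_infectious(params, t, y0):
    beta, gamma = params
    return odeint(sir, y0, t, args=(beta, gamma))[:, 1]  # infectious compartment

# Hypothetical near-real-time daily counts (e.g. distilled from ER-visit feeds)
t = np.arange(30.0)
y0 = [9990.0, 10.0, 0.0]
observed = predicted_infectious([0.4, 0.1], t, y0) \
    * np.random.default_rng(1).normal(1.0, 0.05, t.size)

fit = least_squares(lambda p: predicted_infectious(p, t, y0) - observed,
                    x0=[0.3, 0.2], bounds=([0, 0], [2, 1]))
beta_hat, gamma_hat = fit.x
print(f"beta={beta_hat:.3f}, gamma={gamma_hat:.3f}, R0={beta_hat/gamma_hat:.2f}")
```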
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211
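A hedged sketch of one of the estimation techniques named above, pooled time-series linear regression with country-clustered standard errors, run on a synthetic panel; the variable names and data-generating process are invented for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_waves = 30, 3
country = np.repeat(np.arange(n_countries), n_waves)
trust = rng.uniform(0.1, 0.6, n_countries)[country]          # generalized trust
gini = rng.uniform(25, 50, n_countries)[country] + rng.normal(0, 1, country.size)
homicide = 2 + 0.15 * gini - 4.0 * trust + rng.normal(0, 1, country.size)

df = pd.DataFrame({"country": country, "trust": trust,
                   "gini": gini, "homicide": homicide})

# Pooled OLS with standard errors clustered by country; swapping in
# C(country) dummies would give the fixed-effects variant
model = smf.ols("homicide ~ trust + gini", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.params)
```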
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF (scaling rates up from measurements to broader scales) is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates, indicating...
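A minimal sketch of the second approach (%Ndfa combined with growth rates), under the assumption that BNF equals the atmosphere-derived fraction of annual nitrogen accrual; all numbers are illustrative, not FIA values.

```python
def bnf_rate_kg_ha_yr(pct_ndfa, biomass_growth_kg_ha_yr, tissue_n_frac):
    """BNF = (%Ndfa / 100) x annual N accrual, where N accrual comes from
    inventory-derived biomass growth and tissue N content."""
    n_accrual = biomass_growth_kg_ha_yr * tissue_n_frac  # kg N ha^-1 yr^-1
    return (pct_ndfa / 100.0) * n_accrual

# Hypothetical stand of N-fixing trees: 60 %Ndfa, 4000 kg/ha/yr growth, 1% N
print(bnf_rate_kg_ha_yr(60, 4000, 0.01))  # -> 24.0 kg N ha^-1 yr^-1
```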
National land cover monitoring using large, permanent photo plots
Raymond L. Czaplewski; Glenn P. Catts; Paul W. Snook
1987-01-01
A study in the State of North Carolina, U.S.A. demonstrated that large, permanent photo plots (400 hectares) can be used to monitor large regions of land by using remote sensing techniques. Estimates of area in a variety of land cover categories were made by photointerpretation of medium-scale aerial photography from a single month using 111 photo plots. Many of these...
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one valuable type of information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
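For readers unfamiliar with the machinery, a minimal stochastic-EnKF analysis step in numpy is sketched below; the joint state layout, observation operator, ensemble size, and error levels are hypothetical, and the paper's actual implementation (coupled to an unsaturated flow solver) is not reproduced here.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic-EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error cov."""
    n_obs, n_ens = y.size, X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)          # P H^T estimated from the ensemble
    K = P_HT @ np.linalg.inv(H @ P_HT + R)      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)                  # update with perturbed observations

# Hypothetical joint state: [pressure heads..., log-conductivity parameters...]
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))                   # 10 components, 50 members
H = np.zeros((2, 10)); H[0, 0] = H[1, 3] = 1.0  # observe two state components
R = 0.01 * np.eye(2)
Xa = enkf_analysis(X, np.array([0.5, -0.2]), H, R, rng)
```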
NASA Astrophysics Data System (ADS)
Swann, A. L. S.; Koven, C.; Lombardozzi, D.; Bonan, G. B.
2017-12-01
Evapotranspiration (ET) is a critical term in the surface energy budget as well as the water cycle. There are few direct measurements of ET, and thus its magnitude and variability are poorly constrained at large spatial scales. Estimates of the annual cycle of ET over the Amazon are critical because they influence predictions of the seasonal cycle of carbon fluxes, as well as atmospheric dynamics and circulation. We estimate ET for the Amazon basin using a water budget approach, by differencing rainfall, discharge, and time-varying storage from the Gravity Recovery and Climate Experiment. We find that the climatological annual cycle of ET over the Amazon basin upstream of Óbidos shows suppression of ET during the wet season, and higher ET during the dry season, consistent with flux tower based observations in seasonally dry forests. We also find a statistically significant trend in ET of -1.46 mm/yr over the period 2002-2015. Our direct estimate of the seasonal cycle of ET is largely consistent with previous indirect estimates, including energy budget based approaches, an up-scaled station based estimate, and land surface model estimates, but suggests that suppression of ET during the wet season is underestimated by existing products. We further quantify possible contributors to the phasing of the seasonal cycle and the downward time trend using land surface models.
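A minimal sketch of the water budget approach, ET = P - R - dS/dt, with the storage term differenced as GRACE-style monthly anomalies; all numbers are invented placeholders, not Amazon data.

```python
import numpy as np

def et_water_budget(precip_mm, discharge_mm, storage_mm):
    """Monthly basin ET from the water budget: ET = P - R - dS/dt.
    storage_mm is a GRACE-style terrestrial water storage anomaly; its time
    derivative is taken with centred differences."""
    ds_dt = np.gradient(storage_mm)  # mm per month
    return precip_mm - discharge_mm - ds_dt

# Hypothetical monthly basin means (mm/month)
P = np.array([300, 280, 260, 200, 150, 100, 90, 110, 150, 220, 260, 290], float)
R = np.full(12, 90.0)
S = np.cumsum(P - R - 110.0)  # storage consistent with ET ~ 110 mm/month
print(et_water_budget(P, R, S).round(1))
```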
Cosmic string induced peculiar velocities
NASA Technical Reports Server (NTRS)
Van Dalen, Anthony; Schramm, David N.
1988-01-01
This paper considers the scenario of a flat universe with a network of heavy cosmic strings as the source of the primordial fluctuation spectrum. The joint probability of finding streaming velocities of at least 600 km/s on large scales and local peculiar velocities of less than 800 km/s is calculated. It is shown how the effects of loops breaking up and being born with a spectrum of sizes can be estimated. It is found that to obtain large-scale streaming velocities of at least 600 km/s, either a large value of βGμ is required or the effect of loop fissioning and production details must be considerable.
Dealing with Big Numbers: Representation and Understanding of Magnitudes outside of Human Experience
ERIC Educational Resources Information Center
Resnick, Ilyse; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Being able to estimate quantity is important in everyday life and for success in the STEM disciplines. However, people have difficulty reasoning about magnitudes outside of human perception (e.g., nanoseconds, geologic time). This study examines patterns of estimation errors across temporal and spatial magnitudes at large scales. We evaluated the…
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Considerations for interpreting probabilistic estimates of uncertainty of forest carbon
James E. Smith; Linda S. Heath
2000-01-01
Quantitative estimates of carbon inventories are needed as part of nationwide attempts to reduce net release of greenhouse gases and the associated climate forcing. Naturally, an appreciable amount of uncertainty is inherent in such large-scale assessments, especially since both science and policy issues are still evolving. Decision makers need an idea of the...
USDA-ARS?s Scientific Manuscript database
Many science questions in large-scale terrestrial ecology are concerned with changes in the Earth’s carbon cycle and ecosystems and the consequences for the Earth's carbon budget, ecosystem sustainability, and biodiversity [1]. To address these questions, we must know the distribution of aboveground...
While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...
Wechsler Adult Intelligence Scale-IV Dyads for Estimating Global Intelligence.
Girard, Todd A; Axelrod, Bradley N; Patel, Ronak; Crawford, John R
2015-08-01
All possible two-subtest combinations of the core Wechsler Adult Intelligence Scale-IV (WAIS-IV) subtests were evaluated as possible viable short forms for estimating full-scale IQ (FSIQ). Validity of the dyads was evaluated relative to FSIQ in a large clinical sample (N = 482) referred for neuropsychological assessment. Sample validity measures included correlations, mean discrepancies, and levels of agreement between dyad estimates and FSIQ scores. In addition, reliability and validity coefficients were derived from WAIS-IV standardization data. The Coding + Information dyad had the strongest combination of reliability and validity data. However, several other dyads yielded comparable psychometric performance, albeit with some variability in their particular strengths. We also observed heterogeneity between validity coefficients from the clinical and standardization-based estimates for several dyads. Thus, readers are encouraged to also consider the individual psychometric attributes, their clinical or research goals, and client or sample characteristics when selecting among the dyadic short forms. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Sharan, Nek; Matheou, Georgios; Dimotakis, Paul
2017-11-01
Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z, its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z². LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.
NASA Astrophysics Data System (ADS)
Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.
2017-12-01
Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool to remotely estimate terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to be dependent on plant functional types. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results were based on SCOPE, a complex ecophysiological model with explicit description of canopy layers and a large number of parameters that may not be easily obtained reliably on large scales. Here, we report on our efforts to incorporate SIF into a two-big-leaf (sun and shade) process-based model that is suitable for obtaining its inputs entirely from satellite products. We examine whether the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate whether incorporation of the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.
Cooke, Georgina M; Schlub, Timothy E; Sherwin, William B; Ord, Terry J
2016-01-01
Quantifying the spatial scale of population connectivity is important for understanding the evolutionary potential of ecologically divergent populations and for designing conservation strategies to preserve those populations. For marine organisms like fish, the spatial scale of connectivity is generally set by a pelagic larval phase. This has complicated past estimates of connectivity because detailed information on larval movements are difficult to obtain. Genetic approaches provide a tractable alternative and have the added benefit of estimating directly the reproductive isolation of populations. In this study, we leveraged empirical estimates of genetic differentiation among populations with simulations and a meta-analysis to provide a general estimate of the spatial scale of genetic connectivity in marine environments. We used neutral genetic markers to first quantify the genetic differentiation of ecologically-isolated adult populations of a land dwelling fish, the Pacific leaping blenny (Alticus arnoldorum), where marine larval dispersal is the only probable means of connectivity among populations. We then compared these estimates to simulations of a range of marine dispersal scenarios and to collated FST and distance data from the literature for marine fish across diverse spatial scales. We found genetic connectivity at sea was extensive among marine populations and in the case of A. arnoldorum, apparently little affected by the presence of ecological barriers. We estimated that ~5000 km (with broad confidence intervals ranging from 810-11,692 km) was the spatial scale at which evolutionarily meaningful barriers to gene flow start to occur at sea, although substantially shorter distances are also possible for some taxa. In general, however, such a large estimate of connectivity has important implications for the evolutionary and conservation potential of many marine fish communities.
NASA Technical Reports Server (NTRS)
Allen, N. C.
1978-01-01
Implementation of SOLARES will input large quantities of heat continuously into a stationary location on the Earth's surface. The quantity of heat released by each of the SOLARES ground receivers, having a reflector orbit height of 6378 km, exceeds by 30 times that released by the large power parks which were studied in detail. Using atmospheric models, estimates are presented for the local weather effects, the synoptic scale effects, and the global scale effects of such intense thermal radiation.
[Methods of high-throughput plant phenotyping for large-scale breeding and genetic experiments].
Afonnikov, D A; Genaev, M A; Doroshkov, A V; Komyshev, E G; Pshenichnikova, T A
2016-07-01
Phenomics is a field of science at the junction of biology and informatics that addresses the rapid, accurate estimation of plant phenotypes; it has developed rapidly because of the need to analyze phenotypic characteristics in large-scale genetic and breeding experiments in plants. It is based on the methods of computer image analysis and the integration of biological data. Owing to automation, new approaches make it possible to considerably accelerate the process of estimating the characteristics of a phenotype, to increase its accuracy, and to remove the subjectivity inherent in human assessment. The main technologies of high-throughput plant phenotyping in both controlled and field conditions, their advantages and disadvantages, and the prospects for their use in the efficient solution of problems of plant genetics and breeding are presented in this review.
NASA Astrophysics Data System (ADS)
Gruzdev, A. N.
2017-07-01
Using the data of the ERA-Interim reanalysis, we have obtained estimates of changes in temperature, the geopotential and its large-scale zonal harmonics, wind velocity, and potential vorticity in the troposphere and stratosphere of the Northern and Southern hemispheres during the 11-year solar cycle. The estimates have been obtained using the method of multiple linear regression. Specific features of response of the indicated atmospheric parameters to the solar cycle have been revealed in particular regions of the atmosphere for a whole year and depending on the season. The results of the analysis indicate the existence of a reliable statistical relationship of large-scale dynamic and thermodynamic processes in the troposphere and stratosphere with the 11-year solar cycle.
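A minimal sketch of the multiple-linear-regression machinery: regress a monthly series on a solar index alongside trend and seasonal terms, and read the solar-cycle response off the fitted coefficient. The idealized index, regressor set, and synthetic series are illustrative assumptions; the actual analysis uses ERA-Interim fields and a fuller regressor set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12 * 35                                   # 35 years of monthly data
t = np.arange(n) / 12.0                       # time in years
solar = np.sin(2 * np.pi * t / 11.0)          # idealized 11-yr solar index
temp = 0.3 * solar + 0.01 * t + 2 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, n)

# Design matrix: intercept, linear trend, annual harmonic, solar index
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     solar])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"estimated solar-cycle response: {beta[-1]:.3f} (true value 0.3)")
```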
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
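A hedged sketch of S-system parameter estimation in the same spirit (though not the PENDISC algorithm itself): integrate a small S-system with power-law kinetics and recover its rate constants from noisy time-series data by least squares. The pathway, exponents, and noise level are invented for illustration.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def s_system(x, t, a1, b1, b2):
    """Two-step pathway in S-system (power-law) form: constant influx with
    product inhibition (exponent -0.5), then X1 -> X2 -> sink."""
    x1, x2 = np.maximum(x, 1e-9)       # keep power laws well-defined
    v1 = a1 * x2 ** -0.5               # influx, inhibited by downstream X2
    v2 = b1 * x1 ** 0.8                # X1 -> X2
    v3 = b2 * x2 ** 0.6                # X2 -> sink
    return [v1 - v2, v2 - v3]

t = np.linspace(0, 10, 25)
true = odeint(s_system, [0.5, 0.5], t, args=(2.0, 1.5, 1.0))
data = true * np.random.default_rng(2).normal(1.0, 0.05, true.shape)  # noisy series

resid = lambda p: (odeint(s_system, [0.5, 0.5], t, args=tuple(p)) - data).ravel()
fit = least_squares(resid, x0=[1.0, 1.0, 1.0], bounds=(0, 10))
print(fit.x)  # recovered rate constants (alpha1, beta1, beta2)
```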
Review of the outer scale of the atmospheric turbulence
NASA Astrophysics Data System (ADS)
Ziad, Aziz
2016-07-01
The outer scale is a relevant parameter for the experimental performance evaluation of large telescopes. Different techniques have been used for outer scale estimation. In situ measurements with radiosounding balloons have given very small values of the outer scale. The outer scale has also been estimated directly at ground level from wavefront analysis with High Angular Resolution (HAR) techniques, using interferometric or Shack-Hartmann data or, more generally, data from AO systems. Dedicated instruments have also been developed for outer scale monitoring, such as the Generalized Seeing Monitor (GSM) and the Monitor of Outer Scale Profile (MOSP). The measured values of the outer scale from HAR techniques, GSM and MOSP are broadly consistent with one another and are larger than the in situ results. The main explanation of this difference comes from the definition of the outer scale itself. This paper gives a review, in a non-exhaustive way, of different techniques and instruments for the measurement of the outer scale. Comparisons of outer scale measurements are discussed in the light of the different definitions of this parameter, the associated observable quantities, and the atmospheric turbulence model used.
NASA Astrophysics Data System (ADS)
Lange, Benjamin A.; Katlein, Christian; Nicolaus, Marcel; Peeken, Ilka; Flores, Hauke
2016-12-01
Multiscale sea ice algae observations are fundamentally important for projecting changes to sea ice ecosystems, as the physical environment continues to change. In this study, we developed upon previously established methodologies for deriving sea ice-algal chlorophyll a concentrations (chl a) from spectral radiation measurements, and applied these to larger-scale spectral surveys. We conducted four different under-ice spectral measurements: irradiance, radiance, transmittance, and transflectance, and applied three statistical approaches: Empirical Orthogonal Functions (EOF), Normalized Difference Indices (NDI), and multi-NDI. We developed models based on ice core chl a and coincident spectral irradiance/transmittance (N = 49) and radiance/transflectance (N = 50) measurements conducted during two cruises to the central Arctic Ocean in 2011 and 2012. These reference models were ranked based on two criteria: mean robustness R2 and true prediction error estimates. For estimating the biomass of a large-scale data set, the EOF approach performed better than the NDI, due to its ability to account for the high variability of environmental properties experienced over large areas. Based on robustness and true prediction error, the three most reliable models, EOF-transmittance, EOF-transflectance, and NDI-transmittance, were applied to two remotely operated vehicle (ROV) and two Surface and Under-Ice Trawl (SUIT) spectral radiation surveys. In these larger-scale chl a estimates, EOF-transmittance showed the best fit to ice core chl a. Application of our most reliable model, EOF-transmittance, to an 85 m horizontal ROV transect revealed large differences compared to published biomass estimates from the same site with important implications for projections of Arctic-wide ice-algal biomass and primary production.
Estimated generic prices for novel treatments for drug-resistant tuberculosis.
Gotham, Dzintars; Fortunak, Joseph; Pozniak, Anton; Khoo, Saye; Cooke, Graham; Nytko, Frederick E; Hill, Andrew
2017-04-01
The estimated worldwide annual incidence of MDR-TB is 480,000, representing 5% of TB incidence but 20% of mortality. Multiple drugs have recently been developed or repurposed for the treatment of MDR-TB. Currently, treatment for MDR-TB costs thousands of dollars per course. Our objective was to estimate generic prices for novel TB drugs that would be achievable given large-scale competitive manufacture. Prices for linezolid, moxifloxacin and clofazimine were estimated based on per-kilogram prices of the active pharmaceutical ingredient (API). Other costs were added, including formulation, packaging and a profit margin. The projected costs for sutezolid were estimated to be equivalent to those for linezolid, based on chemical similarity. Generic prices for bedaquiline, delamanid and pretomanid were estimated by assessing routes of synthesis, per-kilogram costs of chemical reagents and per-step yields. Costing algorithms reflected variable regulatory requirements and efficiency of scale based on demand, and were validated by testing predictive ability against widely available TB medicines. Estimated generic prices were US$8-$17/month for bedaquiline, $5-$16/month for delamanid, $11-$34/month for pretomanid, $4-$9/month for linezolid, $4-$9/month for sutezolid, $4-$11/month for clofazimine and $4-$8/month for moxifloxacin. The estimated generic prices were 87%-94% lower than the current lowest available prices for bedaquiline, 95%-98% for delamanid and 94%-97% for linezolid. Estimated generic prices were $168-$395 per course for the STREAM trial modified Bangladesh regimens (current costs $734-$1799), $53-$276 for pretomanid-based three-drug regimens and $238-$507 for a delamanid-based four-drug regimen. Competitive large-scale generic manufacture could allow supplies of treatment for 5-10 times more MDR-TB cases within current procurement budgets. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
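A minimal sketch of the cost build-up logic described above: per-tablet API cost from the per-kilogram API price and daily dose, plus formulation, packaging, and a profit margin. The function and all numbers are illustrative assumptions, not the paper's validated costing algorithm.

```python
def monthly_generic_price(api_usd_per_kg, mg_per_day, formulation_usd_per_day=0.01,
                          margin=0.10, days=30):
    """Cost build-up: API cost per day (kg price scaled to the mg dose),
    plus per-day formulation/packaging cost, plus a profit margin."""
    api_cost_per_day = api_usd_per_kg * mg_per_day / 1e6   # kg -> mg conversion
    cost = (api_cost_per_day + formulation_usd_per_day) * days
    return cost * (1 + margin)

# e.g. a hypothetical drug: API at $200/kg, 600 mg daily dose
print(f"${monthly_generic_price(200, 600):.2f} per month")  # -> $4.29
```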
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abreu, P.; /Lisbon, IST; Aglietta, M.
2011-11-01
We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with a zenith angle smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays up to the ≈2% level at large zenith angles. We present a method to account for this modulation of the reconstructed energy. We analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for. In this work, we have identified and quantified a systematic uncertainty affecting the energy determination of cosmic rays detected by the surface detector array of the Pierre Auger Observatory. This systematic uncertainty, induced by the influence of the geomagnetic field on the shower development, has a strength which depends on both the zenith and azimuthal angles. Consequently, we have shown that it induces distortions of the estimated cosmic ray event rate at a given energy at the percent level in both the azimuthal and the declination distributions, the latter of which mimics an almost dipolar pattern. We have also shown that the induced distortions are already at the level of the statistical uncertainties for a number of events N ≈ 32,000 (we note that the full Auger surface detector array collects about 6500 events per year with energies above 3 EeV). Accounting for these effects is thus essential for the correct interpretation of large scale anisotropy measurements that explicitly exploit the declination distribution.
Impact phenomena as factors in the evolution of the Earth
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Parmentier, E. M.
1984-01-01
It is estimated that 30 to 200 large impact basins could have been formed on the early Earth. These large impacts may have resulted in extensive volcanism and enhanced endogenic geologic activity over large areas. Initial modelling of the thermal and subsidence history of large terrestrial basins indicates that they created geologic and thermal anomalies which lasted for geologically significant times. The role of large-scale impact in the biological evolution of the Earth has been highlighted by the discovery of siderophile anomalies at the Cretaceous-Tertiary boundary and associated with North American microtektites. Although in neither case has an associated crater been identified, the observations are consistent with the deposition of projectile-contaminated high-speed ejecta from major impact events. Consideration of impact processes reveals a number of mechanisms by which large-scale impact may induce extinctions.
Food waste impact on municipal solid waste angle of internal friction.
Cho, Young Min; Ko, Jae Hac; Chi, Liqun; Townsend, Timothy G
2011-01-01
The impact of food waste content on the municipal solid waste (MSW) friction angle was studied. Using reconstituted fresh MSW specimens with different food waste content (0%, 40%, 58%, and 80%), 48 small-scale (100-mm-diameter) direct shear tests and 12 large-scale (430 mm × 430 mm) direct shear tests were performed. A stress-controlled large-scale direct shear test device allowing approximately 170-mm sample horizontal displacement was designed and used. At both testing scales, the mobilized internal friction angle of MSW decreased considerably as food waste content increased. As food waste content increased from 0% to 40% and from 40% to 80%, the mobilized internal friction angles (estimated using the mobilized peak (ultimate) shear strengths of the small-scale direct shear tests) decreased from 39° to 31° and from 31° to 7°, respectively, while those of large-scale tests decreased from 36° to 26° and from 26° to 15°, respectively. Most friction angle measurements produced in this study fell within the range of those previously reported for MSW. Copyright © 2010 Elsevier Ltd. All rights reserved.
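A minimal sketch of how a mobilized friction angle is obtained from direct shear data, assuming a cohesionless Mohr-Coulomb envelope fitted through the peak shear strengths; the stress values are invented, but chosen to echo the reported 39° and 7° endpoints.

```python
import numpy as np

def friction_angle_deg(normal_stress_kpa, peak_shear_kpa):
    """Mobilized internal friction angle from direct shear tests, assuming a
    cohesionless Mohr-Coulomb envelope: tau = sigma_n * tan(phi)."""
    slope, _ = np.polyfit(normal_stress_kpa, peak_shear_kpa, 1)
    return np.degrees(np.arctan(slope))

# Hypothetical peak shear strengths at three normal stresses (kPa)
sigma_n = np.array([50.0, 100.0, 200.0])
tau_0pct = np.array([40.0, 81.0, 162.0])    # 0% food waste content
tau_80pct = np.array([6.0, 12.5, 24.0])     # 80% food waste content
print(friction_angle_deg(sigma_n, tau_0pct))   # ~39 degrees
print(friction_angle_deg(sigma_n, tau_80pct))  # ~7 degrees
```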
Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale
NASA Astrophysics Data System (ADS)
Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.
2015-09-01
The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied under the assumption that in situ soil moisture measurements are unavailable, by using data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) equal to ~0.35 and a root mean square error (RMSE) of ~2.8 Mg ha⁻¹. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
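A hedged sketch of the modelling idea: a USLE-type event soil loss in which antecedent soil moisture scales the event erosivity term. The functional form, exponent, and factor values are illustrative assumptions, not the calibrated SM4E model.

```python
def event_soil_loss(rainfall_erosivity, soil_moisture, k, ls, c, p, alpha=1.0):
    """USLE-type event estimate A = R* K LS C P, where the event erosivity R*
    is modulated by antecedent soil moisture (wetter soil -> more runoff)."""
    erosivity = rainfall_erosivity * (soil_moisture ** alpha)
    return erosivity * k * ls * c * p  # Mg ha^-1

# Illustrative event: moderately erosive storm on wet soil
print(event_soil_loss(rainfall_erosivity=120.0, soil_moisture=0.7,
                      k=0.03, ls=1.2, c=0.5, p=1.0))  # ~1.5 Mg ha^-1
```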
Sampling scales define occupancy and underlying occupancy-abundance relationships in animals.
Steenweg, Robin; Hebblewhite, Mark; Whittington, Jesse; Lukacs, Paul; McKelvey, Kevin
2018-01-01
Occupancy-abundance (OA) relationships are a foundational ecological phenomenon and field of study, and occupancy models are increasingly used to track population trends and understand ecological interactions. However, these two fields of ecological inquiry remain largely isolated, despite growing appreciation of the importance of integration. For example, using occupancy models to infer trends in abundance is predicated on positive OA relationships. Many occupancy studies collect data that violate geographical closure assumptions due to the choice of sampling scales and application to mobile organisms, which may change how occupancy and abundance are related. Little research, however, has explored how different occupancy sampling designs affect OA relationships. We develop a conceptual framework for understanding how sampling scales affect the definition of occupancy for mobile organisms, which drives OA relationships. We explore how spatial and temporal sampling scales, and the choice of sampling unit (areal vs. point sampling), affect OA relationships. We develop predictions using simulations, and test them using empirical occupancy data from remote cameras on 11 medium-large mammals. Surprisingly, our simulations demonstrate that when using point sampling, OA relationships are unaffected by spatial sampling grain (i.e., cell size). In contrast, when using areal sampling (e.g., species atlas data), OA relationships are affected by spatial grain. Furthermore, OA relationships are also affected by temporal sampling scales, where the curvature of the OA relationship increases with temporal sampling duration. Our empirical results support these predictions, showing that at any given abundance, the spatial grain of point sampling does not affect occupancy estimates, but longer surveys do increase occupancy estimates. For rare species (low occupancy), estimates of occupancy will quickly increase with longer surveys, even while abundance remains constant. Our results also clearly demonstrate that occupancy for mobile species without geographical closure is not true occupancy. The independence of occupancy estimates from spatial sampling grain depends on the sampling unit. Point-sampling surveys can, however, provide unbiased estimates of occupancy for multiple species simultaneously, irrespective of home-range size. The use of occupancy for trend monitoring needs to explicitly articulate how the chosen sampling scales define occupancy and affect the occupancy-abundance relationship. © 2017 by the Ecological Society of America.
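A minimal simulation in the spirit of the point-sampling case: animals are reduced to circular home ranges, a "camera point" is occupied if any home range covers it, and the occupancy-abundance curve follows by Monte Carlo. The extent, home-range radius, and sample sizes are arbitrary choices, not the paper's simulation design.

```python
import numpy as np

def point_occupancy(n_animals, hr_radius, n_points=2000, extent=100.0, seed=0):
    """Fraction of random sample points covered by at least one circular home
    range: a point-sampling analogue of camera-trap occupancy."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, extent, size=(n_animals, 2))   # home-range centers
    points = rng.uniform(0, extent, size=(n_points, 2))     # sampling points
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return (d2.min(axis=1) <= hr_radius ** 2).mean()

# Occupancy-abundance curve (home-range radius 5 on a 100 x 100 landscape):
# occupancy rises and saturates as abundance grows
for n in (5, 20, 80, 320):
    print(n, round(point_occupancy(n, hr_radius=5.0), 3))
```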
NASA Technical Reports Server (NTRS)
Kim, Seung-Bum; Lee, Tong; Fukumori, Ichiro
2007-01-01
Processes controlling the interannual variation of mixed layer temperature (MLT) averaged over the Nino-3 domain (5°N-5°S, 150°-90°W) in the eastern equatorial Pacific are studied using an ocean data assimilation product that covers the period 1993-2003. The overall balance is such that surface heat flux opposes the MLT change but horizontal advection and subsurface processes assist the change. Advective tendencies are estimated here as the temperature fluxes through the domain's boundaries, with the boundary temperature referenced to the domain-averaged temperature to remove the dependence on temperature scale. This allows the authors to characterize external advective processes that warm or cool the water within the domain as a whole. The zonal advective tendency is caused primarily by large-scale advection of warm-pool water through the western boundary of the domain. The meridional advective tendency is contributed to mostly by Ekman currents advecting large-scale temperature anomalies through the southern boundary of the domain. Unlike many previous studies, the subsurface processes, which consist of vertical mixing and entrainment, are explicitly evaluated. In particular, a rigorous method to estimate entrainment allows an exact budget closure. The vertical mixing across the mixed layer (ML) base has a contribution in phase with the MLT change. The entrainment tendency due to the temporal change in ML depth is negligible compared to other subsurface processes. The entrainment tendency by vertical advection across the ML base is dominated by large-scale changes in upwelling and the temperature of upwelling water. Tropical instability waves (TIWs) result in smaller-scale vertical advection that warms the domain during La Nina cooling events. However, such a warming tendency is overwhelmed by the cooling tendency associated with the large-scale upwelling by a factor of 2. In summary, all the balance terms are important in the MLT budget except the entrainment due to lateral induction and temporal variation in ML depth. All three advective tendencies are primarily caused by large-scale and low-frequency processes, and they assist the Nino-3 MLT change.
NASA Astrophysics Data System (ADS)
Aliseda, Alberto; Bourgoin, Mickael; Eswirp Collaboration
2014-11-01
We present preliminary results from a recent grid turbulence experiment conducted at the ONERA wind tunnel in Modane, France. The ESWIRP Collaboration was conceived to probe the smallest scales of a canonical turbulent flow with very high Reynolds numbers. To achieve this, the largest scales of the turbulence need to be extremely big so that, even with the large separation of scales, the smallest scales would be well above the spatial and temporal resolution of the instruments. The ONERA wind tunnel in Modane (8-m-diameter test section) was chosen as a limit of the biggest large scales achievable in a laboratory setting. A giant inflatable grid (M = 0.8 m) was conceived to induce slowly-decaying homogeneous isotropic turbulence in a large region of the test section, with minimal structural risk. An international team of researchers collected hot wire anemometry, ultrasound anemometry, resonant cantilever anemometry, fast pitot tube anemometry, cold wire thermometry and high-speed particle tracking data of this canonical turbulent flow. While analysis of this large database, which will become publicly available over the next 2 years, has only started, the Taylor-scale Reynolds number is estimated to be between 400 and 800, with Kolmogorov scales as large as a few mm. The ESWIRP Collaboration is an international team of scientists formed to investigate experimentally the smallest scales of turbulence. It was funded by the European Union to take advantage of the largest wind tunnel in Europe for fundamental research.
Miller, Matthew P.; Johnson, Henry M.; Susong, David D.; Wolock, David M.
2015-01-01
Understanding how watershed characteristics and climate influence the baseflow component of stream discharge is a topic of interest to both the scientific and water management communities. Therefore, the development of baseflow estimation methods is a topic of active research. Previous studies have demonstrated that graphical hydrograph separation (GHS) and conductivity mass balance (CMB) methods can be applied to stream discharge data to estimate daily baseflow. While CMB is generally considered to be a more objective approach than GHS, its application across broad spatial scales is limited by a lack of high frequency specific conductance (SC) data. We propose a new method that uses discrete SC data, which are widely available, to estimate baseflow at a daily time step using the CMB method. The proposed approach involves the development of regression models that relate discrete SC concentrations to stream discharge and time. Regression-derived CMB baseflow estimates were more similar to baseflow estimates obtained using a CMB approach with measured high frequency SC data than were the GHS baseflow estimates at twelve snowmelt dominated streams and rivers. There was a near perfect fit between the regression-derived and measured CMB baseflow estimates at sites where the regression models were able to accurately predict daily SC concentrations. We propose that the regression-derived approach could be applied to estimate baseflow at large numbers of sites, thereby enabling future investigations of watershed and climatic characteristics that influence the baseflow component of stream discharge across large spatial scales.
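A minimal sketch of the proposed regression-derived CMB method: fit discrete SC samples against log-discharge and time, predict daily SC, then apply the conductivity mass balance. The end-member values and the regression form are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

def cmb_baseflow(q, sc, sc_baseflow, sc_runoff):
    """Conductivity mass balance: baseflow = Q * (SC - SC_RO) / (SC_BF - SC_RO),
    clipped to the physically meaningful range [0, Q]."""
    frac = (sc - sc_runoff) / (sc_baseflow - sc_runoff)
    return q * np.clip(frac, 0.0, 1.0)

# Regress discrete SC samples on log-discharge and time, then predict daily SC
rng = np.random.default_rng(0)
q_daily = np.exp(rng.normal(3.0, 0.6, 365))            # daily discharge
idx = rng.choice(365, size=24, replace=False)          # ~monthly grab samples
sc_samples = 500 - 90 * np.log(q_daily[idx]) + rng.normal(0, 10, 24)

X = np.column_stack([np.ones(24), np.log(q_daily[idx]), idx / 365.0])
beta, *_ = np.linalg.lstsq(X, sc_samples, rcond=None)
Xd = np.column_stack([np.ones(365), np.log(q_daily), np.arange(365) / 365.0])
sc_daily = Xd @ beta                                   # regression-derived daily SC

baseflow = cmb_baseflow(q_daily, sc_daily, sc_baseflow=400.0, sc_runoff=50.0)
```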
NASA Astrophysics Data System (ADS)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices; these error statistics are propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered, based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/conjugate-gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems, though, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, a quasi-Newton descent algorithm, is usually considered the best choice for the minimization. Suitable for large-scale optimization, this method generates an approximation to the inverse Hessian using the latest m vector/gradient pairs produced during the minimization, with m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners in estimating the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will first be presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
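For concreteness, the L-BFGS two-loop recursion with a diagonal initial inverse Hessian H0 (the preconditioner being assessed above) looks as follows; this is a generic textbook sketch, not the authors' 4D-Var implementation, and the toy quadratic used to generate the (s, y) pairs is invented.

```python
import numpy as np

def lbfgs_inverse_hessian_apply(grad, s_list, y_list, d0):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian approximation,
    built from stored (s, y) pairs, to a vector. d0 is the diagonal
    preconditioner used as the initial inverse-Hessian estimate H0."""
    q = grad.copy()
    stack = []
    for s, y in reversed(list(zip(s_list, y_list))):  # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    r = d0 * q                                        # r = H0 q with diagonal H0
    for a, rho, s, y in reversed(stack):              # oldest pair first
        b = rho * (y @ r)
        r += (a - b) * s
    return r                                          # approximately H^{-1} grad

# Toy usage: pairs from a quadratic f(x) = 0.5 x^T A x, so y = A s exactly
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, 8))
xs = [rng.normal(size=8) for _ in range(5)]
s_list = [xs[i + 1] - xs[i] for i in range(4)]
y_list = [A @ s for s in s_list]
d0 = np.ones(8)   # e.g. a randomization-based analysis-error variance estimate
step = lbfgs_inverse_hessian_apply(A @ xs[-1], s_list, y_list, d0)
```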
NASA Astrophysics Data System (ADS)
Wheeler, C. E.; Mitchard, E. T.; Lewis, S. L.
2017-12-01
Restoring degraded and deforested tropical lands to sequester carbon is widely considered to offer substantial climate change mitigation opportunities, if conducted over large spatial scales. Despite this assertion, explicit estimates of how much carbon could be sequestered because of large-scale restoration are rare and have large uncertainties. This is principally due to the many different characteristics of land available for restoration, and different potential restoration activities, which together cause very different rates of carbon sequestration. For different restoration pathways: natural regeneration of degraded and secondary forest, timber plantations and agroforestry, we estimate carbon sequestration rates from the published literature. Then based on tropical restoration commitments made under the Bonn challenge and using carbon density maps, these carbon sequestration rates were used to predict total pan-tropical carbon sequestration to 2100. Restoration of degraded or secondary forest via natural regeneration offers the greatest carbon sequestration potential, considerably exceeding the carbon captured by either timber plantations or agroforestry. This is predominantly due to naturally regenerating forests representing a more permanent store of carbon in comparison to timber plantations and agroforestry land-use options, which, due to their rotational nature, result in the sequential return of carbon to the atmosphere. If the Bonn Challenge is to achieve its ambition of providing substantial climate change mitigation from restoration it must incorporate large areas of natural regeneration back to an intact forest state, otherwise it stands to be a missed opportunity in helping meet the Paris climate change goals.
Locatelli, R.; Bousquet, P.; Chevallier, F.; ...
2013-10-08
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr⁻¹ at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr⁻¹ in North America to 7 Tg yr⁻¹ in Boreal Eurasia (from 23 to 48%, respectively). At the model grid scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations seriously question the consistency of transport model errors in current inverse systems.
Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...
2016-06-09
Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle these difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs per iteration than the numbers of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software, with independent, parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sandbox using about 6 million transient tracer concentration measurements obtained by magnetic resonance imaging. Since each individual observation carries little information on the K distribution, the data were compressed using the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2,000 forward simulations in total were required to obtain the best estimate with its corresponding estimation uncertainty, and the estimated K field captured the key patterns of the original packing design, demonstrating the efficiency and effectiveness of the proposed method.
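A minimal sketch of the "Jacobian-free" ingredient that makes PCGA cheap: the method only ever needs the action of the forward model's Jacobian on the K leading principal components of the prior covariance, and each action can be approximated with a single extra forward run. Function and argument names here are illustrative, not the authors' implementation:

import numpy as np

def jacobian_actions(forward, s0, Z, rel_eps=1e-4):
    """One finite-difference forward run per low-rank basis vector.

    forward: black-box simulator mapping a parameter vector to predicted data
    s0:      current parameter estimate, shape (n_params,)
    Z:       (n_params, K) factor of the low-rank prior covariance, Q ~ Z @ Z.T
    """
    y0 = forward(s0)
    cols = []
    for k in range(Z.shape[1]):
        dz = Z[:, k]
        # scale the perturbation relative to the size of the current estimate
        delta = rel_eps * (np.linalg.norm(s0) + 1.0) / max(np.linalg.norm(dz), 1e-12)
        cols.append((forward(s0 + delta * dz) - y0) / delta)
    return np.column_stack(cols)  # (n_obs, K): the K Jacobian actions

Each of the K + 1 forward runs per iteration is independent, which is what permits the "independent parallel executions" and the roughly 2,000-run total reported above.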
Cluster galaxy dynamics and the effects of large-scale environment
NASA Astrophysics Data System (ADS)
White, Martin; Cohn, J. D.; Smit, Renske
2010-11-01
Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations, showing that the strong correlation of measures with mass and the large scatter in mass at fixed observable mitigate line-of-sight projections.
Russell, Matthew B.; D'Amato, Anthony W.; Schulz, Bethany K.; Woodall, Christopher W.; Domke, Grant M.; Bradford, John B.
2014-01-01
The contribution of understorey vegetation (UVEG) to forest ecosystem biomass and carbon (C) across diverse forest types has, to date, eluded quantification at regional and national scales. Efforts to quantify UVEG C have been limited to field-intensive studies or broad-scale modelling approaches lacking field measurements. Although large-scale inventories of UVEG C are not common, species- and community-level inventories of vegetation structure are available and may prove useful in quantifying UVEG C stocks. This analysis developed a general framework for estimating UVEG C stocks by employing per cent cover estimates of UVEG from a region-wide forest inventory coupled with an estimate of maximum UVEG C across the US Lake States (i.e. Michigan, Minnesota and Wisconsin). Estimates of UVEG C stocks from this approach align reasonably well with expected C stocks in the study region, ranging from 0.86 ± 0.06 Mg ha-1 in red pine-dominated to 1.59 ± 0.06 Mg ha-1 in aspen/birch-dominated forest types. Although the data employed here were originally collected to assess broad-scale forest structure and diversity, this study proposes a framework for using UVEG inventories as a foundation for estimating C stocks in an often overlooked, yet important, ecosystem C pool.
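At its core, the framework scales inventoried per cent cover by an estimate of maximum UVEG C; the sketch below is a hedged reading of that idea, with an invented c_max value purely for illustration:

def understorey_carbon(percent_cover, c_max):
    """Understorey carbon stock (Mg ha-1) as per cent cover scaled by an
    assumed regional maximum understorey carbon stock c_max (Mg ha-1)."""
    return (percent_cover / 100.0) * c_max

# e.g., a plot with 60% understorey cover and an assumed 2.0 Mg ha-1 maximum:
# understorey_carbon(60, 2.0) -> 1.2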
RENEB - Running the European Network of biological dosimetry and physical retrospective dosimetry.
Kulka, Ulrike; Abend, Michael; Ainsbury, Elizabeth; Badie, Christophe; Barquinero, Joan Francesc; Barrios, Lleonard; Beinke, Christina; Bortolin, Emanuela; Cucu, Alexandra; De Amicis, Andrea; Domínguez, Inmaculada; Fattibene, Paola; Frøvig, Anne Marie; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Jaworska, Alicja; Kriehuber, Ralf; Lindholm, Carita; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Meschini, Roberta; Mörtl, Simone; Della Monaca, Sara; Monteiro Gil, Octávia; Montoro, Alegria; Moquet, Jayne; Moreno, Mercedes; Oestreicher, Ursula; Palitti, Fabrizio; Pantelias, Gabriel; Patrono, Clarice; Piqueret-Stephan, Laure; Port, Matthias; Prieto, María Jesus; Quintens, Roel; Ricoul, Michelle; Romm, Horst; Roy, Laurence; Sáfrány, Géza; Sabatier, Laure; Sebastià, Natividad; Sommer, Sylwester; Terzoudi, Georgia; Testa, Antonella; Thierens, Hubert; Turai, Istvan; Trompier, François; Valente, Marco; Vaz, Pedro; Voisin, Philippe; Vral, Anne; Woda, Clemens; Zafiropoulos, Demetre; Wojcik, Andrzej
2017-01-01
A European network was initiated in 2012 by 23 partners from 16 European countries with the aim to significantly increase individualized dose reconstruction in case of large-scale radiological emergency scenarios. The network was built on three complementary pillars: (1) an operational basis with seven biological and physical dosimetric assays in ready-to-use mode, (2) a basis for education, training and quality assurance, and (3) a basis for further network development regarding new techniques and members. Techniques for individual dose estimation based on biological samples and/or inert personalized devices such as mobile phones or smartphones were optimized to support rapid categorization of many potential victims according to the dose received by the blood or by personal devices. Communication and cross-border collaboration were also standardized. To assure long-term sustainability of the network, cooperation with national and international emergency preparedness organizations was initiated and links to radiation protection and research platforms have been developed. A legal framework, based on a Memorandum of Understanding, was established and signed by 27 organizations by the end of 2015. RENEB is a European network of biological and physical retrospective dosimetry with the capacity and capability to perform rapid, large-scale individualized dose estimation. Specialized in handling large numbers of samples, RENEB is able to contribute to radiological emergency preparedness and wider large-scale research projects.
Predicting groundwater recharge for varying land cover and climate conditions - a global meta-study
NASA Astrophysics Data System (ADS)
Mohan, Chinchu; Western, Andrew W.; Wei, Yongping; Saft, Margarita
2018-05-01
Groundwater recharge is one of the important factors determining the groundwater development potential of an area. Even though recharge plays a key role in controlling groundwater system dynamics, much uncertainty remains regarding the relationships between groundwater recharge and its governing factors at a large scale. Therefore, this study aims to identify the most influential factors of groundwater recharge, and to develop an empirical model to estimate diffuse rainfall recharge at a global scale. Recharge estimates reported in the literature from various parts of the world (715 sites) were compiled and used in model building and testing exercises. Unlike conventional recharge estimates from water balance, this study used a multimodel inference approach and information theory to explain the relationship between groundwater recharge and influential factors, and to predict groundwater recharge at 0.5° resolution. The results show that meteorological factors (precipitation and potential evapotranspiration) and vegetation factors (land use and land cover) had the most predictive power for recharge. According to the model, long-term global average annual recharge (1981-2014) was 134 mm yr-1 with a prediction error ranging from -8 to 10 mm yr-1 for 97.2 % of cases. The recharge estimates presented in this study are unique and more reliable than the existing global groundwater recharge estimates because of the extensive validation carried out using both independent local estimates collated from the literature and national statistics from the Food and Agriculture Organization (FAO). In a water-scarce future driven by increased anthropogenic development, the results from this study will aid in making informed decisions about groundwater potential at a large scale.
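The "multimodel inference approach and information theory" mentioned above usually denotes Akaike-weighted model averaging; the sketch below shows that mechanic under this assumption (the study's actual candidate models and predictors are not reproduced here):

import numpy as np

def akaike_weights(aic_scores):
    # Relative likelihoods from AIC differences, normalized to sum to one
    delta = np.array(aic_scores, dtype=float)
    delta -= delta.min()
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

def averaged_recharge(predictions, aic_scores):
    # AIC-weighted average of candidate-model recharge predictions (mm/yr)
    return float(np.dot(akaike_weights(aic_scores), predictions))

# e.g., three candidate models predicting 120, 150 and 90 mm/yr:
# averaged_recharge([120.0, 150.0, 90.0], [210.3, 212.1, 215.7])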
Assessing estimation techniques for missing plot observations in the U.S. forest inventory
Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield
2012-01-01
The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...
John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan
2010-01-01
Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...
Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering
NASA Astrophysics Data System (ADS)
Bromley, B. C.; Warren, M. S.; Zurek, W. H.
1997-02-01
We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4/-0.3) at a scale of 1000 km s-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
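In LaTeX, the quantities in play are the growth parameter and a damped Kaiser-type model of the redshift-space power spectrum; the Lorentzian damping below is one common choice in this family of models, not necessarily the exact form adopted in the paper:

\beta \equiv \frac{\Omega^{0.6}}{b}, \qquad P_s(k,\mu) \simeq P_r(k)\,\frac{(1+\beta\mu^2)^2}{1 + k^2\mu^2\sigma_v^2/2},

where μ is the cosine of the angle between the wavevector and the line of sight, and σ_v is the small-scale pairwise velocity dispersion whose fingers-of-God damping the estimator is designed to excise.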
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as the nonlinear effects of scaling those parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input-data resolution, and then scales them with scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models have typically relied on previous studies or been derived in an ad hoc, heuristic manner, potentially failing to exploit the full information content of the geophysical attributes for parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
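A schematic of the two MPR stages under an assumed linear transfer function; identifying the real functional forms and predictors is precisely the open problem the study addresses:

import numpy as np

def mpr_parameter(attributes, coeffs, upscale=np.mean):
    """Two-stage MPR sketch: (1) a transfer function maps native-resolution
    geophysical attributes to a fine-scale parameter field; (2) a scaling
    function aggregates it to the model grid. The linear TF and the
    arithmetic-mean upscaling are illustrative assumptions only."""
    fine_scale = coeffs["intercept"] + sum(
        coeffs[name] * field for name, field in attributes.items()
    )
    return upscale(fine_scale)

# e.g., a porosity-like parameter from sand/clay fraction rasters:
# mpr_parameter({"sand": sand, "clay": clay},
#               {"intercept": 0.4, "sand": -0.1, "clay": 0.2})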
Theodore Weller
2008-01-01
Regional conservation plans are increasingly used to plan for and protect biodiversity at large spatial scales; however, the means of quantitatively evaluating their effectiveness are rarely specified. Multiple-species approaches, particularly those which employ site-occupancy estimation, have been proposed as robust and efficient alternatives for assessing the status of...
An eigenfunction method for reconstruction of large-scale and high-contrast objects.
Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P
2007-07-01
A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.
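Schematically (our notation, not the paper's), the extension images the residual scattering potential as an expansion in products of fields:

\delta V(\mathbf{r}) = V(\mathbf{r}) - \hat{V}(\mathbf{r}) \approx \sum_{m,n} c_{mn}\, \psi_m(\mathbf{r})\, \psi_n(\mathbf{r}),

where \hat{V} is the potential of the current object estimate and the fields ψ derive from eigenfunctions of the scattering operator associated with that estimate. A radial estimate introduces symmetries among the coefficients c_{mn}, which is the source of the computational savings noted above.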
D.J. Hayes; W.B. Cohen
2006-01-01
This article describes the development of a methodology for scaling observations of changes in tropical forest cover to large areas at high temporal frequency from coarse-resolution satellite imagery. The approach for estimating proportional forest cover change as a continuous variable is based on a regression model that relates multispectral, multitemporal Moderate...
ERIC Educational Resources Information Center
Carbon, Claus-Christian
2010-01-01
Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A…
Comparison of WAIS-III Short Forms for Measuring Index and Full-Scale Scores
ERIC Educational Resources Information Center
Girard, Todd A.; Axelrod, Bradley N.; Wilkins, Leanne K.
2010-01-01
This investigation assessed the ability of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) short forms to estimate both index and IQ scores in a large, mixed clinical sample (N = 809). More specifically, a commonly used modification of Ward's seven-subtest short form (SF7-A), a recently proposed index-based SF7-C and eight-subtest…
Identification and measurement of shrub type vegetation on large scale aerial photography
NASA Technical Reports Server (NTRS)
Driscoll, R. S.
1970-01-01
Important range-shrub species were identified at acceptable levels of accuracy on large-scale 70 mm color and color infrared aerial photographs. Identification of individual shrubs was significantly higher, however, on color infrared. Photoscales smaller than 1:2400 had limited value except for mature individuals of relatively tall species, and then only if crown margins did not overlap and sharp contrast was evident between the species and background. Larger scale photos were required for low-growing species in dense stands. The crown cover for individual species was estimated from the aerial photos either with a measuring magnifier or a projected-scale micrometer. These crown cover measurements provide techniques for earth-resource analyses when used in conjunction with space and high-altitude remotely procured photos.
Gormley, Andrew M.; Forsyth, David M.; Wright, Elaine F.; Lyall, John; Elliott, Mike; Martini, Mark; Kappers, Benno; Perry, Mike; McKay, Meredith
2015-01-01
There is interest in large-scale and unbiased monitoring of biodiversity status and trend, but there are few published examples of such monitoring being implemented. The New Zealand Department of Conservation is implementing a monitoring program that involves sampling selected biota at the vertices of an 8-km grid superimposed over the 8.6 million hectares of public conservation land that it manages. The introduced brushtail possum (Trichosurus vulpecula) is a major threat to some biota and is one taxon that they wish to monitor and report on. A pilot study revealed that the traditional method of monitoring possums using leg-hold traps set for two nights, termed the Trap Catch Index, was a constraint on the cost and logistical feasibility of the monitoring program. A phased implementation of the monitoring program was therefore conducted to collect data for evaluating the trade-off between possum occupancy–abundance estimates and the costs of sampling for one night rather than two nights. Reducing trapping effort from two nights to one night along four trap-lines reduced the estimated costs of monitoring by 5.8% due to savings in labour, food and allowances; it had a negligible effect on estimated national possum occupancy but resulted in slightly higher and less precise estimates of relative possum abundance. Monitoring possums for one night rather than two nights would provide an annual saving of NZ$72,400, with 271 fewer field days required for sampling. Possums occupied 60% (95% credible interval: 53–68) of sampling locations on New Zealand’s public conservation land, with a mean relative abundance (Trap Catch Index) of 2.7% (2.0–3.5). Possum occupancy and abundance were higher in forest than in non-forest habitats. Our case study illustrates the need to evaluate relationships between sampling design, cost, and occupancy–abundance estimates when designing and implementing large-scale occupancy–abundance monitoring programs.
Large-scale dark diversity estimates: new perspectives with combined methods.
Ronk, Argo; de Bello, Francesco; Fibich, Pavel; Pärtel, Meelis
2016-09-01
Large-scale biodiversity studies can be more informative if observed diversity in a study site is accompanied by dark diversity, the set of species that are ecologically suitable but absent. Dark diversity methodology is still being developed, and a comparison of different approaches is needed. We used plant data at two different scales (Europe as a whole and seven large regions) and compared dark diversity estimates from two mathematical methods: species co-occurrence (SCO) and species distribution modeling (SDM). We used plant distribution data from the Atlas Florae Europaeae (50 × 50 km grid cells) and seven different European regions (10 × 10 km grid cells). Dark diversity was estimated by SCO and SDM for both datasets. We examined the relationship between the dark diversity sizes (type II regression) and the overlap in species composition (overlap coefficient). We tested the overlap probability according to the hypergeometric distribution. We combined the estimates of the two methods to determine consensus dark diversity and composite dark diversity. We tested whether dark diversity and completeness of site diversity (log ratio of observed and dark diversity) are related to various natural and anthropogenic factors differently than simple observed diversity. Both methods provided similar dark diversity sizes and distribution patterns; dark diversity is greater in southern Europe. The regression line, however, deviated from a 1:1 relationship. The species composition overlap of the two methods was about 75%, which is much greater than expected by chance. Both consensus and composite dark diversity estimates showed similar distribution patterns. Both dark diversity and completeness measures exhibit relationships to natural and anthropogenic factors different from those exhibited by observed richness. In summary, dark diversity revealed new biodiversity patterns which were not evident when only observed diversity was examined. A new perspective in dark diversity studies can incorporate a combination of methods.
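The overlap significance test described above is a standard hypergeometric calculation; a minimal sketch (argument names are illustrative):

from scipy.stats import hypergeom

def overlap_p_value(pool_size, n_sco, n_sdm, observed_overlap):
    """P(overlap >= observed) if two dark-diversity lists of sizes n_sco and
    n_sdm were drawn independently from a shared pool of pool_size species."""
    return hypergeom.sf(observed_overlap - 1, pool_size, n_sco, n_sdm)

# e.g., lists of 120 and 110 species from a 400-species pool sharing 90:
# overlap_p_value(400, 120, 110, 90)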
Systematic effects of foreground removal in 21-cm surveys of reionization
NASA Astrophysics Data System (ADS)
Petrovic, Nada; Oh, S. Peng
2011-05-01
21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
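The generic smooth-spectrum subtraction whose side effects the paper analyses can be sketched as a low-order polynomial fit along each sightline; the polynomial order and the log-frequency fitting variable are illustrative choices, not the paper's specific pipeline:

import numpy as np

def subtract_smooth_foreground(cube, freqs, order=2):
    """Fit and subtract a low-order polynomial in log-frequency along each
    line of sight of a (n_los, n_freq) brightness-temperature array. This
    removes the spectrally smooth foregrounds, but also deletes cosmological
    modes on scales comparable to the bandwidth, which is the source of the
    power-spectrum and PDF distortions discussed above."""
    x = np.log(freqs)
    residual = np.empty_like(cube)
    for i, spectrum in enumerate(cube):
        coeffs = np.polyfit(x, spectrum, order)
        residual[i] = spectrum - np.polyval(coeffs, x)
    return residual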
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that the inverse covariance estimator has sparse structure and then construct models with ℓ1 regularization. In this paper, unlike existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that this assumption reduces the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, with up to thousands of millions of variables, show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
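One way to write the model and the per-iteration greedy step implied by the abstract (our notation; S is the sample covariance, and the Gaussian log-likelihood is taken up to constants):

\hat{\Omega} = D + \sum_{k=1}^{K} u_k u_k^{\top}, \qquad u_k = \arg\max_{u}\ \Big[\log\det\big(\Omega_{k-1} + u u^{\top}\big) - \operatorname{tr}\big(S\,(\Omega_{k-1} + u u^{\top})\big)\Big],

where \Omega_{k-1} is the estimate after k-1 components. The matrix determinant lemma, \log\det(\Omega_{k-1} + u u^{\top}) = \log\det\Omega_{k-1} + \log(1 + u^{\top}\Omega_{k-1}^{-1} u), keeps each rank-one update cheap, consistent with the speed-ups reported.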
Summer circulation in the Mexican tropical Pacific
NASA Astrophysics Data System (ADS)
Trasviña, A.; Barton, E. D.
2008-05-01
The main components of large-scale circulation of the eastern tropical Pacific were identified in the mid 20th century, but the details of the circulation at length scales of 10^2 km or less, the mesoscale field, are less well known, particularly during summer. The winter circulation is characterized by large mesoscale eddies generated by intense cross-shore wind pulses. These eddies propagate offshore to provide an important source of mesoscale variability for the eastern tropical Pacific. The summer circulation has not commanded similar attention, the main reason being that the frequent generation of hurricanes in the area renders in situ observations difficult. Before the experiment presented here, the large-scale summer circulation of the Gulf of Tehuantepec was thought to be dominated by a poleward flow along the coast. A drifter-deployment experiment carried out in June 2000, supported by satellite altimetry and wind data, was designed to characterize this hypothesized Costa Rica Coastal Current. We present a detailed comparison between altimetry-estimated geostrophic and in situ currents estimated from drifters. Contrary to expectation, no evidence of a coherent poleward coastal flow across the gulf was found. During the 10-week period of observations, we documented a recurrent pattern of circulation within 500 km of shore, forced by a combination of local winds and the regional-scale flow. Instead of the Costa Rica Coastal Current, we found a summer eddy field capable of influencing large areas of the eastern tropical Pacific. Even in summer, the cross-isthmus wind jet is capable of inducing eddy formation.
Quantum Entanglement of Matter and Geometry in Large Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.
2014-12-04
Standard quantum mechanics and gravity are used to estimate the mass and size of idealized gravitating systems where position states of matter and geometry become indeterminate. It is proposed that well-known inconsistencies of standard quantum field theory with general relativity on macroscopic scales can be reconciled by nonstandard, nonlocal entanglement of field states with quantum states of geometry. Wave functions of particle world lines are used to estimate scales of geometrical entanglement and emergent locality. Simple models of entanglement predict coherent fluctuations in position of massive bodies, of Planck scale origin, measurable on a laboratory scale, and may account for the fact that the information density of long lived position states in Standard Model fields, which is determined by the strong interactions, is the same as that determined holographically by the cosmological constant.
A Monte-Carlo Bayesian framework for urban rainfall error modelling
NASA Astrophysics Data System (ADS)
Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian
2016-04-01
Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology, including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records, rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for, and tested at, large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors from rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models, originally developed for large scales, have been tested at urban scales [2], and these have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case the radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). The model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (those which remain after the merging), leading to improved runoff estimates; in fact, the majority of measured flow peaks are bounded within the uncertainty band produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.
The Role of Satellite Imagery to Improve Pastureland Estimates in South America
NASA Astrophysics Data System (ADS)
Graesser, J.
2015-12-01
Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural changes, we still know more about the extent of row-crop agriculture than livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to characterize natural versus grazed grasslands from a remote sensing perspective. However, the impasse of pastureland estimates is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery and focus on key agricultural countries, regions, and ecosystems. We focus on the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos—both grassland and savannah regions historically used for livestock grazing—have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantation? Tropical, grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility to improve current areal, land use, and scale estimates of livestock grazing using satellite imagery.
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
NASA Astrophysics Data System (ADS)
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling to large databases online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
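To make the retrieval-then-local-fit idea concrete, here is a bare-bones JIT prediction step; LOM's actual contributions (stepwise selection and quantization to accelerate neighbor retrieval, plus the ESP extension for long-term prediction) are not reproduced in this sketch:

import numpy as np

def jit_predict(query, X, Y, k=50):
    """Just-In-Time modeling in its simplest form: retrieve the k stored
    samples nearest to 'query' and fit a local linear model on them only.
    X: (N, d) stored inputs; Y: (N,) stored outputs; query: (d,)."""
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    A = np.column_stack([np.ones(len(nearest)), X[nearest]])
    coef, *_ = np.linalg.lstsq(A, Y[nearest], rcond=None)
    return coef[0] + query @ coef[1:]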
NASA Astrophysics Data System (ADS)
Viesca, R. C.; Garagash, D.
2013-12-01
Seismological estimates of fracture energy show a scaling with the total slip of an earthquake [e.g., Abercrombie and Rice, GJI 2005]. Potential sources for this scale dependence are coseismic fault strength reductions that continue with increasing slip, or an increasing amount of off-fault inelastic deformation with dynamic rupture propagation [e.g., Andrews, JGR 2005; Rice, JGR 2006]. Here, we investigate the former mechanism by solving for the slip dependence of fracture energy at the crack tip of a dynamically propagating rupture in which weakening takes place by strong reductions of friction via flash heating of asperity contacts and by thermal pressurization (TP) of pore fluid leading to reductions in effective normal stress. Laboratory measurements of small characteristic slip evolution distances for friction (~10 μm at low slip rates of μm-mm/s, possibly up to 1 mm for slip rates near 0.1 m/s) [e.g., Marone and Kilgore, Nature 1993; Kohli et al., JGR 2011] imply that flash weakening of friction occurs at small slips before any significant thermal pressurization, and may thus have a negligible contribution to the total fracture energy [Brantut and Rice, GRL 2011; Garagash, AGU 2011]. The subsequent manner of weakening under thermal pressurization (the dominant contributor to fracture energy) spans a range of behavior, from the deformation of a finite-thickness shear zone in which diffusion is negligible (i.e., undrained-adiabatic) to that in which large-scale diffusion obscures the existence of a thin shear zone and thermal pressurization effectively occurs by the heating of slip on a plane. Separating the contribution of flash heating, the dynamic rupture solutions reduce to a problem with a single parameter: the ratio of the undrained-adiabatic slip-weakening distance (δc) to the characteristic slip-on-a-plane slip-weakening distance (L*). For any value of this parameter, there are two end-member scalings of the fracture energy: for small slip, the undrained-adiabatic behavior expectedly results in fracture energy scaling as G ~ δ^2, and for large slip (where TP approaches slip on a plane) we find that G ~ δ^(2/3). This last result is a slight correction to estimates made assuming a constant, kinematically imposed slip rate and slip-on-a-plane TP, which gave G ~ δ^(1/2) [Rice, JGR 2006]. We compile fracture energy estimates of both continental and subduction zone earthquakes. In doing so, we incorporate independent estimates of fault prestress to distinguish fracture energy G from the parameter G' defined by Abercrombie and Rice [2005], which represents the energetic quantity most directly inferred from seismological estimates of radiated energy, seismic moment and source radius. We find that the dynamic rupture solutions (which account for the variable manner of thermal pressurization and result in a self-consistent slip rate history) allow for a close match to the estimated fracture energy over several orders of magnitude of total event slip, further supporting the proposed explanation that fracture energy scaling may largely be attributed to a fault strength that weakens gradually with slip, and, additionally, the potential prevalence of thermal pressurization.
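The two end-member scalings reported above, written out (δ is total slip, with δc and L* as defined in the abstract):

G(\delta) \sim \begin{cases} \delta^{2}, & \delta \ll \delta_c \quad \text{(undrained-adiabatic shear zone)},\\ \delta^{2/3}, & \delta \gg L^{*} \quad \text{(thermal pressurization as slip on a plane)}, \end{cases}

with the crossover between the two regimes controlled by the single parameter δc/L*.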
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and over periods spanning both intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over the 14-year period 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data on current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-second) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and are 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
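The deterministic kernel of the hierarchical model is the stream power erosion law; in its usual form (how exactly TRMM precipitation enters the discharge proxy is an assumption on our part, not stated in the abstract):

E = K\,A^{m}\,S^{n},

where E is the erosion rate, A the upstream drainage area (or a precipitation-weighted discharge proxy), S the local channel slope, and K an erodibility coefficient. The Bayesian hierarchy places priors on K, m and n at the elevation-class level and propagates their posterior uncertainty into the erosion-rate estimates.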
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
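As a concrete example of the kind of flow-law likelihood involved, SWOT discharge studies commonly use a Manning-type relation for wide channels; the decomposition below illustrates why priors on unobserved quantities are indispensable (this is a standard choice in the literature, not necessarily the exact flow law used here):

Q = \frac{1}{n}\,A^{5/3}\,W^{-2/3}\,S^{1/2}, \qquad A = A_0 + \delta A,

where width W, slope S and the temporal area change δA are SWOT-observable at the reach scale, while Manning's n and the baseline cross-sectional area A_0 are not, and must be constrained by priors such as those built here from the ADCP archive.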
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of the spatial variation. The accuracy of the best model including spatial effects was 90 Mg ha-1 at the plot scale, but coarse-graining up to 2-km resolution allowed mapping AGB with errors below 50 Mg ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms--one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent.
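A generic update of the consensus-plus-innovations type, in our notation (the weight sequences α_t and β_t decay at different rates, which produces the mixed time scales described above):

x_i(t+1) = x_i(t) - \beta_t \sum_{j \in \mathcal{N}_i(t)} \big(x_i(t) - x_j(t)\big) + \alpha_t\, H_i^{\top}\big(y_i(t) - H_i\,x_i(t)\big),

where \mathcal{N}_i(t) is sensor i's (random) gossip neighborhood at time t and H_i its local observation matrix. The consensus term spreads information across the network, while the innovations term injects each sensor's new measurement.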
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of the coefficient matrices of systems of linear equations, using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies were carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tri-diagonal matrix and Pei's matrix. As a result, the Lanczos connection method contained around 10% error in its results even for a simple problem, while the new method contained negligible errors. In addition, the newly developed method returned reasonable solutions when the Lanczos connection method failed with Pei's matrix and with matrices generated with the finite element method.
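A compact sketch of the Hager-style 1-norm estimator that the new method builds on: it needs only products with the (preconditioned) matrix and its transpose, never the matrix itself, which is what makes it viable on distributed-memory machines. Applying the same estimator with solves in place of products yields an estimate of ||A^{-1}||_1, and the product of the two estimates the condition number; the paper's robustness and parallelization details are omitted here:

import numpy as np

def hager_norm1(apply_A, apply_At, n, maxit=5):
    """Estimate ||A||_1 using only matrix-vector products (Hager's method).
    apply_A / apply_At: callables returning A @ v and A.T @ v for vectors
    of length n; the matrix A is never formed explicitly."""
    x = np.full(n, 1.0 / n)          # start in the interior of the 1-norm ball
    estimate = 0.0
    for _ in range(maxit):
        y = apply_A(x)
        estimate = np.abs(y).sum()   # ||A x||_1 with ||x||_1 = 1
        z = apply_At(np.sign(y))     # subgradient of x -> ||A x||_1
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:    # no coordinate direction improves it
            break
        x = np.zeros(n)              # jump to the most promising unit vector
        x[j] = 1.0
    return estimate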
An empirical, integrated forest biomass monitoring system
NASA Astrophysics Data System (ADS)
Kennedy, Robert E.; Ohmann, Janet; Gregory, Matt; Roberts, Heather; Yang, Zhiqiang; Bell, David M.; Kane, Van; Hughes, M. Joseph; Cohen, Warren B.; Powell, Scott; Neeti, Neeti; Larrue, Tara; Hooper, Sam; Kane, Jonathan; Miller, David L.; Perkins, James; Braaten, Justin; Seidl, Rupert
2018-02-01
The fate of live forest biomass is largely controlled by growth and disturbance processes, both natural and anthropogenic. Thus, biomass monitoring strategies must characterize both the biomass of the forests at a given point in time and the dynamic processes that change it. Here, we describe and test an empirical monitoring system designed to meet those needs. Our system uses a mix of field data, statistical modeling, remotely-sensed time-series imagery, and small-footprint lidar data to build and evaluate maps of forest biomass. It ascribes biomass change to specific change agents, and attempts to capture the impact of uncertainty in methodology. We find that: • A common image framework for biomass estimation and for change detection allows for consistent comparison of both state and change processes controlling biomass dynamics. • Regional estimates of total biomass agree well with those from plot data alone. • The system tracks biomass densities up to 450-500 Mg ha-1 with little bias, but begins underestimating true biomass as densities increase further. • Scale considerations are important. Estimates at the 30 m grain size are noisy, but agreement at broad scales is good. Further investigation to determine the appropriate scales is underway. • Uncertainty from methodological choices is evident, but much smaller than uncertainty based on choice of allometric equation used to estimate biomass from tree data. • In this forest-dominated study area, growth and loss processes largely balance in most years, with loss processes dominated by human removal through harvest. In years with substantial fire activity, however, overall biomass loss greatly outpaces growth. Taken together, our methods represent a unique combination of elements foundational to an operational landscape-scale forest biomass monitoring program.
Theory based scaling of edge turbulence and implications for the scrape-off layer width
NASA Astrophysics Data System (ADS)
Myra, J. R.; Russell, D. A.; Zweben, S. J.
2016-11-01
Turbulence and plasma parameter data from the National Spherical Torus Experiment (NSTX) [Ono et al., Nucl. Fusion 40, 557 (2000)] is examined and interpreted based on various theoretical estimates. In particular, quantities of interest for assessing the role of turbulent transport on the midplane scrape-off layer heat flux width are assessed. Because most turbulence quantities exhibit large scatter and little scaling within a given operation mode, this paper focuses on length and time scales and dimensionless parameters between operational modes including Ohmic, low (L), and high (H) modes using a large NSTX edge turbulence database [Zweben et al., Nucl. Fusion 55, 093035 (2015)]. These are compared with theoretical estimates for drift and interchange rates, profile modification saturation levels, a resistive ballooning condition, and dimensionless parameters characterizing L and H mode conditions. It is argued that the underlying instability physics governing edge turbulence in different operational modes is, in fact, similar, and is consistent with curvature-driven drift ballooning. Saturation physics, however, is dependent on the operational mode. Five dimensionless parameters for drift-interchange turbulence are obtained and employed to assess the importance of turbulence in setting the scrape-off layer heat flux width λq and its scaling. An explicit proportionality of the width λq to the safety factor and major radius (qR) is obtained under these conditions. Quantitative estimates and reduced model numerical simulations suggest that the turbulence mechanism is not negligible in determining λq in NSTX, at least for high plasma current discharges.
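One heuristic reading of where the qR proportionality can come from (consistent with the abstract's statement, though not its actual derivation): turbulent filaments spread heat radially at an effective velocity v_r for as long as heat survives in the scrape-off layer, a parallel loss time set by the connection length at roughly the sound speed:

\lambda_q \sim v_r\,\tau_{\parallel} \sim v_r\,\frac{\pi q R}{c_s} \propto qR.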
Sandman, Antonia Nyström; Näslund, Johan; Gren, Ing-Marie; Norling, Karl
2018-05-05
Macrofaunal activities in sediments modify nutrient fluxes in different ways, including the expression of species-specific functional traits and density-dependent population processes. The invasive polychaete genus Marenzelleria was first observed in the Baltic Sea in the 1980s. It has caused changes in benthic processes and affected the functioning of ecosystem services such as nutrient regulation. The large-scale effects of these changes are not known. We estimated the current Marenzelleria spp. wet weight biomass in the Baltic Sea to be 60-87 kton (95% confidence interval). We assessed the potential impact of Marenzelleria spp. on phosphorus cycling using a spatially explicit model, comparing estimates of expected sediment-to-water phosphorus fluxes from a biophysical model to ecologically relevant experimental measurements of benthic phosphorus flux. The estimated yearly net increases (95% CI) in phosphorus flux due to Marenzelleria spp. were 4.2-6.1 kton based on the biophysical model and 6.3-9.1 kton based on experimental data. The current biomass densities of Marenzelleria spp. in the Baltic Sea enhance the phosphorus fluxes from sediment to water on a sea-basin scale. Although high densities of Marenzelleria spp. can increase phosphorus retention locally, such biomass densities are uncommon. Thus, the major effect of Marenzelleria seems to be a large-scale net decrease in the self-cleaning capacity of the Baltic Sea that counteracts human efforts to mitigate eutrophication in the region.
Bioregional monitoring design and occupancy estimation for two Sierra Nevadan amphibian taxa
Land-management agencies need quantitative, statistically rigorous monitoring data, often at large spatial and temporal scales, to support resource-management decisions. Monitoring designs typically must accommodate multiple ecological, logistical, political, and economic objec...
Eavesdropping on the Arctic: Automated bioacoustics reveal dynamics in songbird breeding phenology.
Oliver, Ruth Y; Ellis, Daniel P W; Chmura, Helen E; Krause, Jesse S; Pérez, Jonathan H; Sweet, Shannan K; Gough, Laura; Wingfield, John C; Boelman, Natalie T
2018-06-01
Bioacoustic networks could vastly expand the coverage of wildlife monitoring to complement satellite observations of climate and vegetation. This approach would enable global-scale understanding of how climate change influences phenomena such as migratory timing of avian species. The enormous data sets that autonomous recorders typically generate demand automated analyses that remain largely undeveloped. We devised automated signal processing and machine learning approaches to estimate dates on which songbird communities arrived at arctic breeding grounds. Acoustically estimated dates agreed well with those determined via traditional surveys and were strongly related to the landscape's snow-free dates. We found that environmental conditions heavily influenced daily variation in songbird vocal activity, especially before egg laying. Our novel approaches demonstrate that variation in avian migratory arrival can be detected autonomously. Large-scale deployment of this innovation in wildlife monitoring would enable the coverage necessary to assess and forecast changes in bird migration in the face of climate change.
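A minimal sketch of one plausible final step in such a pipeline (not the authors' signal-processing and machine-learning system): given a daily index of songbird vocal activity, e.g., classifier detections per recording-hour, estimate the community arrival date as the first day a smoothed index stays above a threshold. All names and thresholds are hypothetical.

import numpy as np

def arrival_day(daily_activity, window=5, frac_of_max=0.2):
    x = np.asarray(daily_activity, float)
    smooth = np.convolve(x, np.ones(window) / window, mode="same")  # moving average
    above = smooth >= frac_of_max * smooth.max()
    for day in range(len(above) - window):
        if above[day:day + window].all():        # require sustained activity
            return day
    return None

activity = [0, 0, 1, 0, 2, 1, 8, 14, 20, 25, 22, 30]   # toy daily detection counts
print(arrival_day(activity))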
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square-root-transformed monthly or annual means, for which a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale, as the observations involve large fractions of zeros and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution at a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
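A minimal sketch of the key likelihood ingredient, assuming a normal distribution left-censored at zero for (square-root-transformed) daily precipitation: zeros contribute the censoring probability, positive amounts the normal density. This is a one-site toy fit, not the paper's full spatio-temporal model.

import numpy as np
from scipy import stats, optimize

def negloglik(params, y):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    zero = y <= 0
    ll = stats.norm.logcdf(0.0, mu, sigma) * zero.sum()   # censored (dry) days
    ll += stats.norm.logpdf(y[~zero], mu, sigma).sum()    # observed wet days
    return -ll

rng = np.random.default_rng(1)
latent = rng.normal(0.5, 1.2, 5000)      # latent sqrt-precipitation
y = np.clip(latent, 0.0, None)           # observed: left-censored at zero
fit = optimize.minimize(negloglik, x0=[0.0, 0.0], args=(y,))
print(fit.x[0], np.exp(fit.x[1]))        # recovers approximately (0.5, 1.2)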
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
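A minimal sketch of the two ideas named in the abstract, not the authors' implementation: (1) tree-based indexing gives fast nearest-neighbor feature lookups (here a k-d tree; high-dimensional descriptors in practice need an index better suited to that regime); (2) the distance to the k-th neighbor yields a simple KNN density estimate. The feature vectors are random stand-ins for 3D scale-invariant descriptors.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

rng = np.random.default_rng(0)
features = rng.normal(size=(20_000, 64))   # stand-ins for image descriptors
tree = cKDTree(features)                   # O(N log N) build

query = rng.normal(size=(1, 64))
dist, idx = tree.query(query, k=5)         # five nearest feature matches

# KNN density estimate at the query: p(x) ~ k / (N * volume of d-ball of radius r_k)
k, (N, d) = 5, features.shape
ball_vol = np.pi ** (d / 2) / gamma(d / 2 + 1) * dist[0, -1] ** d
print(k / (N * ball_vol))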
NASA Astrophysics Data System (ADS)
Nakamura, Yuki; Ashi, Juichiro; Morita, Sumito
2016-04-01
Clarifying the timing and scale of past submarine landslides is important for understanding their formation processes. The study area is part of the continental slope of the Japan Trench, where a number of large-scale submarine landslide (slump) deposits have been identified in Pliocene and Quaternary formations by analysing METI's 3D seismic data "Sanrikuoki 3D" off Shimokita Peninsula (Morita et al., 2011). Structurally, the slump deposits contain swarms of parallel dikes, which are likely dewatering paths formed during slumping deformation, and slip directions are essentially perpendicular to these dikes. Parallel dikes are therefore a good indicator for estimating slip directions. The slip direction of each slide was determined on a one-kilometre grid in the 40 km × 20 km survey area. The dominant slip direction varies from the Pliocene to the Quaternary in the survey area. Parallel dike structure also allows slump deposits to be distinguished from normal deposits on time-slice images. By tracing the outline of the slump deposits at each depth, we identified the general morphology of the overall slump deposits and calculated the volume of the extracted slump deposits so as to estimate the scale of each event. We investigated temporal and spatial variations in the depositional pattern of the slump deposits. Calculated recurrence intervals of the slumps suggest some periodicity; in particular, large slumps do not occur in succession. Additionally, the cumulative volume and the recurrence interval show a correlation in both the Pliocene and the Quaternary. Key words: submarine landslides, 3D seismic data, Shimokita Peninsula
Reducing HIV infection among new injecting drug users in the China-Vietnam Cross Border Project.
Des Jarlais, Don C; Kling, Ryan; Hammett, Theodore M; Ngu, Doan; Liu, Wei; Chen, Yi; Binh, Kieu Thanh; Friedmann, Patricia
2007-12-01
To assess an HIV prevention programme for injecting drug users (IDU) in the cross-border area between China and Vietnam. Serial cross-sectional surveys (0, 6, 12, 18, 24 and 36 months) of community-recruited current IDU. The project included peer educator outreach and the large-scale distribution of sterile injection equipment. Serial cross-sectional surveys with HIV testing of community-recruited IDU were conducted at baseline (before implementation) and 6, 12, 18, 24 and 36 months post-baseline. HIV prevalence and estimated HIV incidence among new injectors (individuals injecting drugs for < 3 years) in each survey wave were the primary outcome measures. The percentage of new injectors among all subjects declined across successive survey waves in both Ning Ming and Lang Son. HIV prevalence and estimated incidence fell by approximately half at the 24-month survey and by approximately three quarters at the 36-month survey in both areas (all P < 0.01). The implementation of large-scale outreach and syringe access programmes was followed by substantial reductions in HIV infection among new injectors, with no evidence of any increase in individuals beginning to inject drugs. This project may serve as a model for large-scale HIV prevention programming for IDU in China, Vietnam, and other developing/transitional countries.
Fire extinguishing tests -80 with methyl alcohol gasoline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmstedt, G.; Ryderman, A.; Carlsson, B.
1980-01-01
Large scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film-forming foams (AFFF), two alcohol-resistant fluoroprotein foams, and two detergent foams on various pool fires: gasoline, isopropyl alcohol, acetone, methyl ethyl ketone, methyl alcohol, and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests to develop a reliable laboratory method was examined in particular. The tests were performed with semidirect foam application in pools of 50, 11, 4, 0.6, and 0.25 m². Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but fires in polar solvents such as methyl alcohol, acetone, and isopropyl alcohol cannot be extinguished with detergent foams. The AFFF gave the best results, and performance in small pools could hardly be correlated with results from large scale fires.
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines its speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
Microwave evidence for large-scale changes associated with a filament eruption
NASA Technical Reports Server (NTRS)
Kundu, M. R.; Schmahl, E. J.; Fu, Q.-J.
1989-01-01
VLA observations at 6 and 20 cm wavelengths taken on August 3, 1985 are presented, showing an eruptive filament event in which microwave emission originated in two widely separated regions during the disintegration of the filament. The amount of heat required for the enhancement is estimated. Near-simultaneous changes in intensity and polarization were observed in the western components of the northern and southern regions. It is suggested that large-scale magnetic interconnections permitted the two regions to respond similarly to an external energy or mass source involved in the disruption of the filament.
White fir stands killed by tussock moth...70-mm. color photography aids detection
Steven L. Wert; Boyd E. Wickman
1968-01-01
The use of large-scale 70 mm. aerial photography proved to be an effective technique for detecting trees in white fir stands killed by Douglas-fir tussock moth in northeastern California. Correlations between ground and photo estimates of dead trees were high. But correlations between such estimates of lesser degrees of tree damage--thin tops and topkill--were much...
Matthew B. Russell; Christopher W. Woodall; Shawn Fraver; Anthony W. D' Amato
2013-01-01
Large-scale inventories of downed woody debris (DWD; downed dead wood of a minimum size) often record decay status by assigning pieces to classes of decay according to their visual/structural attributes (e.g., presence of branches, log shape, and texture and color of wood). DWD decay classes are not only essential for estimating current DWD biomass and carbon stocks,...
Probing dark energy with lensing magnification in photometric surveys.
Schneider, Michael D
2014-02-14
I present an estimator for the angular cross correlation of two tracers of the cosmological large-scale structure that utilizes redshift information to isolate separate physical contributions. The estimator is derived by solving the Limber equation for a reweighting of the foreground tracer that nulls either clustering or lensing contributions to the cross correlation function. Applied to future photometric surveys, the estimator can enhance the measurement of gravitational lensing magnification effects to provide a competitive independent constraint on the dark energy equation of state.
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and of coseismic slip/afterslip is expected to substantially improve the consistency of the estimation results with crustal-deformation observations collected at widely distributed observation points, compared with estimating slips alone. Such an estimate can be formulated as a non-linear inverse problem for the material property of viscosity and for an input force equivalent to fault slip, based on large-scale finite-element (FE) modeling of crustal deformation in which the number of degrees of freedom is of the order of 10⁹. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
NASA Astrophysics Data System (ADS)
Welle, Paul D.; Mauter, Meagan S.
2017-09-01
This work introduces a generalizable approach for estimating field-scale agricultural yield losses due to soil salinization. When integrated with regional data on crop yields and prices, this model provides high-resolution estimates of revenue losses over large agricultural regions. These methods account for the uncertainty inherent in model inputs derived from satellites, experimental field data, and interpreted model results. We apply this method to estimate the effect of soil salinity on agricultural outputs in California, performing the analysis with both high-resolution (i.e. field-scale) and low-resolution (i.e. county-scale) data sources to highlight the importance of spatial resolution in agricultural analysis. We estimate that soil salinity reduced agricultural revenues by US$3.7 billion (1.7-7.0 billion) in 2014, amounting to 8.0 million tons of lost production relative to soil salinities below the crop-specific thresholds. When using low-resolution data sources, we find that the costs of salinization are underestimated by a factor of three. These results highlight the need for high-resolution data in agro-environmental assessment as well as the challenges associated with their integration.
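A minimal sketch in the spirit of the approach, not the paper's calibrated model: a Maas-Hoffman-type piecewise-linear salinity response per crop, combined with field areas and prices to aggregate revenue loss. All thresholds, slopes, areas, and prices below are hypothetical.

def relative_yield(ec_e, threshold_dS_m, slope_pct_per_dS_m):
    """Fractional yield given soil salinity ECe (dS/m): flat below the
    crop threshold, then declining linearly (a common piecewise form)."""
    loss_pct = max(0.0, slope_pct_per_dS_m * (ec_e - threshold_dS_m))
    return max(0.0, 1.0 - loss_pct / 100.0)

fields = [  # (area_ha, ECe dS/m, full yield t/ha, price $/t, threshold, slope)
    (120.0, 4.5, 50.0, 300.0, 1.5, 9.6),   # hypothetical salt-sensitive crop
    (80.0,  2.0,  8.0, 180.0, 6.0, 7.1),   # hypothetical salt-tolerant crop
]
loss = sum(a * y * p * (1.0 - relative_yield(ec, t, s))
           for a, ec, y, p, t, s in fields)
print(f"estimated revenue loss: ${loss:,.0f}")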
The utility of estimating net primary productivity over Alaska using baseline AVHRR data
Markon, C.J.; Peterson, Kim M.
2002-01-01
Net primary productivity (NPP) is a fundamental ecological variable that provides information about the health and status of vegetation communities. The Normalized Difference Vegetation Index, or NDVI, derived from the Advanced Very High Resolution Radiometer (AVHRR) is increasingly being used to model or predict NPP, especially over large remote areas. In this article, seven seasonally based metrics calculated from a seven-year baseline NDVI dataset were used to model NPP over Alaska, USA. For each growing season, they included maximum, mean and summed NDVI, total days, the product of total days and maximum NDVI, an integral estimate of NDVI, and a summed product of NDVI and solar radiation. Field (plot) derived NPP estimates were assigned to 18 land cover classes from an Alaskan statewide land cover database. Linear relationships between NPP and each NDVI metric were analysed at four scales: plot, 1-km, 10-km and 20-km pixels. Results show moderate to poor relationships between the metrics and NPP estimates for all data sets and scales. Use of NDVI for estimating NPP may be possible, but caution is required due to data seasonality, the scaling process used, and land surface heterogeneity.
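A minimal sketch of this style of analysis on toy data (not the study's metrics or dataset): derive a few seasonal metrics from a biweekly NDVI trace, then regress plot NPP on one metric and report the fit.

import numpy as np

def season_metrics(ndvi, threshold=0.1):
    ndvi = np.asarray(ndvi, float)
    growing = ndvi > threshold                 # crude growing-season mask
    return {
        "max": ndvi.max(),
        "mean": ndvi[growing].mean(),
        "sum": ndvi[growing].sum(),
        "total_days": int(growing.sum()) * 14, # biweekly composites assumed
    }

ndvi_trace = np.clip(np.sin(np.linspace(0, np.pi, 26)), 0, None) * 0.8
print(season_metrics(ndvi_trace))

rng = np.random.default_rng(2)
npp = rng.uniform(50, 400, 30)                      # toy plot NPP, g C m-2 yr-1
summed_ndvi = 0.02 * npp + rng.normal(0, 1.0, 30)   # toy linear relation
slope, intercept = np.polyfit(summed_ndvi, npp, 1)
r = np.corrcoef(summed_ndvi, npp)[0, 1]
print(slope, intercept, r**2)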
NASA Astrophysics Data System (ADS)
Bewley, Thomas
2015-11-01
Accurate long-term forecasts of the path and intensity of hurricanes are imperative to protect property and save lives. Accurate estimations and forecasts of the spread of large-scale contaminant plumes, such as those from Deepwater Horizon, Fukushima, and recent volcanic eruptions in Iceland, are essential for assessing environment impact, coordinating remediation efforts, and in certain cases moving folks out of harm's way. The challenges in estimating and forecasting such systems include: (a) environmental flow modeling, (b) high-performance real-time computing, (c) assimilating measured data into numerical simulations, and (d) acquiring in-situ data, beyond what can be measured from satellites, that is maximally relevant for reducing forecast uncertainty. This talk will focus on new techniques for addressing (c) and (d), namely, data assimilation and adaptive observation, in both hurricanes and large-scale environmental plumes. In particular, we will present a new technique for the energy-efficient coordination of swarms of sensor-laden balloons for persistent, in-situ, distributed, real-time measurement of developing hurricanes, leveraging buoyancy control only (coupled with the predictable and strongly stratified flowfield within the hurricane). Animations of these results are available at http://flowcontrol.ucsd.edu/3dhurricane.mp4 and http://flowcontrol.ucsd.edu/katrina.mp4. We also will survey our unique hybridization of the venerable Ensemble Kalman and Variational approaches to large-scale data assimilation in environmental flow systems, and how essentially the dual of this hybrid approach may be used to solve the adaptive observation problem in a uniquely effective and rigorous fashion.
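For reference, a minimal perturbed-observation ensemble Kalman analysis step, the building block that such hybrid assimilation schemes start from; this is a textbook sketch, not the speaker's hybrid ensemble/variational method, and all dimensions are toy.

import numpy as np

def enkf_update(X, y, H, R, rng):
    """X: (n, m) state ensemble; y: (p,) observations; H: (p, n); R: (p, p)."""
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P_HT = A @ (H @ A).T / (m - 1)               # sample P H^T, shape (n, p)
    S = H @ P_HT + R                             # innovation covariance
    K = P_HT @ np.linalg.inv(S)                  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)                   # perturbed-observation update

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(10, 50))          # 10 state variables, 50 members
H = np.eye(2, 10)                                # observe the first two states
R = 0.1 * np.eye(2)
Xa = enkf_update(X, np.array([1.0, -0.5]), H, R, rng)
print(Xa.mean(axis=1)[:2])                       # analysis mean pulled toward y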
A Bayesian Estimate of the CMB-Large-scale Structure Cross-correlation
NASA Astrophysics Data System (ADS)
Moura-Santos, E.; Carvalho, F. C.; Penna-Lima, M.; Novaes, C. P.; Wuensche, C. A.
2016-08-01
Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs-Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB-LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature map and a galaxy contrast map, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25° × 0.25° and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at present. The data set covers the latitude band 50°N-50°S for the period 1998 to the delayed present. Early validation results are as follows: the TMPA provides reasonable performance at monthly scales, although it is shown to have a precipitation-rate-dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
Shandas, Vivek; Voelkel, Jackson; Rao, Meenakshi; George, Linda
2016-01-01
Reducing exposure to degraded air quality is essential for building healthy cities. Although air quality and population vary at fine spatial scales, current regulatory and public health frameworks assess human exposures at county or city scales. We build on a spatial analysis technique, dasymetric mapping, for allocating urban populations that, together with emerging fine-scale measurements of air pollution, addresses three objectives: (1) evaluate the role of spatial scale in estimating exposure; (2) identify urban communities that are disproportionately burdened by poor air quality; and (3) estimate the reduction in mobile sources of pollutants due to local tree-planting efforts using nitrogen dioxide. Our results show a maximum difference of 197% between a cadastrally-informed dasymetric system (CIDS) and standard estimations of population exposure to degraded air quality for small spatial extent analyses, and a lack of substantial difference for large spatial extent analyses. These results provide the foundation for improving policies for managing air quality, and for targeting mitigation efforts to address challenges of environmental justice. PMID:27527205
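A minimal sketch of dasymetric allocation, the core technique named above (weights and numbers are hypothetical): a coarse unit's population is redistributed to its parcels in proportion to a cadastral weight such as residential floor area, rather than spread evenly over the unit's area.

import numpy as np

def dasymetric_allocate(unit_population, parcel_weights):
    w = np.asarray(parcel_weights, float)
    if w.sum() == 0:
        return np.zeros_like(w)
    return unit_population * w / w.sum()    # population proportional to weight

floor_area = [1200.0, 0.0, 300.0, 4500.0]   # m^2 residential floor area per parcel (toy)
print(dasymetric_allocate(10_000, floor_area))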
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
Large-scale structure from cosmic-string loops in a baryon-dominated universe
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1988-01-01
The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Ω₀ = 0.2 and h = 0.5 dominated by baryons, in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but it does rather poorly on smaller scales. There is no visual 'filamentary' structure, and the two-point correlation function has too steep a logarithmic slope. The value Gμ = 4 × 10⁻⁶ is significantly lower than previous estimates of Gμ in baryon-dominated cosmic-string models.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems, and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Deployment dynamics and control of large-scale flexible solar array system with deployable mast
NASA Astrophysics Data System (ADS)
Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping
2016-10-01
In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast, and the solar arrays with several mechanisms. Then the dynamic equation of the solar array system is established by the Jourdain velocity variation principle, and a method for dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model is effective for describing the deployment dynamics of the large-scale flexible solar arrays and the proposed controller is practical for eliminating the drift of the spacecraft main body.
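A minimal single-axis sketch in the spirit of a PD controller with disturbance estimation (not the paper's controller or parameters): a first-order observer attributes the difference between the commanded torque and the realized acceleration to an unknown disturbance, which the control law then cancels. All gains and inertias are hypothetical.

import numpy as np

J, dt = 500.0, 0.01                     # main-body inertia (kg m^2), time step (s)
kp, kd, beta = 50.0, 150.0, 5.0         # PD gains and observer bandwidth (hypothetical)
theta, omega, d_hat = 0.05, 0.0, 0.0    # attitude error (rad), rate, disturbance estimate

for k in range(2000):
    u = -kp * theta - kd * omega - d_hat          # PD control plus disturbance cancelation
    d_true = 2.0 * np.sin(0.5 * k * dt)           # unknown torque from the deploying array
    domega = (u + d_true) / J
    d_hat += beta * dt * (J * domega - u - d_hat) # low-pass estimate of the disturbance
    omega += domega * dt
    theta += omega * dt

print(f"final attitude error: {theta:.2e} rad, estimated disturbance: {d_hat:.2f} N m")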
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.
1975-01-01
The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.
Constant Stress Drop Fits Earthquake Surface Slip-Length Data
NASA Astrophysics Data System (ADS)
Shaw, B. E.
2011-12-01
Slip at the surface of the Earth provides a direct window into the earthquake source. A longstanding controversy surrounds the scaling of average surface slip with rupture length, which shows the puzzling feature of continuing to increase with rupture length for lengths many times the seismogenic width. Here we show that a more careful treatment of how ruptures transition from small circular ruptures to large rectangular ruptures, combined with an assumption of constant stress drop, provides a new scaling law for slip versus length which (1) does an excellent job of fitting the data, (2) gives an explanation for the large crossover length scale at which slip begins to saturate, and (3) supports constant stress drop scaling that matches that seen for small earthquakes. We additionally discuss how the new scaling can be usefully applied to seismic hazard estimates.
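Worked arithmetic for the small-rupture limit only (the standard circular-crack result, not the paper's new scaling law): under constant stress drop, average slip grows linearly with the controlling dimension, here the rupture radius; once the rupture saturates the seismogenic width, a larger length takes over as the controlling scale, per the abstract.

import numpy as np

mu = 3.0e10            # shear modulus, Pa
stress_drop = 3.0e6    # 3 MPa, a commonly cited typical value

def slip_circular(radius_m):
    # Circular crack: mean slip = (16 / (7 pi)) * (stress_drop / mu) * radius
    return (16.0 / 7.0) * stress_drop / (np.pi * mu) * radius_m

for a in (1e3, 5e3, 2e4):   # 1, 5, 20 km radii
    print(f"a = {a/1e3:5.0f} km -> slip ~ {slip_circular(a):.2f} m")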
Large scale systems : a study of computer organizations for air traffic control applications.
DOT National Transportation Integrated Search
1971-06-01
Based on current sizing estimates and tracking algorithms, some computer organizations applicable to future air traffic control computing systems are described and assessed. Hardware and software problem areas are defined and solutions are outlined.
Fan, Jessie X; Hanson, Heidi A; Zick, Cathleen D; Brown, Barbara B; Kowaleski-Jones, Lori; Smith, Ken R
2014-08-19
Empirical studies of the association between neighbourhood food environments and individual obesity risk have found mixed results. One possible cause of these mixed findings is the variation in the neighbourhood geographic scale used. The purpose of this paper was to examine how various neighbourhood geographic scales affect the estimated relationship between food environments and obesity risk. Design: cross-sectional secondary data analysis. Setting: Salt Lake County, Utah, USA. Participants: 403,305 Salt Lake County adults aged 25-64 in the Utah driver license database between 1995 and 2008. Utah driver license data were geo-linked to 2000 US Census data and Dun & Bradstreet business data. Food outlets were classified into the categories of large grocery stores, convenience stores, limited-service restaurants and full-service restaurants, and measured at four neighbourhood geographic scales: Census block group, Census tract, ZIP code and a 1 km buffer around the resident's house. These measures were regressed on individual obesity status using multilevel random-intercept regressions. Primary outcome measure: obesity. Results: the food environment was important for obesity, but the scale of the relevant neighbourhood differs by outlet type: large grocery stores were not significant at any of the four geographic scales, limited-service restaurants were significant at medium-to-large scales (Census tract or larger), and convenience stores and full-service restaurants at the smallest scales (Census tract or smaller). The choice of neighbourhood geographic scale can affect the estimated significance of the association between neighbourhood food environments and individual obesity risk. However, variations in geographic scale alone do not explain the mixed findings in the literature. If researchers are constrained to use one geographic scale with multiple categories of food outlets, using the Census tract or a 1 km buffer as the neighbourhood geographic unit is likely to allow researchers to detect most significant relationships.
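A minimal toy sketch of the paper's design question, not its multilevel model: measure the same food environment at two neighbourhood scales (buffer radii) around each residence and compare how strongly each version associates with an outcome that, by construction here, depends on the small-scale environment.

import numpy as np

rng = np.random.default_rng(10)
homes = rng.uniform(0, 20, (500, 2))        # residence coordinates, km
outlets = rng.uniform(0, 20, (300, 2))      # e.g., convenience stores (toy)

def counts_within(radius_km):
    d = np.linalg.norm(homes[:, None, :] - outlets[None, :, :], axis=2)
    return (d <= radius_km).sum(axis=1)

small = counts_within(1.0)                  # 1 km buffer
large = counts_within(5.0)                  # coarse, tract-like scale
obese = (small + rng.normal(0, 1.5, 500)) > np.median(small)  # toy outcome

for name, c in [("1 km buffer", small), ("5 km buffer", large)]:
    print(name, np.corrcoef(c, obese.astype(float))[0, 1])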
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BCw; 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were the BCw base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
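A minimal sketch of this style of uncertainty analysis on a simplified mass balance, CL = BCw + BCdep - Bcu - ANCle,crit, with hypothetical distributions (the paper varied 17 parameters of the full SMBE); relative contributions are read off from squared correlations between inputs and output.

import numpy as np

rng = np.random.default_rng(4)
n = 50_000
bc_w   = rng.lognormal(np.log(1.0), 0.5, n)  # base cation weathering, keq/ha/yr (toy)
bc_dep = rng.normal(0.4, 0.05, n)            # base cation deposition (toy)
bc_u   = rng.normal(0.3, 0.05, n)            # base cation uptake (toy)
anc_le = rng.normal(-0.2, 0.1, n)            # critical ANC leaching (toy)
cl = bc_w + bc_dep - bc_u - anc_le           # simplified critical load

for name, x in [("BCw", bc_w), ("BCdep", bc_dep), ("Bcu", bc_u), ("ANCle", anc_le)]:
    r = np.corrcoef(x, cl)[0, 1]
    print(f"{name:6s} r^2 = {r**2:.2f}")
print("CL mean and CV%:", cl.mean(), 100 * cl.std() / cl.mean())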
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole's position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
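A minimal sketch of the new estimator's idea, not the production code: locate each defect in successive lattice snapshots, match every monopole to its nearest successor (respecting periodic boundaries), and turn the per-object displacements into individual velocities.

import numpy as np

def track_velocities(pos_t0, pos_t1, box, dt):
    """pos_*: (N, 3) arrays of monopole positions; box: periodic lattice size."""
    v = []
    for p in pos_t0:
        d = pos_t1 - p
        d -= box * np.round(d / box)              # minimum-image convention
        j = np.argmin(np.linalg.norm(d, axis=1))  # nearest successor = same object
        v.append(d[j] / dt)
    return np.array(v)

rng = np.random.default_rng(5)
x0 = rng.uniform(0, 64, (20, 3))
x1 = (x0 + rng.normal(0, 0.4, (20, 3))) % 64      # small random motion per step
speeds = np.linalg.norm(track_velocities(x0, x1, 64.0, 1.0), axis=1)
print(speeds.mean())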
Vanderborght, Jan; Vereecken, Harry
2002-01-01
The local scale dispersion tensor, Dd, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series or breakthrough curves, BTCs, of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: equivalent velocity, veq(x), and expected equivalent dispersivity, ⟨λeq(x)⟩. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of natural-log-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that ⟨λeq(x)⟩ increases with increasing travel distance and is much larger than the local scale dispersivity, λd. A sensitivity analysis indicates that ⟨λeq(x)⟩ is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse-to-flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted ⟨λeq(x)⟩ for a range of Dd values with ⟨λeq(x)⟩ obtained from locally measured BTCs, the transverse component of Dd, DdT, was estimated. The estimated transverse local scale dispersivity, λdT = DdT/U1 (U1 = mean advection velocity), is of the order of 10¹-10² mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
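A minimal sketch of characterizing one local BTC with equivalent convection-dispersion parameters (toy data, not the Krauthausen measurements): fit the 1D convection-dispersion solution for an instantaneous injection to a measured BTC at travel distance x, yielding veq and an equivalent dispersivity λeq = Deq/veq.

import numpy as np
from scipy.optimize import curve_fit

x = 10.0                                 # observation distance, m (assumed)

def cde_btc(t, v, D, m):
    # 1D CDE solution for an instantaneous pulse observed at distance x
    return m / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

t = np.linspace(5, 100, 200)             # days
true = cde_btc(t, v=0.25, D=0.05, m=50.0)
rng = np.random.default_rng(6)
obs = true + rng.normal(0, 0.01, t.size)  # toy "measured" BTC
(v, D, m), _ = curve_fit(cde_btc, t, obs, p0=[0.2, 0.1, 30.0])
print(f"v_eq = {v:.3f} m/d, lambda_eq = {D / v:.3f} m")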
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.
How Big is Too Big for Hubs: Marginal Profitability in Hub-and-Spoke Networks
NASA Technical Reports Server (NTRS)
Ross, Leola B.; Schmidt, Stephen J.
1997-01-01
Increasing the scale of hub operations at major airports has led to concerns about congestion at excessively large hubs. In this paper, we estimate the marginal cost of adding spokes to an existing hub network. We observe entry/non-entry decisions on potential spokes from existing hubs, and estimate both a variable profit function for providing service in markets using that spoke and the fixed costs of providing service to the spoke. We let the fixed costs depend upon the scale of operations at the hub, and find the hub size at which spoke service costs are minimized.
NASA Astrophysics Data System (ADS)
Straus, D. M.
2006-12-01
The transitions between portions of the state space of the large-scale flow are studied from daily wintertime data over the Pacific-North America region, using the NCEP reanalysis data set (54 winters) and very large suites of hindcasts made with the COLA atmospheric GCM with observed SST (55 members for each of 18 winters). The partition of the large-scale state space is guided by cluster analysis, whose statistical significance and relationship to SST is reviewed (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). The global nature of the flow through state space is studied using Markov chains (Crommelin, 2004). In particular, the non-diffusive part of the flow in nature (small data sample) is contrasted with that in the AGCM (large data sample). The intrinsic error growth associated with different portions of the state space is studied through sets of identical-twin AGCM simulations. The goal is to obtain realistic estimates of predictability times for large-scale transitions that should be useful in long-range forecasting.
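A minimal sketch of the Markov-chain diagnostic (toy labels, not the NCEP or COLA data): estimate a transition matrix from a daily sequence of cluster labels, then split the associated probability flux into a symmetric ("diffusive") and an antisymmetric ("non-diffusive", cyclic) part; the latter is one common way to isolate the non-diffusive component contrasted above.

import numpy as np

def transition_matrix(labels, k):
    T = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)     # row-stochastic estimate

rng = np.random.default_rng(7)
labels = rng.integers(0, 4, 5000)               # toy daily regime sequence
T = transition_matrix(labels, 4)
pi = np.full(4, 0.25)                           # stationary weights (toy: uniform)
F = pi[:, None] * T                             # probability flux i -> j
F_sym = 0.5 * (F + F.T)                         # diffusive part
F_anti = 0.5 * (F - F.T)                        # non-diffusive part
print(np.abs(F_anti).sum() / np.abs(F_sym).sum())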
NASA Astrophysics Data System (ADS)
Song, Dawei; Ponte Castañeda, P.
2018-06-01
We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix, and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC to obtain improved estimates, especially at low porosities, high nonlinearities, and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work, which has appeared in Song and Ponte Castañeda (2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.
NASA Astrophysics Data System (ADS)
Whidden, E.; Roulet, N.
2003-04-01
Interpretation of a site average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurement may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site average flux are investigated in the presence of an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiologic controls. A diffusion model is used to compare tower, chamber, and model data by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint-averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Agreement in flux estimates between methods improves with spatial interpretation, showing its importance to estimating a site average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to the overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.
Cost estimate for a proposed GDF Suez LNG testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.
2014-02-01
At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale liquefied natural gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.
Carbon storage in Chinese grassland ecosystems: Influence of different integrative methods.
Ma, Anna; He, Nianpeng; Yu, Guirui; Wen, Ding; Peng, Shunlei
2016-02-17
The accurate estimate of grassland carbon (C) is affected by many factors at the large scale. Here, we used six methods (three spatial interpolation methods and three grassland classification methods) to estimate C storage of Chinese grasslands based on published data from 2004 to 2014, and assessed the uncertainty resulting from different integrative methods. The uncertainty (coefficient of variation, CV, %) of grassland C storage was approximately 4.8% for the six methods tested, which was mainly determined by soil C storage. C density and C storage to the soil layer depth of 100 cm were estimated to be 8.46 ± 0.41 kg C m⁻² and 30.98 ± 1.25 Pg C, respectively. Ecosystem C storage was composed of 0.23 ± 0.01 (0.7%) above-ground biomass, 1.38 ± 0.14 (4.5%) below-ground biomass, and 29.37 ± 1.2 (94.8%) Pg C in the 0-100 cm soil layer. Carbon storage calculated by the grassland classification methods (18 grassland types) was closer to the mean value than that calculated by the spatial interpolation methods. Differences in integrative methods may partially explain the high uncertainty in C storage estimates in different studies. This first evaluation demonstrates the importance of multi-methodological approaches to accurately estimate C storage in large-scale terrestrial ecosystems.
Geometry of a large-scale, low-angle, midcrustal thrust (Woodroffe Thrust, central Australia)
NASA Astrophysics Data System (ADS)
Wex, S.; Mancktelow, N. S.; Hawemann, F.; Camacho, A.; Pennacchioni, G.
2017-11-01
The Musgrave Block in central Australia exposes numerous large-scale mylonitic shear zones developed during the intracontinental Petermann Orogeny around 560-520 Ma. The most prominent structure is the crustal-scale, over 600 km long, E-W trending Woodroffe Thrust, which is broadly undulate but generally dips shallowly to moderately to the south and shows an approximately top-to-north sense of movement. The estimated metamorphic conditions of mylonitization indicate a regional variation from predominantly midcrustal (circa 520-620°C and 0.8-1.1 GPa) to lower crustal (circa 650°C and 1.0-1.3 GPa) levels in the direction of thrusting, which is also reflected in the distribution of preserved deformation microstructures. This variation in metamorphic conditions is consistent with a south-dipping thrust plane but is only small, implying that a ≥60 km long N-S segment of the Woodroffe Thrust was originally shallowly dipping at an average estimated angle of ≤6°. The reconstructed geometry suggests that basement-cored, thick-skinned, midcrustal thrusts can be very shallowly dipping on a scale of many tens of kilometers in the direction of movement. Such a geometry would require the rocks along the thrust to be weak, but field observations (e.g., large volumes of syntectonic pseudotachylyte) argue for a strong behavior, at least transiently. Localization on a low-angle, near-planar structure that crosscuts lithological layers requires a weak precursor, such as a seismic rupture in the middle to lower crust. If this was a single event, the intracontinental earthquake must have been large, with the rupture extending laterally over hundreds of kilometers.
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-ε model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
Replica and extreme-value analysis of the Jarzynski free-energy estimator
NASA Astrophysics Data System (ADS)
Palassini, Matteo; Ritort, Felix
2008-03-01
We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes Monte Carlo data for the expectation value of the estimator well, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
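For reference, the estimator being analyzed is ΔF̂ = -(1/β) ln[(1/N) Σ_i exp(-β W_i)]. The minimal sketch below implements it with a log-sum-exp shift for numerical stability and checks it against the Gaussian-work case, where ΔF = ⟨W⟩ - βσ²/2 holds exactly; the finite-N upward bias it displays is precisely what the paper analyzes. The replica and extreme-value machinery itself is not reproduced.

```python
import numpy as np

def jarzynski_free_energy(work, beta=1.0):
    """Jarzynski estimator: DeltaF = -(1/beta) * ln< exp(-beta*W) >,
    averaged over N nonequilibrium work measurements.
    Uses a max-shift (log-sum-exp) for numerical stability."""
    w = -beta * np.asarray(work)
    m = w.max()
    return -(m + np.log(np.mean(np.exp(w - m)))) / beta

# Gaussian work distribution: exact DeltaF = <W> - beta*var(W)/2.
rng = np.random.default_rng(0)
mean_w, var_w = 5.0, 4.0              # mean dissipated work = beta*var/2 = 2
work = rng.normal(mean_w, np.sqrt(var_w), size=1000)
print(jarzynski_free_energy(work), mean_w - var_w / 2)  # biased high for small N
```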
Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers
NASA Technical Reports Server (NTRS)
Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)
1996-01-01
Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
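Light- and temperature-dependent isoprene algorithms are commonly written in the Guenther et al. (1993) form. The sketch below uses the published default coefficients of that form; whether this exact formulation and these coefficients match the ones used in this study is an assumption.

```python
import numpy as np

# Guenther et al. (1993)-style activity factors for isoprene; coefficient
# values are the published defaults, used here as an illustrative sketch.
R = 8.314                      # J mol-1 K-1
ALPHA, CL1 = 0.0027, 1.066     # light-response constants
CT1, CT2, TM, TS = 95000.0, 230000.0, 314.0, 303.0  # temperature constants

def gamma_light(ppfd):         # photosynthetic photon flux, umol m-2 s-1
    return ALPHA * CL1 * ppfd / np.sqrt(1.0 + ALPHA**2 * ppfd**2)

def gamma_temp(t):             # leaf temperature, K
    x = np.exp(CT1 * (t - TS) / (R * TS * t))
    return x / (1.0 + np.exp(CT2 * (t - TM) / (R * TS * t)))

def isoprene_flux(biomass, emission_factor, ppfd, t):
    """Flux = foliar biomass * ecosystem emission factor * light/temp scaling."""
    return biomass * emission_factor * gamma_light(ppfd) * gamma_temp(t)

print(isoprene_flux(300.0, 24.0, ppfd=1000.0, t=303.0))  # illustrative units
```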
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large-scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension combining this frequency-tagged entanglement with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology applies to the general case of large transformation parameters, for which no approximate values of the parameters are required; direct linearization of the rotation and scale parameters is thus avoided. The WTLS formulation takes into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely the affine (12-, 9-, and 8-parameter) and similarity (7- and 6-parameter) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
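For orientation, the sketch below implements the forward model of the 7-parameter similarity (Helmert) transformation with a full large-angle rotation matrix, i.e., the model whose parameters the WTLS adjustment estimates. The estimation step, the variance components, and all numerical values here are illustrative, not taken from the paper.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotations about the x, y, z axes (radians); valid for large angles,
    unlike the small-angle (linearized) form of classical geodesy."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def similarity_transform(xyz, shift, scale, rx, ry, rz):
    """7-parameter (Helmert) similarity transform: x_t = t + mu * R @ x."""
    return shift + scale * (rotation_matrix(rx, ry, rz) @ xyz.T).T

src = np.array([[100.0, 200.0, 50.0], [110.0, 190.0, 55.0]])  # start system
print(similarity_transform(src, shift=np.array([5.0, -3.0, 2.0]),
                           scale=1.01, rx=0.3, ry=-0.2, rz=0.5))
```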
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
[A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].
Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng
2006-09-01
The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10(5) quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of low signal-to-noise ratio. Then the redshifts of quasar spectra can be estimated by using the multi-scaling feature matching. The experiment with the 15,715 quasars from the SDSS DR2 shows that the correct rate of redshift estimated by the method is 95.13% within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio, which is applicable to the LAMOST data and helps to study quasars and the large-scale structure of the universe etc.
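A toy stand-in for this kind of feature matching is a grid search in redshift against the known rest-frame wavelengths of the broad quasar emission lines, sketched below. The rest wavelengths are standard values; the scoring rule and tolerance are assumptions and far cruder than the paper's wavelet multi-scale matching.

```python
import numpy as np

# Rest-frame wavelengths (Angstrom) of common broad quasar emission lines.
REST_LINES = np.array([1215.7, 1549.1, 1908.7, 2798.7, 4861.3])  # Lya, CIV, CIII], MgII, Hbeta

def estimate_redshift(peak_wavelengths, z_grid=np.linspace(0.0, 5.0, 5001), tol=5.0):
    """Grid-search the redshift that best matches detected emission-line peaks:
    score each trial z by how many observed peaks land within `tol` Angstrom
    of a redshifted rest line, and return the best-scoring z."""
    peaks = np.asarray(peak_wavelengths)
    scores = [
        np.sum(np.min(np.abs(peaks[:, None] - REST_LINES[None, :] * (1 + z)), axis=1) < tol)
        for z in z_grid
    ]
    return z_grid[int(np.argmax(scores))]

# peaks as they would appear at z = 2.0
print(estimate_redshift([3647.1, 4647.3, 5726.1]))  # ~2.0
```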
NASA Astrophysics Data System (ADS)
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Takada, Masahiro
2018-05-01
Stacked lensing is a powerful means of measuring the average mass distribution around large-scale structure tracers. There are two stacked lensing estimators used in the literature, denoted ΔΣ and γ_+, which are related as ΔΣ = Σ_cr γ_+, where Σ_cr(z_l, z_s) is the critical surface mass density for each lens-source pair (z_l and z_s are the lens and source redshifts, respectively). In this paper we derive a formula for the covariance matrix of the ΔΣ-estimator, focusing on the "weight" function to improve the signal-to-noise ratio (S/N). We assume that the lensing fields and the distribution of lensing objects obey Gaussian statistics. With this formula, we show that, if background galaxy shapes are weighted by Σ_cr^{-2}(z_l, z_s), the ΔΣ-estimator maximizes the S/N in the shot-noise-limited regime. We also show that the ΔΣ-estimator with the weight Σ_cr^{-2} gives a greater (S/N)^2 than that of the γ_+-estimator by about 5-25% for lensing objects at redshifts comparable with or higher than the median of source galaxy redshifts for hypothetical Subaru HSC and DES surveys. However, for low-redshift lenses such as z_l ≲ 0.3, the γ_+-estimator has higher (S/N)^2 than ΔΣ. We also discuss how the (S/N)^2 for ΔΣ at large separations in the sample-variance-limited regime can be boosted, by up to a factor of 1.5, if one adopts a weight of Σ_cr^{-α} with α > 2. Our formula allows one to explore how the combination of the different estimators can approach an optimal estimator in all regimes of redshifts and separation scales.
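Concretely, the weighted ΔΣ-estimator takes the form sketched below, with per-pair weights w_ls = Σ_cr^(-α); α = 2 is the shot-noise-optimal choice discussed in the paper. The data values and units are illustrative only.

```python
import numpy as np

def delta_sigma(gamma_t, sigma_cr, alpha=2.0):
    """Weighted stacked-lensing estimator (sketch of the standard form):
        DeltaSigma = sum_ls w_ls * Sigma_cr,ls * gamma_+,ls / sum_ls w_ls,
    with per-pair weights w_ls = Sigma_cr**(-alpha).  alpha = 2 is the
    shot-noise-optimal choice; alpha > 2 up-weights far-source pairs."""
    sigma_cr = np.asarray(sigma_cr)
    w = sigma_cr ** (-alpha)
    return np.sum(w * sigma_cr * np.asarray(gamma_t)) / np.sum(w)

gamma_t = np.array([0.010, 0.020, 0.015])        # tangential shear per lens-source pair
sigma_cr = np.array([4000.0, 3000.0, 3500.0])    # critical surface density (illustrative)
print(delta_sigma(gamma_t, sigma_cr))
```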
Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.
Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre
2006-01-01
Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.
Complexity as a Factor of Quality and Cost in Large Scale Software Development.
1979-12-01
[Scanned-report fragment: only table-of-contents headings are recoverable, e.g., "V. The Role of Complexity in Resource Estimation and Allocation", covering the allocation of testing resources by identifying independent substructures and heavily used logic paths, setting a design threshold, and sections on quality and testing and programming units.]
Temporal transferability of soil moisture calibration equations
USDA-ARS?s Scientific Manuscript database
Several large-scale field campaigns have been conducted over the last 20 years that require accurate estimates of soil moisture conditions. These measurements are manually conducted using soil moisture probes which require calibration. The calibration process involves the collection of hundreds of...
NASA Astrophysics Data System (ADS)
Feng, S.; Lauvaux, T.; Keller, K.; Davis, K. J.
2016-12-01
Current estimates of biogenic carbon fluxes over North America based on top-down atmospheric inversions are subject to considerable uncertainty. This uncertainty stems in large part from uncertain prior flux estimates and their associated error covariances, and from approximations in the atmospheric transport models that link observed carbon dioxide mixing ratios with surface fluxes. Specifically, approximations in the representation of vertical mixing associated with atmospheric turbulence or convective transport, together with largely under-determined prior fluxes and their error structures, significantly hamper our capacity to reliably estimate regional carbon fluxes. The Atmospheric Carbon and Transport - America (ACT-America) mission aims at reducing the uncertainties in inverse fluxes at the regional scale by deploying airborne and ground-based platforms to characterize atmospheric GHG mixing ratios and the concurrent atmospheric dynamics. Two aircraft measure the 3-dimensional distribution of greenhouse gases at synoptic scales, focusing on the atmospheric boundary layer and the free troposphere during both fair and stormy weather conditions. Here we analyze two main questions: (i) What level of information can we expect from the currently planned observations? (ii) How might ACT-America reduce the hindcast and predictive uncertainty of carbon estimates over North America?
Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have focused only on the neighbor discovery problem, while only a few provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
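As a generic illustration of integrated neighbor discovery and link quality estimation, the sketch below maintains a windowed packet-reception ratio per discovered neighbor and smooths it with an exponentially weighted moving average; the window size and smoothing factor are assumptions, and this is not one of the specific protocols evaluated in the paper.

```python
from collections import deque

class LinkEstimator:
    """Windowed packet-reception-ratio (PRR) estimator with EWMA smoothing.
    A generic sketch of integrated neighbor discovery / link estimation."""
    def __init__(self, window=16, alpha=0.9):
        self.window, self.alpha = window, alpha
        self.history = {}   # neighbor id -> deque of 0/1 beacon receptions
        self.quality = {}   # neighbor id -> smoothed PRR in [0, 1]

    def on_beacon(self, neighbor, received):
        h = self.history.setdefault(neighbor, deque(maxlen=self.window))
        h.append(1 if received else 0)
        prr = sum(h) / len(h)
        prev = self.quality.get(neighbor, prr)   # discovery: seed with first PRR
        self.quality[neighbor] = self.alpha * prev + (1 - self.alpha) * prr

est = LinkEstimator()
for ok in [1, 1, 0, 1, 1, 1, 0, 1]:
    est.on_beacon("node-7", ok)
print(est.quality["node-7"])
```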
NASA Astrophysics Data System (ADS)
De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; Coughlin, K.; Datta, R.; Devlin, M.; Dunkley, J.; Dunner, R.; Ferraro, S.; Fox, A.; Gallardo, P. A.; Halpern, M.; Hand, N.; Hasselfield, M.; Henderson, S. W.; Hill, J. C.; Hilton, G. C.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A.; Li, D.; Louis, T.; Lungu, M.; Madhavacheril, M. S.; Maurin, L.; McMahon, J.; Moodley, K.; Naess, S.; Nati, F.; Newburgh, L.; Nibarger, J. P.; Page, L. A.; Partridge, B.; Schaan, E.; Schmitt, B. L.; Sehgal, N.; Sievers, J.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R. J.; van Engelen, A.; Van Lanen, J.; Wollack, E. J.
2017-03-01
We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
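The mean pairwise momentum statistic referred to here is commonly estimated in the form sketched below (e.g., Hand et al. 2012), with a geometric weight c_ij built from the pair separation and the line-of-sight directions; whether this matches the authors' exact estimator and binning is an assumption, and the demo data are synthetic noise.

```python
import numpy as np

def pairwise_momentum(dT, rvec, rbins):
    """Mean pairwise kSZ estimator (commonly used form):
        p(r) = - sum_ij (dT_i - dT_j) c_ij / sum_ij c_ij**2,
    with c_ij = rhat_ij . (rhat_i + rhat_j) / 2 encoding the pair geometry.
    dT: temperature decrements at galaxy positions; rvec: comoving positions."""
    n = len(dT)
    num = np.zeros(len(rbins) - 1)
    den = np.zeros(len(rbins) - 1)
    rhat = rvec / np.linalg.norm(rvec, axis=1, keepdims=True)
    for i in range(n):
        for j in range(i + 1, n):
            sep = rvec[i] - rvec[j]
            r = np.linalg.norm(sep)
            k = np.searchsorted(rbins, r) - 1
            if 0 <= k < len(num):
                c = sep @ (rhat[i] + rhat[j]) / (2.0 * r)
                num[k] -= (dT[i] - dT[j]) * c
                den[k] += c * c
    return num / np.maximum(den, 1e-30)

rng = np.random.default_rng(1)
pos = rng.uniform(100.0, 400.0, (200, 3))   # comoving positions, Mpc (illustrative)
dT = rng.normal(0.0, 1.0, 200)              # uK, noise-only demo -> signal ~ 0
print(pairwise_momentum(dT, pos, np.linspace(0.0, 300.0, 7)))
```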
Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.
2013-01-01
Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
Stereoscopic perception of real depths at large distances.
Palmisano, Stephen; Gillam, Barbara; Govan, Donovan G; Allison, Robert S; Harris, Julie M
2010-06-01
There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where stereopsis is frequently claimed, without evidence, to be non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.
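The geometry behind these measurements is compact enough to state directly: the disparity between two points is the difference of their convergence angles. The sketch below evaluates it for the tunnel configuration; the interocular distance is an assumed typical value, not a parameter reported by the authors.

```python
def disparity_arcsec(near, far, iod=0.065):
    """Binocular disparity (arcsec) between points at viewing distances
    `near` and `far` (metres): the difference of their convergence angles,
        disparity = iod * (1/near - 1/far).
    iod = 0.065 m is a typical adult interocular distance (assumed)."""
    return iod * (1.0 / near - 1.0 / far) * 206265.0  # rad -> arcsec

# LEDs at 40 m and 40 + 248 m: the disparity is still hundreds of arcsec,
# well above typical stereoacuity thresholds of tens of arcsec.
print(disparity_arcsec(near=40.0, far=40.0 + 248.0))
```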
A Networked Sensor System for the Analysis of Plot-Scale Hydrology.
Villalba, German; Plaza, Fernando; Zhong, Xiaoyang; Davis, Tyler W; Navarro, Miguel; Li, Yimei; Slater, Thomas A; Liang, Yao; Liang, Xu
2017-03-20
This study presents the latest updates to the Audubon Society of Western Pennsylvania (ASWP) testbed, a $50,000 USD, 104-node outdoor multi-hop wireless sensor network (WSN). The network collects environmental data from over 240 sensors, including the EC-5, MPS-1 and MPS-2 soil moisture and soil water potential sensors and self-made sap flow sensors, across a heterogeneous deployment comprised of MICAz, IRIS and TelosB wireless motes. A low-cost sensor board and software driver was developed for communicating with the analog and digital sensors. Innovative techniques (e.g., balanced energy efficient routing and heterogeneous over-the-air mote reprogramming) maintained high success rates (>96%) and enabled effective software updating, throughout the large-scale heterogeneous WSN. The edaphic properties monitored by the network showed strong agreement with data logger measurements and were fitted to pedotransfer functions for estimating local soil hydraulic properties. Furthermore, sap flow measurements, scaled to tree stand transpiration, were found to be at or below potential evapotranspiration estimates. While outdoor WSNs still present numerous challenges, the ASWP testbed proves to be an effective and (relatively) low-cost environmental monitoring solution and represents a step towards developing a platform for monitoring and quantifying statistically relevant environmental parameters from large-scale network deployments.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction there are background errors at different scales, with interactions among them; however, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information about errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics influenced by error information at different scales reveals that the background error variances are enhanced, particularly at large scales and higher levels, when information about larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while showing slight improvement at lower levels in the nested domain, especially at medium and small scales, when information about smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information about larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information about larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Comparison of both analyses and forecasts from this system shows that the trends in analysis increments with information about different scale errors introduced are consistent with the trends in the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets, and analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern analyses. This humidification helps correct a dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
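The NMC method mentioned here estimates the background error covariance from differences between pairs of forecasts of different lead times valid at the same time. A minimal sketch, assuming 48 h minus 24 h forecast pairs and a simple scaling factor:

```python
import numpy as np

def nmc_background_covariance(f48, f24, scale=1.0):
    """NMC method (standard sketch): approximate the background error
    covariance B from differences between 48 h and 24 h forecasts valid
    at the same time, B ~ scale * cov(x48 - x24).
    Rows = samples (valid times); columns = model state components."""
    d = np.asarray(f48) - np.asarray(f24)          # forecast differences
    d = d - d.mean(axis=0, keepdims=True)
    return scale * (d.T @ d) / (d.shape[0] - 1)

rng = np.random.default_rng(2)
f24 = rng.normal(size=(200, 5))                    # synthetic forecast pairs
f48 = f24 + 0.3 * rng.normal(size=(200, 5))
print(np.round(nmc_background_covariance(f48, f24), 3))
```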
NASA Astrophysics Data System (ADS)
Flinchum, B. A.; Holbrook, W. S.; Grana, D.; Parsekian, A.; Carr, B.; Jiao, J.
2017-12-01
Porosity is generated by chemical, physical and biological processes that work to transform bedrock into soil. The resulting porosity structure can provide specifics about these processes and can improve understanding of groundwater storage in the deep critical zone. Near-surface geophysical methods, when combined with rock physics and drilling, can be a tool used to map porosity over large spatial scales. In this study, we estimate porosity in three dimensions (3D) across a 58 ha granite catchment. Observations focus on seismic refraction, downhole nuclear magnetic resonance logs, downhole sonic logs, and samples of core acquired by push coring. We use a novel petrophysical approach integrating two rock physics models, a porous medium for the saprolite and a differential effective medium for the fractured rock, which drive a Bayesian inversion to calculate porosity from seismic velocities. The inverted geophysical porosities are within about 0.05 m3/m3 of lab-measured values. We extrapolate the porosity estimates below the seismic refraction lines to a 3D volume using ordinary kriging to map the distribution of porosity in 3D to depths of 80 m. This study provides a map of porosity on a scale not previously achieved in critical zone science. Estimating porosity on these large spatial scales opens the door to improving our understanding of the processes that shape the deep critical zone.
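As a highly simplified stand-in for the velocity-to-porosity step, the sketch below inverts the classical Wyllie time-average relation; the paper's combined porous-medium and differential-effective-medium models and its Bayesian inversion are substantially more sophisticated, and the matrix and fluid velocities here are assumed illustrative values.

```python
import numpy as np

def wyllie_porosity(vp, v_matrix=5500.0, v_fluid=1500.0):
    """Porosity from P-wave velocity via the Wyllie time-average relation:
        1/vp = phi/v_fluid + (1 - phi)/v_matrix.
    A much simpler rock-physics model than the paper's; velocities in m/s
    are illustrative granite/water values."""
    phi = (1.0 / vp - 1.0 / v_matrix) / (1.0 / v_fluid - 1.0 / v_matrix)
    return np.clip(phi, 0.0, 1.0)

print(wyllie_porosity(np.array([2500.0, 4000.0, 5200.0])))  # slower -> more porous
```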
NASA Astrophysics Data System (ADS)
Tian, Siyuan; Tregoning, Paul; Renzullo, Luigi J.; van Dijk, Albert I. J. M.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.; Allgeyer, Sébastien
2017-03-01
The accuracy of global water balance estimates is limited by the lack of observations at large scale and the uncertainties of model simulations. Global retrievals of terrestrial water storage (TWS) change and soil moisture (SM) from satellites provide an opportunity to improve model estimates through data assimilation. However, combining these two data sets is challenging due to the disparity in temporal and spatial resolution at both vertical and horizontal scales. For the first time, TWS observations from the Gravity Recovery and Climate Experiment (GRACE) and near-surface SM observations from the Soil Moisture and Ocean Salinity (SMOS) mission were jointly assimilated into a water balance model using the Ensemble Kalman Smoother from January 2010 to December 2013 for the Australian continent. The performance of the joint assimilation was assessed against open-loop model simulations and the assimilation of either GRACE TWS anomalies or SMOS SM alone. The SMOS-only assimilation improved SM estimates but reduced the accuracy of groundwater and TWS estimates. The GRACE-only assimilation improved groundwater estimates but did not always produce accurate estimates of SM. The joint assimilation typically led to more accurate water storage profile estimates, with improved surface SM, root-zone SM, and groundwater estimates against in situ observations. The assimilation successfully downscaled GRACE-derived integrated water storage horizontally and vertically into individual water stores at the same spatial scale as the model and SMOS, and partitioned monthly averaged TWS into daily estimates. These results demonstrate that satellite TWS and SM measurements can be jointly assimilated to produce improved water balance component estimates.
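The core update underlying this kind of assimilation is the ensemble Kalman analysis step. The sketch below shows a generic stochastic (perturbed-observation) version with a three-store state observed only at the surface, so the unobserved stores are corrected through ensemble correlations; the paper's Ensemble Kalman Smoother applies the analogous update over a time window, and all numbers here are synthetic.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) ensemble Kalman analysis step.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: observation operator; R: observation error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                     # perturbed-obs update

rng = np.random.default_rng(3)
common = rng.normal(0.0, 2.0, size=(1, 50))        # shared error -> correlated stores
X = 10.0 + np.vstack([common, common, common]) + 0.5 * rng.normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])                    # observe surface SM only
Xa = enkf_update(X, y=np.array([12.0]), H=H, R=np.eye(1) * 0.5, rng=rng)
print(Xa.mean(axis=1))  # root-zone and groundwater shift via ensemble correlations
```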
On identifying relationships between the flood scaling exponent and basin attributes.
Medhi, Hemanta; Tripathi, Shivam
2015-07-01
Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, a region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that the flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which snowfall dominates the total precipitation.
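The scaling relation being fitted is the power law Q = c A^θ, estimated by ordinary least squares in log-log space, as sketched below with invented illustrative numbers:

```python
import numpy as np

def flood_scaling_exponent(area, qpeak):
    """Estimate the flood scaling exponent theta in Q = c * A**theta by
    ordinary least squares on log-transformed data (the relation fitted
    within each homogeneous region)."""
    theta, log_c = np.polyfit(np.log(area), np.log(qpeak), 1)
    return theta, np.exp(log_c)

area = np.array([50.0, 200.0, 800.0, 3200.0])   # basin areas, km^2 (illustrative)
qpeak = np.array([30.0, 75.0, 190.0, 480.0])    # mean annual peak flow, m^3/s
theta, c = flood_scaling_exponent(area, qpeak)
print(round(theta, 3), round(c, 2))             # theta ~ 0.67 for these numbers
```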
Perry, Joe N; Devos, Yann; Arpaia, Salvatore; Bartsch, Detlef; Ehlert, Christina; Gathmann, Achim; Hails, Rosemary S; Hendriksen, Niels B; Kiss, Jozsef; Messéan, Antoine; Mestdagh, Sylvie; Neemann, Gerd; Nuti, Marco; Sweet, Jeremy B; Tebbe, Christoph C
2012-01-01
In farmland biodiversity, a potential risk to the larvae of non-target Lepidoptera from genetically modified (GM) Bt-maize expressing insecticidal Cry1 proteins is the ingestion of harmful amounts of pollen deposited on their host plants. A previous mathematical model of exposure quantified this risk for Cry1Ab protein. We extend this model to quantify the risk for sensitive species exposed to pollen containing Cry1F protein from maize event 1507 and to provide recommendations for management to mitigate this risk. A 14-parameter mathematical model integrating small- and large-scale exposure was used to estimate the larval mortality of hypothetical species with a range of sensitivities, and under a range of simulated mitigation measures consisting of non-Bt maize strips of different widths placed around the field edge. The greatest source of variability in estimated mortality was species sensitivity. Before allowance for effects of large-scale exposure, with moderate within-crop host-plant density and with no mitigation, estimated mortality locally was <10% for species of average sensitivity. For the worst-case extreme sensitivity considered, estimated mortality locally was 99·6% with no mitigation, although this estimate was reduced to below 40% with mitigation of 24-m-wide strips of non-Bt maize. For highly sensitive species, a 12-m-wide strip reduced estimated local mortality under 1·5%, when within-crop host-plant density was zero. Allowance for large-scale exposure effects would reduce these estimates of local mortality by a highly variable amount, but typically of the order of 50-fold. Mitigation efficacy depended critically on assumed within-crop host-plant density; if this could be assumed negligible, then the estimated effect of mitigation would reduce local mortality below 1% even for very highly sensitive species. Synthesis and applications. Mitigation measures of risks of Bt-maize to sensitive larvae of non-target lepidopteran species can be effective, but depend on host-plant densities which are in turn affected by weed-management regimes. We discuss the relevance for management of maize events where cry1F is combined (stacked) with a herbicide-tolerance trait. This exemplifies how interactions between biota may occur when different traits are stacked irrespective of interactions between the proteins themselves and highlights the importance of accounting for crop management in the assessment of the ecological impact of GM plants. PMID:22496596
NASA Astrophysics Data System (ADS)
Zhang, Bowen; Tian, Hanqin; Lu, Chaoqun; Chen, Guangsheng; Pan, Shufen; Anderson, Christopher; Poulter, Benjamin
2017-09-01
A wide range of estimates of global wetland methane (CH4) fluxes has been reported during the past two decades. This gives rise to an urgent need to identify the uncertainty sources and produce a reconciled estimate of global CH4 fluxes from wetlands. Most estimates using the bottom-up approach rely on wetland data sets, but these data sets are largely inconsistent in terms of both wetland extent and spatiotemporal distribution. A quantitative assessment of the uncertainties associated with these discrepancies among wetland data sets has not yet been well investigated. By comparing five widely used global wetland data sets (GISS, GLWD, Kaplan, GIEMS and SWAMPS-GLWD), in this study we found large differences in wetland extent, ranging from 5.3 to 10.2 million km2, as well as in their spatial and temporal distributions. These discrepancies in wetland data sets resulted in large biases in model-estimated global wetland CH4 emissions as simulated by the Dynamic Land Ecosystem Model (DLEM). The model simulations indicated that the mean global wetland CH4 emissions during 2000-2007 were 177.2 ± 49.7 Tg CH4 yr-1, based on the five different data sets. The tropical regions contributed the largest portion of estimated CH4 emissions from global wetlands, but also had the largest discrepancy. Among the six continents, the largest uncertainty was found in South America. Thus, improved estimates of wetland extent and CH4 emissions in the tropical regions and South America would be a critical step toward an accurate estimate of global CH4 emissions. This uncertainty analysis also reveals an important need for our scientific community to generate a global-scale wetland data set with higher spatial resolution and shorter time interval, by integrating multiple sources of field and satellite data with modeling approaches, for cross-scale extrapolation.
Retrieving Baseflow from SWOT Mission
NASA Astrophysics Data System (ADS)
Baratelli, F.; Flipo, N.; Biancamaria, S.; Rivière, A.
2017-12-01
The quantification of the aquifer contribution to river discharge is of primary importance to evaluate the impact of climatic and anthropogenic stresses on the availability of water resources. Several baseflow estimation methods require river discharge measurements, which can be difficult to obtain at high spatio-temporal resolution for large-scale basins. The SWOT satellite mission will provide discharge estimations for large rivers (50 - 100 m wide) even in remote basins. The frequency of these estimations depends on the position and ranges from zero to four values per 21-day satellite cycle. This work aims at answering the following question: can baseflow be estimated from SWOT observations during the mission lifetime? An algorithm based on hydrograph separation by Chapman's filter was developed to automatically estimate the baseflow in a river network at regional or larger scale (> 10000 km2). The algorithm was first applied using the discharge time series simulated at a daily time step by a coupled hydrological-hydrogeological model to obtain the reference baseflow estimations. The same algorithm was then forced with discharge time series sampled at the SWOT observation frequency. The methodology was applied to the Seine River basin (65000 km2, France). The results show that the average baseflow is estimated with good accuracy for all reaches observed at least once per cycle (relative bias less than 4%). The time evolution of baseflow is also rather well retrieved, with a Nash coefficient above 0.7 for 94% of the network length. This work provides new potential for the SWOT mission in terms of global hydrological analysis.
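A minimal sketch of the hydrograph separation step, using the one-parameter recursive form commonly referred to as the Chapman (Chapman-Maxwell) filter; the recession constant is an assumed typical daily-data default, not a value from the paper:

```python
import numpy as np

def chapman_baseflow(q, a=0.925):
    """One-parameter recursive digital filter for hydrograph separation
    (Chapman-Maxwell form):
        b[k] = a/(2-a) * b[k-1] + (1-a)/(2-a) * q[k],  with b[k] <= q[k].
    `a` is the recession constant; 0.925 is a common daily-data default
    (an assumption here)."""
    b = np.zeros_like(q)
    b[0] = q[0]
    for k in range(1, len(q)):
        b[k] = min((a * b[k - 1] + (1.0 - a) * q[k]) / (2.0 - a), q[k])
    return b

q = np.array([5.0, 20.0, 60.0, 35.0, 18.0, 10.0, 7.0, 6.0])  # daily discharge, m3/s
print(chapman_baseflow(q))   # slowly varying baseflow component under the hydrograph
```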
Menon, Purnima; McDonald, Christine M; Chakrabarti, Suman
2016-05-01
India's national nutrition and health programmes are largely designed to provide evidence-based nutrition-specific interventions, but intervention coverage is low due to a combination of implementation challenges, capacity and financing gaps. Global cost estimates for nutrition are available but national and subnational costs are not. We estimated national and subnational costs of delivering recommended nutrition-specific interventions using the Scaling Up Nutrition (SUN) costing approach. We compared costs of delivering the SUN interventions at 100% scale with those of nationally recommended interventions. Target populations (TP) for interventions were estimated using national population and nutrition data. Unit costs (UC) were derived from programmatic data. The cost of delivering an intervention at 100% coverage was calculated as (UC*projected TP). Cost estimates varied; estimates for SUN interventions were lower than estimates for nationally recommended interventions because of differences in choice of intervention, target group or unit cost. US$5.9bn/year are required to deliver a set of nationally recommended nutrition interventions at scale in India, while US$4.2bn are required for the SUN interventions. Cash transfers (49%) and food supplements (40%) contribute most to costs of nationally recommended interventions, while food supplements to prevent and treat malnutrition contribute most to the SUN costs. We conclude that although such costing is useful to generate broad estimates, there is an urgent need for further costing studies on the true unit costs of the delivery of nutrition-specific interventions in different local contexts to be able to project accurate national and subnational budgets for nutrition in India. © 2016 The Authors. Maternal & Child Nutrition published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
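The DTW similarity at the heart of HC-DTW can be sketched as follows; pairwise DTW distances like these would then feed a hierarchical clustering step (e.g., average linkage) to group speed profiles into representative link driving schedules. The profiles below are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two speed profiles -- the
    similarity measure used to cluster vehicle trajectories (generic DTW;
    the paper's HC-DTW clustering details are not reproduced here)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# second-by-second speed profiles (km/h) through a signalized intersection
stop_and_go = np.array([40, 25, 10, 0, 0, 15, 35, 50])
free_flow = np.array([45, 48, 50, 50, 49, 50, 51, 50])
print(dtw_distance(stop_and_go, free_flow))    # large distance -> different clusters
print(dtw_distance(stop_and_go, stop_and_go))  # 0.0
```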
Meridional overturning and large-scale circulation of the Indian Ocean
NASA Astrophysics Data System (ADS)
Ganachaud, Alexandre; Wunsch, Carl; Marotzke, Jochem; Toole, John
2000-11-01
The large-scale Indian Ocean circulation is estimated from a global hydrographic inverse geostrophic box model with a focus on the meridional overturning circulation (MOC). The global model is based on selected recent World Ocean Circulation Experiment (WOCE) sections which in the Indian Basin consist of zonal sections at 32°S, 20°S and 8°S, and a section between Bali and Australia from the Java-Australia Dynamic Experiment (JADE). The circulation is required to conserve mass, salinity, heat, silica and "PO" (170 PO4 + O2). Near-conservation is imposed within layers bounded by neutral surfaces, while permitting advective and diffusive exchanges between the layers. Conceptually, the derived circulation is an estimate of the average circulation for the period 1987-1995. A deep inflow into the Indian Basin of 11±4 Sv is found, which is in the lower range of previous estimates, but consistent with conservation requirements and the global data set. The Indonesian Throughflow (ITF) is estimated at 15±5 Sv. The flow in the Mozambique Channel is of the same magnitude, implying a weak net flow between Madagascar and Australia. A net evaporation of -0.6±0.4 Sv is found between 32°S and 8°S, consistent with independent estimates. No net heat gain is found over the Indian Basin (0.1 ± 0.2 PW north of 32°S) as a consequence of the large warm water influx from the ITF. Through the use of anomaly equations, the average dianeutral upwelling and diffusion between the sections are required and resolved, with values in the range 1-3×10-5 cm s-1 for the upwelling and 2-10 cm2 s-1 for the diffusivity.
Condition Number Estimation of Preconditioned Matrices
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos-connection-based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and matrices generated with the finite element method. PMID:25816331
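A sketch of the Hager-style 1-norm estimation underlying such a method: the operator is accessed only through forward and transpose products (for a preconditioned matrix, via solves and matvecs), so the dense preconditioned matrix never needs to be formed. The stopping rule follows Hager's classical algorithm; the demo matrix and preconditioner are illustrative, not the paper's test cases.

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, maxiter=5):
    """Hager's estimator for the 1-norm of a linear operator given only
    products x -> B x and x -> B^T x, so B is never formed explicitly."""
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(maxiter):
        y = matvec(x)
        est = np.abs(y).sum()            # current estimate of ||B||_1
        z = rmatvec(np.sign(y))
        j = int(np.argmax(np.abs(z)))
        if np.abs(z).max() <= z @ x:     # Hager's optimality test
            break
        x = np.zeros(n)
        x[j] = 1.0                       # move to the most promising unit vector
    return est

# condition number of a diagonally preconditioned matrix M^{-1} A
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
M = np.diag(np.diag(A))
B = np.linalg.solve(M, A)                # formed here only for the small demo
fwd = lambda v: B @ v
adj = lambda v: B.T @ v
inv = lambda v: np.linalg.solve(B, v)
invT = lambda v: np.linalg.solve(B.T, v)
est = hager_norm1(fwd, adj, 3) * hager_norm1(inv, invT, 3)
print(est, np.linalg.cond(B, 1))         # estimate vs exact 1-norm condition number
```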
On the use of a physically-based baseflow timescale in land surface models.
NASA Astrophysics Data System (ADS)
Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.
2017-12-01
Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for modelling an accurate large-scale water balance in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at the global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable and physically-based τ in the ORCHIDEE land surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the high-resolution GLHYMPS datasets, whereas flow length is derived from an estimate of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions of the analytical method are valid.
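One standard long-time result of this kind, assuming it matches the paper's parametrization, gives the baseflow timescale from the aquifer half-width L and hydraulic diffusivity D = T/φ_d, with L itself estimated from drainage density:

```python
import numpy as np

def baseflow_timescale(transmissivity, drainable_porosity, drainage_density):
    """Baseflow e-folding timescale from the long-time (slowest-mode) solution
    of the linearized Boussinesq equation on an aquifer of half-width L:
        tau = 4 * L**2 / (pi**2 * D),  D = T / phi_d,  L ~ 1 / (2 * Dd).
    This is the textbook hydraulic-groundwater-theory result; whether it is
    the paper's exact parametrization is an assumption."""
    L = 1.0 / (2.0 * drainage_density)           # mean flow length, m
    D = transmissivity / drainable_porosity      # hydraulic diffusivity, m2/s
    return 4.0 * L**2 / (np.pi**2 * D)

tau_s = baseflow_timescale(transmissivity=1e-3,      # m2/s (illustrative)
                           drainable_porosity=0.05,
                           drainage_density=1e-3)    # m-1 -> L = 500 m
print(tau_s / 86400.0, "days")                       # ~59 days for these values
```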
NASA Astrophysics Data System (ADS)
Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly
2010-05-01
Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data for defining fire weather danger and fire regimes, so that large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be used with confidence to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. The FWI requires local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-to-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between the large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.
Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey
2012-01-01
The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km2 resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km2 scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R2 = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R2 of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Vavagiakis, Eve Marie; De Bernardis, Francesco; Aiola, Simone; Battaglia, Nicholas; Niemack, Michael D.; ACTPol Collaboration
2017-06-01
We have made improved measurements of the kinematic Sunyaev-Zel’dovich (kSZ) effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). We used a map of the Cosmic Microwave Background (CMB) from two seasons of observations each by ACT and the Atacama Cosmology Telescope Polarimeter (ACTPol) receiver. We evaluated the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog within 600 square degrees of overlapping sky area. The measurement of the kSZ signal arising from the large-scale motions of clusters was made by fitting the data to an analytical model, with the free parameter of the fit determining the optical depth to microwave photon scattering for the cluster sample. We estimated the covariance matrix of the mean pairwise momentum as a function of galaxy separation using CMB simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based uncertainties give signal-to-noise estimates between 3.6 and 4.1 for various luminosity cuts. Additionally, we explored a novel approach to estimating cluster optical depths from the average thermal Sunyaev-Zel’dovich (tSZ) signal at the BOSS DR11 catalog positions; the results were broadly consistent with those obtained from the kSZ signal. In the future, the tSZ signal may provide a valuable probe of cluster optical depths, enabling the extraction of velocities from the kSZ-sourced mean pairwise momenta. New CMB maps from three seasons of ACTPol observations with multi-frequency coverage overlap with nearly four times as many DR11 sources and promise to improve statistics and systematics for SZ measurements. With these and other upcoming data, the pairwise kSZ signal is poised to become a powerful new cosmological tool, able to probe large physical scales to inform neutrino physics and test models of modified gravity and dark energy.
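A minimal sketch of a mean pairwise momentum estimator of the kind described, assuming the standard Hand et al. (2012) form with geometric line-of-sight weights (the collaboration's exact weighting may differ):

```python
import numpy as np

def pairwise_ksz(dT, chi, nhat, r_edges):
    """Mean pairwise kSZ momentum estimator, assuming the standard form
    phat(r) = -sum_ij (dT_i - dT_j) c_ij / sum_ij c_ij^2 with geometric
    line-of-sight weights c_ij. dT: aperture temperatures (uK), chi: comoving
    distances, nhat: (N, 3) unit line-of-sight vectors."""
    cos = np.clip(nhat @ nhat.T, -1.0, 1.0)
    sep = np.sqrt(chi[:, None]**2 + chi[None, :]**2
                  - 2.0 * chi[:, None] * chi[None, :] * cos)
    i, j = np.triu_indices(len(dT), k=1)            # each pair once
    c = (chi[i] - chi[j]) * (1.0 + cos[i, j]) / (2.0 * sep[i, j])
    num, _ = np.histogram(sep[i, j], bins=r_edges, weights=(dT[i] - dT[j]) * c)
    den, _ = np.histogram(sep[i, j], bins=r_edges, weights=c**2)
    return -num / np.where(den > 0, den, np.nan)
```

This brute-force version is O(N²) in memory, so a 50,000-galaxy catalog would in practice be processed in chunks.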
The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.
1985-01-01
Manabe (1982) reviewed numerical simulations of the atmosphere that provide a framework within which the dynamics of the hydrological cycle can be examined. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge now is to improve observations of soil moisture so as to provide updated boundary-condition inputs to large-scale models that include the hydrological cycle. Attention is given to the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.
NASA Astrophysics Data System (ADS)
Doo, Steve S.; Hamylton, Sarah; Finfer, Joshua; Byrne, Maria
2017-03-01
Large benthic foraminifera (LBFs) are a vital component of coral reef carbonate production that is often overlooked because of their small size. These super-abundant calcifiers are crucial to reef calcification through the generation of lagoon and beach sands. Reef-scale carbonate production by LBFs is not well understood, and seasonal fluctuations in this important process are largely unquantified. The biomass of five LBF species in their algal-flat habitat was quantified in the austral winter (July 2013), spring (October 2013), and summer (February 2014) at One Tree Reef. WorldView-2 satellite images were used to characterize and create LBF habitat maps based on ground-referenced photographs of algal cover. Habitat maps and LBF biomass measurements were combined to estimate carbonate storage across the entire reef flat. Total carbonate storage of LBFs on the reef flat ranged from 270 tonnes (winter) to 380 tonnes (summer). Satellite images indicate that the habitat area used by LBFs ranged from 0.60 km2 (winter) to 0.71 km2 (spring), out of a total possible area of 0.96 km2. LBF biomass was highest in the winter, when algal habitat area was lowest, but total carbonate storage was highest in the summer, when algal habitat area was intermediate. Our data suggest that biomass measurements alone do not capture the total abundance of LBF populations (carbonate storage), as the area of available habitat is variable. These results suggest that LBF carbonate production studies measuring biomass at discrete locations and single time points fail to capture reef-scale production accurately because they do not incorporate estimates of the associated algal habitat. The reef-scale measurements in this study can be incorporated into carbonate production models to determine the role of LBFs in sedimentary landforms (lagoons, beaches, etc.). Based on previous models of entire-reef metabolism, our estimates indicate that LBFs contribute approximately 3.9-5.4% of reef carbonate budgets, a previously underappreciated carbon sink.
Wang, WeiBo; Sun, Wei; Wang, Wei; Szatkiewicz, Jin
2018-03-01
The application of high-throughput sequencing in a broad range of quantitative genomic assays (e.g., DNA-seq, ChIP-seq) has created a high demand for the analysis of large-scale read-count data. Typically, the genome is divided into tiling windows and windowed read-count data are generated for the entire genome, from which genomic signals are detected (e.g., copy number changes in DNA-seq, enrichment peaks in ChIP-seq). For accurate analysis of read-count data, many state-of-the-art statistical methods use generalized linear models (GLM) coupled with the negative-binomial (NB) distribution, leveraging its ability for simultaneous bias correction and signal detection. However, although statistically powerful, the GLM+NB method has a quadratic computational complexity and therefore suffers from slow running time when applied to large-scale windowed read-count data. In this study, we aimed to substantially speed up the GLM+NB method by using a randomized algorithm, and we demonstrate the utility of our approach in detecting copy number variants (CNVs) using a real example. We propose an efficient estimator, the randomized GLM+NB coefficients estimator (RGE), for speeding up the GLM+NB method. RGE samples the read-count data and solves the estimation problem on a smaller scale. We first theoretically validated the consistency and the variance properties of RGE. We then applied RGE to GENSENG, a GLM+NB based method for detecting CNVs, and named the resulting method "R-GENSENG". Based on extensive evaluation using both simulated and empirical data, we conclude that R-GENSENG is ten times faster than the original GENSENG while maintaining GENSENG's accuracy in CNV detection. Our results suggest that the RGE strategy developed here could be applied to other GLM+NB based read-count analyses, e.g., ChIP-seq data analysis, to substantially improve their computational efficiency while preserving the analytic power.
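A minimal sketch of the subsampling idea behind RGE, assuming a uniform random subsample and the statsmodels negative-binomial regression (the published estimator's sampling and weighting scheme is more elaborate):

```python
import numpy as np
import statsmodels.api as sm

def randomized_nb_glm(y, X, frac=0.1, seed=0):
    """Sketch of the RGE idea (assumed simplification): fit the
    negative-binomial GLM on a uniform random subsample of genomic windows
    instead of all of them, cutting the cost of each fit."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    model = sm.NegativeBinomial(y[idx], sm.add_constant(X[idx]))
    return model.fit(disp=0)   # coefficient estimates from the subsample
```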
The cross-over to magnetostrophic convection in planetary dynamo systems
Aurnou, J. M.; King, E. M.
2017-01-01
Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system-scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth’s core, the cross-over scale is approximately 1/1000 of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales, below those that can be characterized by geomagnetic observations. PMID:28413338
Estimating ecosystem service changes as a precursor to modeling
EPA's Future Midwestern Landscapes Study will project changes in ecosystem services (ES) for alternative future policy scenarios in the Midwestern U.S. Doing so for detailed landscapes over large spatial scales will require serial application of economic and ecological models. W...
Thirty Years of Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
2001-01-01
Discusses relationships between a mathematical measurement model and its real-world applications. Makes a distinction between large-scale data matrices commonly found in educational measurement and smaller matrices found in attitude and personality measurement. Also evaluates nonparametric methods for estimating item response functions and…
Epigenetic supersimilarity of monozygotic twin pairs
USDA-ARS?s Scientific Manuscript database
Monozygotic twins have long been studied to estimate heritability and explore epigenetic influences on phenotypic variation. The phenotypic and epigenetic similarities of monozygotic twins have been assumed to be largely due to their genetic identity. Here, by analyzing data from a genome-scale stud...
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis of the critical rainfall value for predicting large-scale landslides caused by heavy rainfall in Taiwan. Kuang-Jung Tsai 1, Jie-Lun Chiang 2, Ming-Hsi Lee 2, Yie-Ruey Chen 1. 1 Department of Land Management and Development, Chang Jung Christian University, Tainan, Taiwan. 2 Department of Soil and Water Conservation, National Pingtung University of Science and Technology, Pingtung, Taiwan. ABSTRACT: An accumulated rainfall of more than 2,900 mm was recorded within three consecutive days during Typhoon Morakot in August 2009. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. The characteristics and mechanisms of large-scale landslides are compiled from field investigations integrated with GPS/GIS/RS techniques. To decrease the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and put into operation as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the extreme climate change of the past 10 years are therefore the required focus of this research. Hopefully, the results developed from this research can be used in a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value
Generalized scaling of seasonal thermal stratification in lakes
NASA Astrophysics Data System (ADS)
Shatwell, T.; Kirillin, G.
2016-12-01
The mixing regime is fundamental to the biogeochemistry and ecology of lakes because it determines the vertical transport of matter such as gases, nutrients, and organic material. Whereas shallow lakes are usually polymictic and regularly mix to the bottom, deep lakes tend to stratify seasonally, separating surface water from deep sediments and deep water from the atmosphere. Although empirical relationships exist to predict the mixing regime, a physically based, quantitative criterion has been lacking. Here we review our recent research on thermal stratification in lakes at the transition between polymictic and stratified regimes. Using the mechanistic balance between potential and kinetic energy in terms of the Richardson number, we derive a generalized physical scaling for seasonal stratification in a closed lake basin. The scaling parameter is the critical mean basin depth that delineates polymictic and seasonally stratified lakes, based on lake water transparency (Secchi depth), lake length, and an annual mean estimate of the Monin-Obukhov length. We validated the scaling on available data for 374 lakes worldwide using logistic regression and found it to perform better than other criteria, including a conventional open-basin scaling or a simple depth threshold. The scaling has potential applications in estimating large-scale greenhouse gas fluxes from lakes because the required inputs, like water transparency and basin morphology, can be acquired using the latest remote sensing technologies. The generalized scaling is universal for freshwater lakes and allows the seasonal mixing regime to be estimated without numerically solving the heat transport equations.
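As a hypothetical illustration of such a validation step, a logistic regression on the ratio of mean depth to the scaling's critical depth (the variable names and log-ratio covariate are assumptions, not the authors' exact setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def scaling_skill(mean_depth, critical_depth, is_stratified):
    """Hypothetical check of the scaling: a lake should stratify seasonally
    when its mean depth exceeds the critical depth predicted from transparency,
    lake length and the Monin-Obukhov length. Logistic regression on the log
    depth ratio measures how well the criterion separates the two regimes."""
    x = np.log(np.asarray(mean_depth) / np.asarray(critical_depth)).reshape(-1, 1)
    clf = LogisticRegression().fit(x, is_stratified)
    return clf.score(x, is_stratified)   # fraction of lakes classified correctly
```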
Two methods for estimating limits to large-scale wind power generation
Miller, Lee M.; Brunsell, Nathaniel A.; Mechem, David B.; Gans, Fabian; Monaghan, Andrew J.; Vautard, Robert; Keith, David W.; Kleidon, Axel
2015-01-01
Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10^5 km² region in the central United States. The WRF simulations yield a maximum generation of 1.1 W_e m⁻², whereas the VKE method reproduces the time series but underestimates the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree to within 20% because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 W_e m⁻², with VKE capturing this combination in a comparatively simple way. PMID:26305925
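A minimal sketch of the VKE bookkeeping, assuming the downward kinetic energy flux is approximated as momentum flux times hub-height wind speed (the paper's own flux definition may differ):

```python
def vke_generation_limit(rho, ustar, u_hub):
    """Sketch of the VKE flux method under assumed definitions: the downward
    kinetic energy flux of the pre-turbine climatology is taken as momentum
    flux (rho * ustar^2) times hub-height wind speed, and at most ~26% of it
    is predicted to be extractable as electricity (W_e m^-2)."""
    ke_flux_down = rho * ustar**2 * u_hub   # kinetic energy flux, W m^-2
    return 0.26 * ke_flux_down
```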
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation: the former is the most commonly used indicator of the overall water budget, and the latter is its second-largest component (the largest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model predictions, which can arise from errors in model parameters and input meteorological data, or from errors in the model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models to large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.
Los Alamos National Laboratory Economic Analysis Capability Overview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella
Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast running model that estimates first-order economic impacts of large scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL’s Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.
NASA Astrophysics Data System (ADS)
Peidou, Athina C.; Fotopoulos, Georgia; Pagiatakis, Spiros
2017-10-01
The main focus of this paper is to assess the feasibility of utilizing dedicated satellite gravity missions to detect large-scale solid mass transfer events (e.g. landslides). Specifically, a sensitivity analysis of Gravity Recovery and Climate Experiment (GRACE) gravity field solutions, in conjunction with simulated case studies, is employed to predict gravity changes due to past subaerial and submarine mass transfer events, namely the Agulhas slump in southeastern Africa and the Heart Mountain Landslide in northwestern Wyoming. The detectability of these events is evaluated by taking into account the expected noise level in the GRACE gravity field solutions and simulating their impact on the gravity field through forward modelling of the mass transfer. The spectral content of the estimated gravity changes induced by a simulated large-scale landslide event is estimated for the known spatial resolution of the GRACE observations using wavelet multiresolution analysis. The results indicate that both the Agulhas slump and the Heart Mountain Landslide could have been detected by GRACE, producing changes of |0.4| and |0.18| mGal, respectively, in the GRACE solutions. The suggested methodology is further extended to the case studies of the submarine landslide in Tohoku, Japan, and the Grand Banks landslide in Newfoundland, Canada, whose detectability using GRACE solutions is assessed through their impact on the gravity field.
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Hartmann, Andreas; Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Karst aquifers are an important source of drinking water in many regions of the world, but their resources are likely to be affected by changes in climate and land cover. Karst areas are highly permeable and produce large amounts of groundwater recharge, while surface runoff is typically negligible. As a result, recharge in karst systems may be particularly sensitive to environmental changes compared to other, less permeable systems. However, current large-scale hydrological models poorly represent karst specificities. They tend to provide an erroneous water balance and to underestimate groundwater recharge over karst areas. A better understanding of karst hydrology and large-scale estimates of karst groundwater resources are therefore needed to guide water management in a changing world. The first objective of the present study is to introduce explicit vegetation processes into a previously developed karst recharge model (VarKarst) to better estimate evapotranspiration losses depending on land cover characteristics. The novelty of the approach for large-scale modelling lies in the assessment of model output uncertainty and parameter sensitivity to avoid over-parameterisation. We find that the modified model is able to produce simulations consistent with observations of evapotranspiration and soil moisture at Fluxnet sites located in carbonate rock areas. Secondly, we aim to determine the model's sensitivity to climate and land cover characteristics, and to assess the relative influence of changes in climate and land cover on aquifer recharge. We perform virtual experiments using synthetic climate inputs and varying values of the land cover parameters. In this way, we can control for variations in climate input characteristics (e.g. precipitation intensity, precipitation frequency) and vegetation characteristics (e.g. canopy water storage capacity, rooting depth), and we can isolate the effect that each of these quantities has on recharge. Our results show that these factors interact strongly and generate non-linear responses in recharge.
NASA Astrophysics Data System (ADS)
Verdecchia, A.; Harrington, R. M.; Kirkpatrick, J. D.
2017-12-01
Many observations suggest that duration and size scale in a self-similar way for most earthquakes. Deviations from the expected scaling would suggest that some physical feature on the fault surface influences the speed of rupture differently at different length scales. Determining whether differences in scaling exist between small and large earthquakes is complicated by the fact that duration estimates of small earthquakes are often distorted by travel-path and site effects. However, when carefully estimated, scaling relationships between earthquakes may provide important clues about fault geometry and the spatial scales over which it affects fault rupture speed. The Mw 6.9, 20 August 1999, Quepos earthquake occurred on the plate boundary thrust fault along the southern Costa Rica margin, where the subducting seafloor is cut by numerous normal faults. The mainshock and aftershock sequence were recorded by land-based and, in part, ocean-bottom (OBS) seismic arrays deployed as part of the CRSEIZE experiment. Here we investigate the size-duration scaling of the mainshock and relocated aftershocks on the plate boundary to determine if a change in scaling exists that is consistent with a change in fault surface geometry at a specific length scale. We use waveforms from 5 short-period land stations and 12 broadband OBS stations to estimate corner frequencies (the inverse of duration) and seismic moments for several aftershocks on the plate interface. We first use spectral amplitudes of single events to estimate corner frequencies and seismic moments. We then adopt a spectral ratio method to correct for non-source-related effects and refine the corner frequency estimation. For the spectral ratio approach, we use pairs of earthquakes with similar waveforms (correlation coefficient > 0.7), with waveform similarity implying event co-location. Preliminary results from single spectra show similar corner frequency values among events of 0.5 ≤ M ≤ 3.6, suggesting a decrease in static stress drop with magnitude. Our next step is to refine corner frequency estimates using spectral ratios to see if the trend in corner frequency persists for small events, and to extend the magnitude range of the estimates using land-based recordings of the mainshock and the two largest aftershocks, which occurred prior to the Osa array deployment.
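For illustration, corner frequencies of the kind described are commonly obtained by fitting a Brune ω⁻² source model to single-event displacement spectra; a minimal sketch under that assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Brune omega-square source model for a displacement amplitude spectrum."""
    return omega0 / (1.0 + (f / fc)**2)

def fit_corner_frequency(freq, disp_spec):
    """Fit the low-frequency plateau Omega0 (proportional to seismic moment)
    and the corner frequency fc to a single-event spectrum. Path and site
    effects are ignored here; removing them is what the spectral-ratio step
    in the study is for."""
    p0 = [float(disp_spec[0]), float(freq[len(freq) // 2])]   # crude initial guess
    (omega0, fc), _ = curve_fit(brune, freq, disp_spec, p0=p0)
    return omega0, fc
```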
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Wang, Minghuai; Ghan, Steven J.
Aerosol-cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on the liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamical regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < -25 hPa/d) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is as large as that in stratocumulus regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm/d) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes than globally, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
NASA Astrophysics Data System (ADS)
Mahmud, M. R.
2014-02-01
This paper presents a simplified and operational approach to mapping the water yield of a tropical watershed using space-based multi-sensor remote sensing data. Two critical hydrological variables, namely rainfall and evapotranspiration, are estimated from satellite measurements and used to drive the well-known Thornthwaite and Mather water balance model. The satellite rainfall and ET estimates were able to represent the actual values on the ground with acceptable accuracy under the conditions considered. The satellite-derived water yield showed good agreement with actual streamflow. High-bias measurements may result from (i) the influence of satellite rainfall estimates during heavy storms, and (ii) large uncertainties and standard deviations in the MODIS temperature data product. The output of this study helps improve regional-scale hydrological assessment in Peninsular Malaysia.
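A minimal sketch of the monthly Thornthwaite and Mather accounting referred to above (textbook form; the study's configuration supplies the rainfall and ET inputs from satellite data):

```python
import math

def thornthwaite_mather(precip, pet, awc, s0=0.0):
    """Monthly Thornthwaite & Mather soil-moisture accounting (textbook form,
    all quantities in mm): wet months (P >= PET) fill the soil store up to its
    capacity `awc` and spill the excess as water yield; dry months draw the
    store down exponentially with the accumulated potential water loss."""
    s, surplus = s0, []
    for p, e in zip(precip, pet):
        if p >= e:
            s += p - e
            surplus.append(max(0.0, s - awc))
            s = min(s, awc)
        else:
            s *= math.exp((p - e) / awc)   # p - e < 0: exponential drawdown
            surplus.append(0.0)
    return surplus   # monthly water yield
```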
Hart-Smith, Gene; Yagoub, Daniel; Tay, Aidan P.; Pickford, Russell; Wilkins, Marc R.
2016-01-01
All large scale LC-MS/MS post-translational methylation site discovery experiments require methylpeptide spectrum matches (methyl-PSMs) to be identified at acceptably low false discovery rates (FDRs). To meet estimated methyl-PSM FDRs, methyl-PSM filtering criteria are often determined using the target-decoy approach. The efficacy of this methyl-PSM filtering approach has, however, yet to be thoroughly evaluated. Here, we conduct a systematic analysis of methyl-PSM FDRs across a range of sample preparation workflows (each differing in their exposure to the alcohols methanol and isopropyl alcohol) and mass spectrometric instrument platforms (each employing a different mode of MS/MS dissociation). Through 13CD3-methionine labeling (heavy-methyl SILAC) of Saccharomyces cerevisiae cells and in-depth manual data inspection, accurate lists of true positive methyl-PSMs were determined, allowing methyl-PSM FDRs to be compared with target-decoy approach-derived methyl-PSM FDR estimates. These results show that global FDR estimates produce extremely unreliable methyl-PSM filtering criteria; we demonstrate that this is an unavoidable consequence of the high number of amino acid combinations capable of producing peptide sequences that are isobaric to methylated peptides of a different sequence. Separate methyl-PSM FDR estimates were also found to be unreliable due to prevalent sources of false positive methyl-PSMs that produce high peptide identity score distributions. Incorrect methylation site localizations, peptides containing cysteinyl-S-β-propionamide, and methylated glutamic or aspartic acid residues can partially, but not wholly, account for these false positive methyl-PSMs. Together, these results indicate that the target-decoy approach is an unreliable means of estimating methyl-PSM FDRs and methyl-PSM filtering criteria. We suggest that orthogonal methylpeptide validation (e.g. heavy-methyl SILAC or its offshoots) should be considered a prerequisite for obtaining high confidence methyl-PSMs in large scale LC-MS/MS methylation site discovery experiments and make recommendations on how to reduce methyl-PSM FDRs in samples not amenable to heavy isotope labeling. Data are available via ProteomeXchange with the data identifier PXD002857. PMID:26699799
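For reference, the global target-decoy FDR estimate whose breakdown for methyl-PSMs the study demonstrates has this simple form (a sketch, not the authors' pipeline):

```python
def psm_fdr(target_scores, decoy_scores, threshold):
    """Global target-decoy FDR at a score threshold:
    FDR ~ (# decoy PSMs >= threshold) / (# target PSMs >= threshold).
    The study's point is that applying this whole-dataset estimate to the
    methyl-PSM subset is unreliable, because isobaric non-methylated peptides
    inflate the methyl-PSM false positives without matching decoy behaviour."""
    n_decoy = sum(s >= threshold for s in decoy_scores)
    n_target = sum(s >= threshold for s in target_scores)
    return n_decoy / max(n_target, 1)
```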
Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods
NASA Astrophysics Data System (ADS)
Thurin, J.; Brossier, R.; Métivier, L.
2017-12-01
Uncertainty estimation is a key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method; instead, they often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets and crustal exploration, as well as at regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) of Bishop et al. (2001) to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with ETKF strategies. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with sampling of the local solution space, made possible by its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2017.
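A minimal numpy sketch of the low-rank ensemble covariance at the heart of such ETKF-based uncertainty estimates (the full ETKF update and its coupling to the FWI iterations are omitted):

```python
import numpy as np

def posterior_cov_factor(ensemble):
    """Low-rank covariance factor from an (n_params x m_members) ensemble:
    P ~ F F^T with F the scaled anomalies about the ensemble mean. Only F is
    stored, so the n x n posterior covariance is never formed explicitly."""
    m = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    return anomalies / np.sqrt(m - 1.0)   # rank <= m - 1
```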
NASA Astrophysics Data System (ADS)
Graven, H. D.; Gruber, N.
2011-12-01
The 14C-free fossil carbon added to atmospheric CO2 by combustion dilutes the atmospheric 14C/C ratio (Δ14C), potentially providing a means to verify fossil CO2 emissions calculated using economic inventories. However, sources of 14C from nuclear power generation and spent fuel reprocessing can counteract this dilution and may bias 14C/C-based estimates of fossil fuel-derived CO2 if these nuclear influences are not correctly accounted for. Previous studies have examined nuclear influences on local scales, but the potential for continental-scale influences on Δ14C has not yet been explored. We estimate annual 14C emissions from each nuclear site in the world and conduct an Eulerian transport modeling study to investigate the continental-scale, steady-state gradients of Δ14C caused by nuclear activities and fossil fuel combustion. Over large regions of Europe, North America and East Asia, nuclear enrichment may offset at least 20% of the fossil fuel dilution in Δ14C, corresponding to potential biases of more than -0.25 ppm in the CO2 attributed to fossil fuel emissions, larger than the bias from plant and soil respiration in some areas. Model grid cells including high 14C-release reactors or fuel reprocessing sites showed much larger nuclear enrichment, despite the coarse model resolution of 1.8°×1.8°. The recent growth of nuclear 14C emissions increased the potential nuclear bias over 1985-2005, suggesting that changing nuclear activities may complicate the use of Δ14C observations to identify trends in fossil fuel emissions. The magnitude of the potential nuclear bias is largely independent of the choice of reference station in the context of continental-scale Eulerian transport and inversion studies, but could potentially be reduced by an appropriate choice of reference station in the context of local-scale assessments.
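For context, a sketch of the standard ¹⁴C mass balance used to attribute CO2 to fossil fuel emissions, with an explicit nuclear-enrichment correction term (the additive form of the correction is an assumption for illustration):

```python
def fossil_co2(co2_obs, d14c_obs, d14c_bg, d14c_nuc=0.0):
    """14C mass balance for the fossil-fuel CO2 component (ppm), using the
    standard form CO2_ff = CO2_obs * (D_bg - D_obs + D_nuc) / (D_bg + 1000)
    with Delta14C values in permil. Ignoring the nuclear term (D_nuc = 0)
    when reactors have enriched D_obs biases CO2_ff low, which is the effect
    the abstract quantifies."""
    return co2_obs * (d14c_bg - d14c_obs + d14c_nuc) / (d14c_bg + 1000.0)
```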
Large-scale model-based assessment of deer-vehicle collision risk.
Hothorn, Torsten; Brandl, Roland; Müller, Jörg
2012-01-01
Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.
Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk
Hothorn, Torsten; Brandl, Roland; Müller, Jörg
2012-01-01
Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; ...
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large-scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators, and find that most estimators, such as the jackknife or sub-sample covariance, can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ω_m-σ_8 plane as measured with internally estimated covariance matrices is on average ≳ 85% of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ_8 ∼ σ_8 Ω_m^0.5 derived from internally estimated covariances is ∼ 90% of the true uncertainty.
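For reference, the delete-one jackknife covariance discussed above has this standard form (a sketch; the sub-sample covariance differs essentially in the prefactor):

```python
import numpy as np

def jackknife_covariance(estimates):
    """Delete-one jackknife covariance of a data vector. `estimates` has shape
    (n_jk, n_bins): the statistic recomputed with each of n_jk sub-regions of
    the survey removed in turn. The (n-1)/n prefactor accounts for the strong
    correlation between the leave-one-out estimates."""
    n = estimates.shape[0]
    d = estimates - estimates.mean(axis=0)
    return (n - 1.0) / n * (d.T @ d)
```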
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Characterization of spray-induced turbulence using fluorescence PIV
NASA Astrophysics Data System (ADS)
van der Voort, Dennis D.; Dam, Nico J.; Clercx, Herman J. H.; Water, Willem van de
2018-07-01
The strong shear induced by the injection of liquid sprays at high velocities induces turbulence in the surrounding medium. This, in turn, influences the motion of droplets as well as the mixing of air and vapor. Using fluorescence-based tracer particle image velocimetry, the velocity field surrounding 125-135 m/s sprays exiting a 200-μm nozzle is analyzed. For the first time, the small- and large-scale turbulence characteristics of the gas phase surrounding a spray have been measured simultaneously, using a large eddy model to determine the sub-grid scales. This further allows the calculation of the Stokes numbers of droplets, which indicate the influence of turbulence on their motion. The measurements lead to an estimate of the dissipation rate ε ≈ 35 m² s⁻³, a microscale Reynolds number Re_λ ≈ 170, and a Kolmogorov length scale of η ≈ 10⁻⁴ m. Using these dissipation rates to convert a droplet size distribution to a distribution of Stokes numbers, we show that only the large-scale motion of turbulence disperses the droplets in the present case, but the small scales will grow in importance with increasing levels of atomization and ambient pressure.
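The quoted scales are mutually consistent; a short sketch recovering them and the droplet Stokes numbers (the air properties are assumed values):

```python
nu = 1.5e-5    # kinematic viscosity of air, m^2/s (assumed ambient conditions)
eps = 35.0     # measured dissipation rate, m^2/s^3

eta = (nu**3 / eps) ** 0.25     # Kolmogorov length: ~1e-4 m, as reported
tau_eta = (nu / eps) ** 0.5     # Kolmogorov time scale: ~6.5e-4 s

def stokes_number(d, rho_p=1000.0, rho_f=1.2):
    """Stokes number of a droplet of diameter d (m) against the Kolmogorov
    time, with the Stokes-drag response time tau_p = rho_p d^2 / (18 mu)."""
    mu = nu * rho_f                        # dynamic viscosity of the gas
    tau_p = rho_p * d**2 / (18.0 * mu)
    return tau_p / tau_eta

# e.g. a 10-micron droplet: stokes_number(10e-6) ~ 0.5, marginally responsive
```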
Driving terrestrial ecosystem models from space
NASA Technical Reports Server (NTRS)
Waring, R. H.
1993-01-01
Regional air pollution, land-use conversion, and projected climate change all affect ecosystem processes at large scales. Changes in vegetation cover and growth dynamics can impact the functioning of ecosystems, carbon fluxes, and climate. As a result, there is a need to assess and monitor vegetation structure and function comprehensively at regional to global scales. To provide a test of our present understanding of how ecosystems operate at large scales we can compare model predictions of CO2, O2, and methane exchange with the atmosphere against regional measurements of interannual variation in the atmospheric concentration of these gases. Recent advances in remote sensing of the Earth's surface are beginning to provide methods for estimating important ecosystem variables at large scales. Ecologists attempting to generalize across landscapes have made extensive use of models and remote sensing technology. The success of such ventures is dependent on merging insights and expertise from two distinct fields. Ecologists must provide the understanding of how well models emulate important biological variables and their interactions; experts in remote sensing must provide the biophysical interpretation of complex optical reflectance and radar backscatter data.
Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia
NASA Astrophysics Data System (ADS)
Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.
2017-12-01
Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher-resolution land surface process models. Calibration and validation of these products is restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic-ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic-ray neutron method, typically referred to as a `rover'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom-designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad-scale survey was conducted at 36 x 36 km, covering the area of a standard SMAP pixel, and an intensive-scale survey was conducted over a 10 x 10 km portion of the broad-scale survey, a scale equivalent to that used for national water balance modelling. We describe the design of the rover and the methods used for converting neutron counts into soil moisture, and discuss factors controlling soil moisture variability. We found that the intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad-scale surveys at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.
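A minimal sketch of the neutron-to-soil-moisture conversion step, assuming the widely used Desilets et al. (2010) calibration function (the project's own calibration may differ):

```python
def neutron_to_theta(N, N0):
    """Volumetric soil moisture from moderated neutron counts, using the
    widely adopted Desilets et al. (2010) shape: theta = a0/(N/N0 - a1) - a2,
    with a0 = 0.0808, a1 = 0.372, a2 = 0.115, and N0 the counting rate over
    dry soil calibrated for the site and detector."""
    return 0.0808 / (N / N0 - 0.372) - 0.115
```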
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_B) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_B is essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
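As a rough illustration of decimated transform-domain thresholding of this kind (not the paper's normalized Bi-Haar filter bank or its Poisson p-value tests), here is a sketch using PyWavelets with the biorthogonal 'bior1.3' wavelet, where a hard threshold at z robust standard deviations stands in for the hypothesis-test cutoff.

```python
import numpy as np
import pywt

def bihaar_ht_denoise(counts, wavelet="bior1.3", level=4, z=3.0):
    """Hard-threshold the detail coefficients of a decimated biorthogonal
    transform; z*sigma stands in for the paper's hypothesis-test cutoff."""
    coeffs = pywt.wavedec(np.asarray(counts, dtype=float), wavelet, level=level)
    denoised = [coeffs[0]]                        # keep the approximation band
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745     # robust noise scale estimate
        denoised.append(pywt.threshold(d, z * sigma, mode="hard"))
    return pywt.waverec(denoised, wavelet)
```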
DETERMINING THE LARGE-SCALE ENVIRONMENTAL DEPENDENCE OF GAS-PHASE METALLICITY IN DWARF GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglass, Kelly A.; Vogeley, Michael S., E-mail: kelly.a.douglass@drexel.edu
2017-01-10
We study how the cosmic environment affects galaxy evolution in the universe by comparing the metallicities of dwarf galaxies in voids with dwarf galaxies in more dense regions. Ratios of the fluxes of emission lines, particularly those of the forbidden [O III] and [S II] transitions, provide estimates of a region’s electron temperature and number density. From these two quantities and the emission line fluxes [O II] λ3727, [O III] λ4363, and [O III] λλ4959, 5007, we estimate the abundance of oxygen with the direct T_e method. We estimate the metallicity of 42 blue, star-forming void dwarf galaxies and 89 blue, star-forming dwarf galaxies in more dense regions using spectroscopic observations from the Sloan Digital Sky Survey Data Release 7, as reprocessed in the MPA-JHU value-added catalog. We find very little difference between the two sets of galaxies, indicating little influence from the large-scale environment on their chemical evolution. Of particular interest are a number of extremely metal-poor dwarf galaxies that are less prevalent in voids than in the denser regions.
NASA Astrophysics Data System (ADS)
Popov, V. D.; Khamidullina, N. M.
2006-10-01
In developing the radio-electronic devices (RED) of spacecraft that operate in the ionizing-radiation fields of space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation is the integrated microcircuit (IMC), especially at large-scale (LSI) and very-large-scale (VLSI) degrees of integration. The main characteristic of an IMC taken into account when deciding whether to use a particular type of IMC in onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from reliability characteristics, disregarding radiation effects. This paper presents a so-called “reliability” approach to determining the radiation tolerance of IMCs, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and of the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme that treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, so that it becomes a completely self-adaptive procedure. Finally, realistic simulations of Kolmogorov-like flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
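To make the multilevel idea concrete, here is a toy sketch (not the authors' scheme): a 1D periodic diffusion problem advanced in Fourier space, with the slowly varying small-scale band updated only every m steps while the energy-containing large scales are updated every step. For this linear problem the infrequent updates are exact, which illustrates why small-scale variations over one iteration can be negligible.

```python
import numpy as np

N, nu, dt, m = 256, 1e-3, 1e-3, 10
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u_hat = np.fft.fft(np.exp(np.sin(x)))           # initial condition in Fourier space
k = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
large = np.abs(k) <= N // 8                     # energy-containing band
small = ~large

for step in range(1000):
    u_hat[large] *= np.exp(-nu * k[large] ** 2 * dt)          # every step
    if step % m == 0:                                         # small scales:
        u_hat[small] *= np.exp(-nu * k[small] ** 2 * dt * m)  # every m steps

u = np.fft.ifft(u_hat).real
```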
Gould, William R.; Patla, Debra A.; Daley, Rob; Corn, Paul Stephen; Hossack, Blake R.; Bennetts, Robert E.; Peterson, Charles R.
2012-01-01
Monitoring of natural resources is crucial to ecosystem conservation, and yet it can pose many challenges. Annual surveys for amphibian breeding occupancy were conducted in Yellowstone and Grand Teton National Parks over a 4-year period (2006–2009) at two scales: catchments (portions of watersheds) and individual wetland sites. Catchments were selected in a stratified random sample, with habitat quality and ease of access serving as strata. All known wetland sites with suitable habitat were surveyed within selected catchments. Changes in breeding occurrence of tiger salamanders, boreal chorus frogs, and Columbia spotted frogs were assessed using multi-season occupancy estimation. Numerous a priori models were considered within an information-theoretic framework, including those with catchment- and site-level covariates. Habitat quality was the most important predictor of occupancy. Boreal chorus frogs demonstrated the greatest increase in breeding occupancy at the catchment level. Larger changes for all three species were detected at the finer site-level scale. Connectivity of sites explained occupancy rates more than other covariates and may improve understanding of the dynamic processes occurring among wetlands within this ecosystem. Our results suggest monitoring occupancy at two spatial scales within large study areas is feasible and informative.
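For readers unfamiliar with occupancy estimation, the sketch below shows the single-season core of the likelihood (the study used multi-season models with covariates, which add colonization and extinction parameters): a site never detected may be occupied-but-missed or truly unoccupied, and the two cases are mixed in the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    """y: (sites, visits) matrix of 0/1 detection histories."""
    psi, p = 1.0 / (1.0 + np.exp(-np.clip(params, -20, 20)))
    J = y.shape[1]
    d = y.sum(axis=1)
    # detected at least once: occupied, with that binomial detection history
    ll_det = np.log(psi) + d * np.log(p) + (J - d) * np.log(1.0 - p)
    # never detected: occupied-but-missed or truly unoccupied
    ll_none = np.log(psi * (1.0 - p) ** J + (1.0 - psi))
    return -np.sum(np.where(d > 0, ll_det, ll_none))

y = (np.random.default_rng(1).random((60, 4)) < 0.3).astype(int)  # placeholder
fit = minimize(neg_log_lik, x0=np.zeros(2), args=(y,))
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
```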
Scale criticality in estimating ecosystem carbon dynamics
Zhao, Shuqing; Liu, Shuguang
2014-01-01
Scaling is central to ecology and Earth system sciences. However, the importance of scale (i.e. resolution and extent) for understanding carbon dynamics across scales is poorly understood and quantified. We simulated carbon dynamics under a wide range of combinations of resolution (nine spatial resolutions of 250 m, 500 m, 1 km, 2 km, 5 km, 10 km, 20 km, 50 km, and 100 km) and extent (57 geospatial extents ranging from 108 to 1,247,034 km²) in the southeastern United States to explore the existence of scale dependence of the simulated regional carbon balance. Results clearly show the existence of a critical threshold resolution for estimating carbon sequestration within a given extent and an error limit. Furthermore, an invariant power-law scaling relationship was found between the critical resolution and the spatial extent: the critical resolution is proportional to A^n (where n is a constant and A is the extent). Scale criticality and the power-law relationship might be driven by the power-law probability distributions of land surface and ecological quantities, including disturbances, at landscape to regional scales. The current overwhelming practice of ignoring scale criticality might have largely contributed to difficulties in balancing carbon budgets at regional and global scales.
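A relationship of the form r_c = c·A^n is linear in log-log space, so the exponent can be estimated by ordinary least squares on logged values; the extent/resolution pairs below are hypothetical placeholders, not the study's data.

```python
import numpy as np

A = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # extents [km^2] (placeholders)
r_c = np.array([0.3, 0.6, 1.1, 2.0, 3.8])   # critical resolutions [km]

n, log_c = np.polyfit(np.log(A), np.log(r_c), 1)   # slope is the exponent n
print(f"fitted exponent n = {n:.3f}, prefactor c = {np.exp(log_c):.3f}")
```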
NASA Astrophysics Data System (ADS)
Deo, R. K.; Domke, G. M.; Russell, M.; Woodall, C. W.
2017-12-01
Landsat data have been widely used to support strategic forest inventory and management decisions despite the limited success of passive optical remote sensing for accurate estimation of aboveground biomass (AGB). The archive of publicly available Landsat data, available at 30-m spatial resolution since 1984, has been a valuable resource for cost-effective large-area estimation of AGB to inform national requirements such as the US national greenhouse gas inventory (NGHGI). In addition, other optical satellite data, such as MODIS imagery of wider spatial coverage and higher temporal resolution, are enriching the domain of spatial predictors for regional-scale mapping of AGB. Because NGHGIs require national-scale AGB information and there are tradeoffs between the prediction accuracy and operational efficiency of Landsat, this study evaluated the impact of various resolutions of Landsat predictors on the accuracy of regional AGB models across three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We used recent national forest inventory (NFI) data with numerous Landsat-derived predictors at ten different spatial resolutions ranging from 30 to 1000 m to understand the optimal spatial resolution of the optical data for enhanced spatial inventory of AGB for NGHGI reporting. Ten generic spatial models at different spatial resolutions were developed for all sites, and large-area estimates were evaluated (i) at the county level against independent design-based estimates via the US NFI Evalidator tool and (ii) within a large number of strips (about 1 km wide) predicted via LiDAR metrics at high spatial resolution. The county-level estimates from the Evalidator and the Landsat models were statistically equivalent and produced coefficients of determination (R²) above 0.85 that varied with site and predictor resolution. The means and standard deviations of county-level estimates followed increasing and decreasing trends, respectively, as model resolution decreased. The Landsat-based total AGB estimates within the strips did not differ significantly from the totals obtained using LiDAR metrics and were within ±15 Mg/ha for each of the sites. We conclude that optical satellite data at resolutions up to 1000 m provide acceptable accuracy for the US NGHGI.
Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.
2013-01-01
Remote sensing datasets are increasingly being used to provide spatially explicit large-scale evapotranspiration (ET) estimates. Extensive evaluation of such large-scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point-scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation, and climate zone. Basin-scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point-scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for the Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both ET products was also found to be comparable over different climate zones. However, SSEBop data showed a higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop, and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin-scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty), illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research will guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
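The Wang-Landau step this scheme builds on iteratively refines a running estimate of the density of states g(E), accepting spin flips with probability min(1, g(E)/g(E')). Below is a compact sketch for the zero-field 2D Ising model with illustrative parameters; the flatness criterion and modification-factor schedule are simplified relative to production practice.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                    # small lattice for illustration
s = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

E_min = -2 * L * L
log_g = np.zeros(4 * L * L + 1)          # log density of states, index E - E_min
hist = np.zeros_like(log_g)
log_f, E = 1.0, total_energy(s)

while log_f > 1e-3:                      # modification-factor schedule
    for _ in range(20000):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb
        # accept the flip with probability min(1, g(E)/g(E'))
        if np.log(rng.random()) < log_g[E - E_min] - log_g[E + dE - E_min]:
            s[i, j] *= -1
            E += dE
        log_g[E - E_min] += log_f
        hist[E - E_min] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():   # simple flatness check
        hist[:] = 0
        log_f /= 2.0
```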
Motion estimation under location uncertainty for turbulent fluid flows
NASA Astrophysics Data System (ADS)
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow that naturally incorporates a notion of uncertainty into the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
USDA-ARS?s Scientific Manuscript database
Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...
Eavesdropping on the Arctic: Automated bioacoustics reveal dynamics in songbird breeding phenology
Ellis, Daniel P. W.; Pérez, Jonathan H.; Wingfield, John C.; Boelman, Natalie T.
2018-01-01
Bioacoustic networks could vastly expand the coverage of wildlife monitoring to complement satellite observations of climate and vegetation. This approach would enable global-scale understanding of how climate change influences phenomena such as migratory timing of avian species. The enormous data sets that autonomous recorders typically generate demand automated analyses that remain largely undeveloped. We devised automated signal processing and machine learning approaches to estimate dates on which songbird communities arrived at arctic breeding grounds. Acoustically estimated dates agreed well with those determined via traditional surveys and were strongly related to the landscape’s snow-free dates. We found that environmental conditions heavily influenced daily variation in songbird vocal activity, especially before egg laying. Our novel approaches demonstrate that variation in avian migratory arrival can be detected autonomously. Large-scale deployment of this innovation in wildlife monitoring would enable the coverage necessary to assess and forecast changes in bird migration in the face of climate change. PMID:29938220
Aeration costs in stirred-tank and bubble column bioreactors
Humbird, D.; Davis, R.; McMillan, J. D.
2017-08-10
To overcome knowledge gaps in the economics of large-scale aeration for production of commodity products, Aspen Plus is used to simulate steady-state oxygen delivery in both stirred-tank and bubble column bioreactors, using published engineering correlations for oxygen mass transfer as a function of aeration rate and power input, coupled with new equipment cost estimates developed in Aspen Capital Cost Estimator and validated against vendor quotations. Here, these simulations describe the cost efficiency of oxygen delivery as a function of oxygen uptake rate and vessel size, and show that capital and operating costs for oxygen delivery drop considerably moving from standard-size (200 m³) to world-class size (500 m³) reactors, but only marginally in further scaling up to hypothetically large (1000 m³) reactors. Finally, this analysis suggests bubble-column reactor systems can reduce overall costs for oxygen delivery by 10-20% relative to stirred tanks at low to moderate oxygen transfer rates up to 150 mmol/L-h.
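The mass-transfer side of such a simulation rests on correlations of the form kLa = a (P/V)^α v_s^β; the constants below are the often-quoted Van't Riet values for coalescing broths, standing in for the paper's (unspecified) published correlations, and the saturation concentration is an assumed air-saturation value.

```python
def kla_stirred(p_per_v, v_s, a=0.026, alpha=0.4, beta=0.5):
    """kLa [1/s] from specific power input P/V [W/m^3] and superficial
    gas velocity v_s [m/s]; constants are illustrative Van't Riet values."""
    return a * p_per_v**alpha * v_s**beta

def otr(kla, c_sat=0.25, c_bulk=0.05):
    """Oxygen transfer rate [mol/(m^3 s)] = kLa * (c* - c)."""
    return kla * (c_sat - c_bulk)

kla = kla_stirred(p_per_v=2000.0, v_s=0.05)     # ~2 kW/m^3, 5 cm/s
print(f"kLa = {kla:.3f} 1/s, OTR = {otr(kla) * 3600:.0f} mol/(m^3 h)")
```

With these inputs the sketch gives an OTR near 90 mol/(m³ h), i.e. roughly 90 mmol/L-h, consistent in magnitude with the transfer rates quoted above.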
Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.
Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin
2018-05-23
Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating the false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated by violating the octet rule of chemistry through the addition of small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and MZmine 2. The reliability of the FDR calculation was examined with false datasets, which were simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope-labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
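The target-decoy estimate itself is simple: at a given score cutoff, the number of decoy matches estimates the number of false target matches. A minimal sketch, assuming higher scores are better:

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimated FDR = (# decoy hits >= t) / (# target hits >= t)."""
    n_decoy = sum(s >= threshold for s in decoy_scores)
    n_target = sum(s >= threshold for s in target_scores)
    return n_decoy / max(n_target, 1)

def threshold_at_fdr(target_scores, decoy_scores, q=0.01):
    """Loosest score cutoff whose estimated FDR is at most q."""
    for t in sorted(set(target_scores)):
        if target_decoy_fdr(target_scores, decoy_scores, t) <= q:
            return t
    return None
```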
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Hay, C.; Mitrovica, J. X.; Little, C. M.; Ponte, R. M.; Tingley, M.
2017-12-01
Understanding observed spatial variations in centennial relative sea level trends on the United States east coast has important scientific and societal applications. Past studies based on models and proxies variously suggest roles for crustal displacement, ocean dynamics, and melting of the Greenland ice sheet. Here we perform joint Bayesian inference on regional relative sea level, vertical land motion, and absolute sea level fields based on tide gauge records and GPS data. Posterior solutions show that regional vertical land motion explains most (80% median estimate) of the spatial variance in the large-scale relative sea level trend field on the east coast over 1900-2016. The posterior estimate for coastal absolute sea level rise is remarkably spatially uniform compared to previous studies, with a spatial average of 1.4-2.3 mm/yr (95% credible interval). Results corroborate glacial isostatic adjustment models and reveal that meaningful long-period, large-scale vertical velocity signals can be extracted from short GPS records.
Activity flow over resting-state networks shapes cognitive task activations.
Cole, Michael W; Ito, Takuya; Bassett, Danielle S; Schultz, Douglas H
2016-12-01
Resting-state functional connectivity (FC) has helped reveal the intrinsic network organization of the human brain, yet its relevance to cognitive task activations has been unclear. Uncertainty remains despite evidence that resting-state FC patterns are highly similar to cognitive task activation patterns. Identifying the distributed processes that shape localized cognitive task activations may help reveal why resting-state FC is so strongly related to cognitive task activations. We found that estimating task-evoked activity flow (the spread of activation amplitudes) over resting-state FC networks allowed prediction of cognitive task activations in a large-scale neural network model. Applying this insight to empirical functional MRI data, we found that cognitive task activations can be predicted in held-out brain regions (and held-out individuals) via estimated activity flow over resting-state FC networks. This suggests that task-evoked activity flow over intrinsic networks is a large-scale mechanism explaining the relevance of resting-state FC to cognitive task activations.
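The activity-flow computation described here predicts a held-out region's activation as the FC-weighted sum of all other regions' activations. A minimal sketch of that mapping:

```python
import numpy as np

def activity_flow_predict(act, fc):
    """Predict each region's task activation from all other regions:
    pred[j] = sum over i != j of act[i] * fc[i, j].
    act: (n_regions,) activation amplitudes; fc: (n_regions, n_regions)
    resting-state FC matrix."""
    n = act.shape[0]
    pred = np.empty(n)
    for j in range(n):
        keep = np.arange(n) != j          # hold out the target region
        pred[j] = act[keep] @ fc[keep, j]
    return pred
```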
NASA Astrophysics Data System (ADS)
Wang, J.; Xue, Y.; Forman, B. A.; Girotto, M.; Reichle, R. H.
2017-12-01
The Gravity Recovery and Climate Experiment (GRACE) has revolutionized large-scale remote sensing of the Earth's terrestrial hydrologic cycle and has provided an unprecedented observational constraint for global land surface models. However, the coarse-scale (in space and time), vertically-integrated measure of terrestrial water storage (TWS) limits GRACE's applicability to smaller-scale hydrologic applications. In order to enhance model-based estimates of TWS while effectively adding resolution (in space and time) to the coarse-scale TWS retrievals, a multi-variate, multi-sensor data assimilation framework is presented here that simultaneously assimilates gravimetric retrievals of TWS in conjunction with passive microwave (PMW) brightness temperature (Tb) observations over snow-covered terrain. The framework uses the NASA Catchment Land Surface Model (Catchment) and an ensemble Kalman filter (EnKF). A synthetic assimilation experiment is presented for the Volga river basin in Russia. The skill of the output from the assimilation of synthetic observations is compared with that of model estimates generated without the benefit of assimilation. It is shown that the EnKF framework improves modeled estimates of TWS, snow depth, and snow mass (a.k.a. snow water equivalent). The data assimilation routine produces a conditioned (updated) estimate that is more accurate and contains less uncertainty during both the snow accumulation phase of the snow season and the snow ablation season.
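The EnKF analysis step at the heart of such a framework can be written in a few lines. Below is a generic stochastic (perturbed-observation) EnKF update with placeholder dimensions and a simple observation operator; it sketches the standard algorithm, not the Catchment/GRACE configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, Ne = 100, 5, 40                      # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[np.arange(m), np.arange(m) * 20] = 1.0   # observe every 20th state variable
R = 0.1 * np.eye(m)                        # observation error covariance

Xf = rng.normal(size=(n, Ne))              # forecast ensemble (placeholder)
y = rng.normal(size=m)                     # observation vector (placeholder)

A = Xf - Xf.mean(axis=1, keepdims=True)    # ensemble anomalies
HA = H @ A
K = (A @ HA.T / (Ne - 1)) @ np.linalg.inv(HA @ HA.T / (Ne - 1) + R)  # gain
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=Ne).T  # perturbed obs
Xa = Xf + K @ (Y - H @ Xf)                 # analysis (updated) ensemble
```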
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (at three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface stations and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contributions of random turbulence and organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and at identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. The accuracy of the best model including spatial effects was 90 Mg.ha-1 at the plot scale, but coarse-graining up to 2-km resolution brought the mapping error below 50 Mg.ha-1. Whatever the resolution, no agreement was found with available pan-tropical reference maps. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate “wall-to-wall” remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution. PMID:26402522
The brief multidimensional students' life satisfaction scale-college version.
Zullig, Keith J; Huebner, E Scott; Patton, Jon M; Murray, Karen A
2009-01-01
This study investigated the psychometric properties of the BMSLSS-College among 723 college students. Internal consistency estimates explored scale reliability, factor analysis explored construct validity, and known-groups validity was assessed using the National College Youth Risk Behavior Survey and the Harvard School of Public Health College Alcohol Study. Criterion-related validity was explored through analyses with the CDC's health-related quality of life scale and a social isolation scale. Acceptable internal consistency reliability and construct, known-groups, and criterion-related validity were established. Findings offer preliminary support for the BMSLSS-C; it could be useful in large-scale research studies, applied screening contexts, and for program evaluation purposes toward achieving Healthy People 2010 objectives.
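Internal consistency of such a scale is typically summarized with Cronbach's alpha (the abstract does not name the specific coefficient, so alpha is an assumption here):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item responses.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)
```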
An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.
2012-09-01
Size and scale cognition is a critical ability associated with reasoning about concepts in different disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition, called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. Findings identified participants' difficulties in discerning different sizes of microscale and nanoscale objects and a low level of sophistication in identifying scale worlds. Results also indicated that the larger the difference between the sizes of two objects, the more difficult it was for participants to identify how many times bigger or smaller one object was than the other. Similarly, participants showed difficulties in estimating the approximate sizes of sub-macroscopic objects as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale was also challenging for participants.
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.
2017-01-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.
Impact of post-Born lensing on the CMB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratten, Geraint; Lewis, Antony, E-mail: G.Pratten@Sussex.ac.uk, E-mail: antony@cosmologist.info
Lensing of the CMB is affected by post-Born lensing, producing corrections to the convergence power spectrum and introducing field rotation. We show numerically that the lensing convergence power spectrum is affected at the ≲ 0.2% level on accessible scales, and that this correction and the field rotation are negligible for observations with arcminute beam and noise levels ≳ 1 μK arcmin. The field rotation generates ∼ 2.5% of the total lensing B-mode polarization amplitude (0.2% in power on small scales), but has a blue spectrum on large scales, making it highly subdominant to the convergence B modes on scales where they are a source of confusion for the signal from primordial gravitational waves. Since the post-Born signal is non-linear, it also generates a bispectrum with the convergence. We show that the post-Born contributions to the bispectrum substantially change the shape predicted from large-scale structure non-linearities alone, and hence must be included to estimate the expected total signal and impact of bispectrum biases on CMB lensing reconstruction quadratic estimators and other observables. The field-rotation power spectrum only becomes potentially detectable for noise levels ≲ 1 μK arcmin, but its bispectrum with the convergence may be observable at ∼ 3σ with Stage IV observations. Rotation-induced and convergence-induced B modes are slightly correlated by the bispectrum, and the bispectrum also produces additional contributions to the lensed BB power spectrum.
Sinsabaugh, Robert L; Moorhead, Daryl L; Xu, Xiaofeng; Litvak, Marcy E
2017-06-01
The carbon use efficiency of plants (CUE_a) and microorganisms (CUE_h) determines rates of biomass turnover and soil carbon sequestration. We evaluated the hypothesis that CUE_a and CUE_h counterbalance at a large scale, stabilizing microbial growth (μ) as a fraction of gross primary production (GPP). Collating data from published studies, we correlated annual CUE_a, estimated from satellite imagery, with locally determined soil CUE_h for 100 globally distributed sites. Ecosystem CUE_e, the ratio of net ecosystem production (NEP) to GPP, was estimated for each site using published models. At the ecosystem scale, CUE_a and CUE_h were inversely related. At the global scale, the apparent temperature sensitivity of CUE_h with respect to mean annual temperature (MAT) was similar for organic and mineral soils (0.029 °C⁻¹). CUE_a and CUE_e were inversely related to MAT, with apparent sensitivities of −0.009 and −0.032 °C⁻¹, respectively. These trends constrain the ratio μ:GPP (= (CUE_a × CUE_h)/(1 − CUE_e)) with respect to MAT by counterbalancing the apparent temperature sensitivities of the component processes. At the ecosystem scale, the counterbalance is effected by modulating soil organic matter stocks. The results suggest that a μ:GPP value of c. 0.13 is a homeostatic steady state for ecosystem carbon fluxes at a large scale.
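The constrained ratio is direct arithmetic on the three efficiencies; the example values below are illustrative choices that land near the reported steady-state value of c. 0.13, not data from the study.

```python
def mu_to_gpp_ratio(cue_a, cue_h, cue_e):
    """mu : GPP = (CUE_a * CUE_h) / (1 - CUE_e), as defined in the text."""
    return cue_a * cue_h / (1.0 - cue_e)

print(mu_to_gpp_ratio(cue_a=0.45, cue_h=0.30, cue_e=0.0))   # -> 0.135
```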
New perspectives on self-similarity for shallow thrust earthquakes
NASA Astrophysics Data System (ADS)
Denolle, Marine A.; Shearer, Peter M.
2016-09-01
Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude M_w 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f_1 relates closely to the source duration T_1; its scaling follows M_0 ∝ T_1^3 for M_w < 7.5 and changes to M_0 ∝ T_1^2 for larger earthquakes. An elliptical rupture geometry explains the observed scaling better than circular crack models. The second time scale T_2 varies more weakly with moment, M_0 ∝ T_2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar, because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
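One plausible realization of such a double-corner spectrum, with flat, f^-1, and f^-2 regimes separated by the two corners, is sketched below; the exact functional form used in the paper is not given in the abstract, so this particular parameterization is an assumption.

```python
import numpy as np

def double_corner_spectrum(f, M0=1.0, f1=0.05, f2=0.5):
    """Amplitude spectrum that is flat below f1, falls as f^-1 between
    f1 and f2, and as f^-2 above f2 (illustrative form, hypothetical
    corner frequencies in Hz)."""
    return M0 / np.sqrt((1 + (f / f1) ** 2) * (1 + (f / f2) ** 2))

freqs = np.logspace(-3, 1, 200)
spec = double_corner_spectrum(freqs)
```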
Andrich, David; Marais, Ida; Humphry, Stephen Mark
2015-01-01
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing on multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The consequence is that the proficiencies of the more proficient students are increased relative to those of the less proficient. Not controlling the guessing bias underestimates the progress of students across 7 years of schooling, with important educational implications. PMID:29795871
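The guessing mechanism at issue can be made concrete with a response function that adds a guessing floor c to the Rasch term; the pure Rasch model is the c = 0 case, and a nonzero floor is what biases difficulty estimates when ignored. The value c = 0.25 below is simply the chance rate for a four-option item, used for illustration.

```python
import numpy as np

def p_correct(theta, b, c=0.0):
    """Probability of a correct response: Rasch term with guessing floor c.
    theta: ability; b: item difficulty; c = 0 recovers the Rasch model."""
    return c + (1.0 - c) / (1.0 + np.exp(-(theta - b)))

print(p_correct(theta=-2.0, b=0.0))          # low ability, no guessing: ~0.12
print(p_correct(theta=-2.0, b=0.0, c=0.25))  # with a guessing floor: ~0.34
```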
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ≤ M ≤ 5.5) and a number of large (M > 5.5) earthquakes. At distances greater than 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r^-γ) can fit. Instead, we use an attenuation function that combines a fixed power law (r^-0.7) with a fitted exponential dependence on distance, which is estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and 0.58M and 0.31M for large earthquakes.
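The fitted attenuation shape for moderate events can be written directly from the quantities quoted above; the absolute offsets are not given in the abstract, so the functions below return relative log-amplitudes only.

```python
import numpy as np

LOG10_E = np.log10(np.e)   # converts exp(c*r) to log10 units

def log10_pgv_moderate(r_km, M):
    """Relative log10(PGV): 1.06*M scaling, fixed r^-0.7 geometric term,
    and the fitted exp(-0.0063 r) anelastic term."""
    return 1.06 * M - 0.7 * np.log10(r_km) - 0.0063 * r_km * LOG10_E

def log10_pga_moderate(r_km, M):
    """Same shape for PGA, with 1.00*M scaling and exp(-0.0073 r)."""
    return 1.00 * M - 0.7 * np.log10(r_km) - 0.0073 * r_km * LOG10_E
```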
Moses, C.S.; Andrefouet, S.; Kranenburg, C.; Muller-Karger, F. E.
2009-01-01
Using imagery at 30 m spatial resolution from the most recent Landsat satellite, the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), we scale up reef metabolic productivity and calcification from local habitat-scale (10^-1 to 10^0 km²) measurements to regional scales (10^3 to 10^4 km²). Distribution and spatial extent of the North Florida Reef Tract (NFRT) habitats come from supervised classification of the Landsat imagery within independent Landsat-derived Millennium Coral Reef Map geomorphologic classes. This system minimizes the depth range and variability of benthic habitat characteristics found in the area of supervised classification and limits misclassification. Classification of Landsat imagery into 5 biotopes (sand, dense live cover, sparse live cover, seagrass, and sparse seagrass) by geomorphologic class is >73% accurate at regional scales. Based on recently published habitat-scale in situ metabolic measurements, gross production (P = 3.01 × 10^9 kg C yr^-1), excess production (E = -5.70 × 10^8 kg C yr^-1), and calcification (G = -1.68 × 10^6 kg CaCO3 yr^-1) are estimated over 2711 km² of the NFRT. Simple models suggest sensitivity of these values to ocean acidification, which will increase local dissolution of carbonate sediments. Similar approaches could be applied over large areas with poorly constrained bathymetry or water column properties and minimal metabolic sampling. This tool has potential applications for modeling and monitoring large-scale environmental impacts on reef productivity, such as the influence of ocean acidification on coral reef environments.
Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying
2016-01-01
Network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and demonstrates better performance than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to existing methods in network estimation. We applied the proposed partial correlation method to investigate resting-state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connections to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemispheres. When comparing partial correlations derived under different sparsity tuning parameters, an important finding is that the sparse regularization has a stronger shrinkage effect on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395
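Given any estimate of the precision matrix Ω, partial correlations follow as −Ω_ij/√(Ω_ii Ω_jj). CLIME itself is not available in standard Python libraries, so the sketch below substitutes scikit-learn's GraphicalLasso for the precision-estimation step; the time-series matrix is a random placeholder.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

X = np.random.default_rng(3).normal(size=(200, 20))  # (timepoints, nodes)

omega = GraphicalLasso(alpha=0.1).fit(X).precision_  # sparse precision matrix
d = np.sqrt(np.diag(omega))
parcorr = -omega / np.outer(d, d)                    # partial correlations
np.fill_diagonal(parcorr, 1.0)
```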
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zittel, P.F.
1994-09-10
The solid-fuel rocket motors of large space launch vehicles release gases and particles that may significantly affect stratospheric ozone densities along the vehicle's path. In this study, standard rocket nozzle and flowfield computer codes have been used to characterize the exhaust gases and particles through the afterburning region of the solid-fuel motors of the Titan IV launch vehicle. The models predict that a large fraction of the HCl gas exhausted by the motors is converted to Cl and Cl2 in the plume afterburning region. Estimates of the subsequent chemistry suggest that on expansion into the ambient daytime stratosphere, the highly reactive chlorine may significantly deplete ozone in a cylinder around the vehicle track that ranges from 1 to 5 km in diameter over the altitude range of 15 to 40 km. The initial ozone depletion is estimated to occur on a time scale of less than 1 hour. After the initial effects, the dominant chemistry of the problem changes, and new models are needed to follow the further expansion, or closure, of the ozone hole on a longer time scale.
GAIA: A WINDOW TO LARGE-SCALE MOTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nusser, Adi; Branchini, Enzo; Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu
2012-08-10
Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
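The link between a measured proper motion and a transverse peculiar velocity is the standard conversion v_t = 4.74 μ d (with μ in arcsec/yr and d in pc); a small helper with the unit handling made explicit:

```python
def transverse_velocity_km_s(mu_muas_per_yr, d_mpc):
    """v_t [km/s] from proper motion [micro-arcsec/yr] and distance [Mpc].
    The factor 4.74 km/s corresponds to 1 arcsec/yr at a distance of 1 pc."""
    mu_arcsec = mu_muas_per_yr * 1e-6
    d_pc = d_mpc * 1e6
    return 4.74 * mu_arcsec * d_pc

# a ~1 micro-arcsec/yr proper motion at 60 Mpc is a ~280 km/s transverse flow
print(transverse_velocity_km_s(1.0, 60.0))
```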
Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase
Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele
2015-01-01
Adenylate Kinase (AK) is a signal-transducing protein that regulates cellular energy homeostasis by balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer, and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value in guiding the rational development of new medications. Here, using a metadynamics-based computational protocol, we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots for designing enzyme inhibitors. Our study offers tools that open new areas of investigation of large-scale motion in proteins. PMID:25672826
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
A low-cost iron-cadmium redox flow battery for large-scale energy storage
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.
2016-10-01
The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies, offering a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2%, respectively, at 120 mA cm⁻². The Fe/Cd RFB exhibits stable efficiencies, with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of $108 kWh⁻¹ for 8-h energy storage. Intrinsically low-cost active materials, high cell performance, and excellent capacity retention make the Fe/Cd RFB a promising solution for large-scale energy storage systems.
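The quoted efficiencies are ratios over a charge-discharge cycle (coulombic: charge out over charge in; energy: energy out over energy in; their quotient is the voltage efficiency). A small sketch, with illustrative capacities and energies chosen to reproduce the reported values:

```python
def cycle_efficiencies(q_out_ah, q_in_ah, e_out_wh, e_in_wh):
    """Coulombic (CE), energy (EE), and voltage (VE = EE/CE) efficiencies."""
    ce = q_out_ah / q_in_ah
    ee = e_out_wh / e_in_wh
    return ce, ee, ee / ce

ce, ee, ve = cycle_efficiencies(q_out_ah=9.87, q_in_ah=10.0,
                                e_out_wh=8.02, e_in_wh=10.0)
print(f"CE = {ce:.1%}, EE = {ee:.1%}, VE = {ve:.1%}")  # 98.7%, 80.2%, 81.3%
```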
Trains of large Kelvin-Helmholtz billows observed in the Kuroshio above a seamount
NASA Astrophysics Data System (ADS)
Chang, Ming-Huei; Jheng, Sin-Ya; Lien, Ren-Chieh
2016-08-01
Trains of large Kelvin-Helmholtz (KH) billows within the Kuroshio current at ~230 m depth off southeastern Taiwan and above a seamount were observed by shipboard instruments. The trains of large KH billows were present in a strong shear band along the 0.55 m s-1 isotach within the Kuroshio core; they are presumably produced by flow interactions with the rapidly changing topography. Each individual billow, resembling a cat's eye, had a horizontal length scale of 200 m, a vertical scale of 100 m, and a timescale of 7 min, near the local buoyancy frequency. Overturns were observed frequently in the billow cores and the upper eyelids. The turbulent kinetic energy dissipation rates estimated using the Thorpe scale had an average value of O(10-4) W kg-1 and a maximum value of O(10-3) W kg-1. The turbulence mixing induced by the KH billows may exchange Kuroshio water with the surrounding water masses.
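The Thorpe-scale dissipation estimate mentioned above follows a standard recipe: re-sort the measured density profile into a gravitationally stable one, take the rms of the resulting displacements as the Thorpe scale L_T, and use epsilon ≈ 0.64 L_T² N³. A hedged Python sketch, with an illustrative toy profile and the conventional 0.64 constant rather than the paper's data:

```python
import numpy as np

def thorpe_dissipation(density, depths, N):
    """Estimate epsilon (W kg-1) from an overturning density profile."""
    order = np.argsort(density)                # re-sort to a stable profile
    displacements = depths - depths[order]     # Thorpe displacements
    L_T = np.sqrt(np.mean(displacements ** 2)) # rms Thorpe scale
    return 0.64 * L_T ** 2 * N ** 3            # Ozmidov-based scaling

# Toy overturning profile near 230 m depth; N from a ~7 min buoyancy period.
z = np.linspace(200.0, 260.0, 7)
rho = np.array([1024.1, 1024.3, 1024.2, 1024.25, 1024.4, 1024.35, 1024.5])
print(thorpe_dissipation(rho, z, N=2 * np.pi / (7 * 60)))  # O(1e-4) W kg-1
```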
Universal properties of knotted polymer rings.
Baiesi, M; Orlandini, E
2012-09-01
By performing Monte Carlo sampling of N-step self-avoiding polygons embedded on different Bravais lattices, we explore the robustness of universality in the entropic, metric, and geometrical properties of knotted polymer rings. In particular, by simulating polygons with N up to 10^5, we furnish a sharp estimate of the asymptotic values of the knot probability ratios and show their independence of the lattice type. This universal feature was previously suggested, although with different estimates of the asymptotic values. In addition, we show that the scaling behavior of the mean-squared radius of gyration of polygons depends on their knot type only through its correction to scaling. Finally, as a measure of the geometrical self-entanglement of the self-avoiding polygons, we consider the standard deviation of the writhe distribution and estimate its power-law behavior in the large-N limit. The estimates of the power exponent depend neither on the lattice nor on the knot type, strongly supporting an extension of the universality property to some features of the geometrical entanglement.
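Power-law exponents of this kind are typically extracted from log-log fits of the observable against N. The toy fit below illustrates the procedure on synthetic data; the 0.5 exponent and the correction-to-scaling term are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Synthetic writhe standard deviation with a decaying correction term.
N = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
sigma_w = 0.9 * N ** 0.5 * (1 + 2.0 / np.sqrt(N))

slope, _ = np.polyfit(np.log(N), np.log(sigma_w), 1)
print(f"fitted exponent: {slope:.3f}")  # slightly below 0.5 here, due to the
                                        # correction term at finite N
```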
Voluntary EMG-to-force estimation with a multi-scale physiological muscle model
2013-01-01
Background: EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model in which muscle activation, force-length, and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation, and speeds of contraction, which may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model that has a link to the underlying crossbridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation.

Methods: The ankle torque was measured for plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile part were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model, and the multi-scale physiological model based on Huxley theory. The comparison was made in normalized scale against the case of maximum voluntary contraction.

Results: The estimates obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors improved with the nonlinear Hill model compared to the linear Hill model, but it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error of the multi-scale model was 6.1%, while maintaining uniform estimation performance in both fast and slow contraction schemes.

Conclusions: We introduced a novel approach to EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through integration of the neural activation frequency property and the force-velocity relationship via cross-bridge dynamics. PMID:24007560
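For orientation, a Hill-type contractile element multiplies an activation signal by force-length and force-velocity factors. The sketch below uses common textbook curve shapes as assumptions; it is not the paper's multi-scale model, and the curve parameters are illustrative.

```python
import numpy as np

def hill_force(activation, l_norm, v_norm, f_max=1.0):
    """Hill-type contractile force: activation x force-length x force-velocity.

    l_norm: fiber length / optimal length; v_norm: shortening velocity / v_max.
    """
    f_l = np.exp(-((l_norm - 1.0) ** 2) / 0.45)      # gaussian force-length
    f_v = (1.0 - v_norm) / (1.0 + 4.0 * v_norm)      # hyperbolic force-velocity
    return f_max * activation * f_l * np.clip(f_v, 0.0, None)

print(hill_force(activation=0.6, l_norm=1.0, v_norm=0.1))  # ~0.386 * f_max
```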
Hill, J Colin; Ferraro, Simone; Battaglia, Nick; Liu, Jia; Spergel, David N
2016-07-29
The kinematic Sunyaev-Zel'dovich (KSZ) effect, the Doppler boosting of cosmic microwave background (CMB) photons due to Compton scattering off free electrons with nonzero bulk velocity, probes the abundance and the distribution of baryons in the Universe. All KSZ measurements to date have explicitly required spectroscopic redshifts. Here, we implement a novel estimator for the KSZ-large-scale structure cross-correlation based on projected fields: it does not require redshift estimates for individual objects, allowing KSZ measurements from large-scale imaging surveys. We apply this estimator to cleaned CMB temperature maps constructed from Planck and WMAP data and a galaxy sample from the Wide-field Infrared Survey Explorer (WISE). We measure the KSZ effect at 3.8σ–4.5σ significance, depending on the use of additional WISE galaxy bias constraints. We verify that our measurements are robust to possible dust emission from the WISE galaxies. Assuming the standard Λ cold dark matter cosmology, we directly constrain (f_b/0.158)(f_free/1.0) = 1.48 ± 0.19 (statistical error only) at redshift z ≈ 0.4, where f_b is the fraction of matter in baryonic form and f_free is the free electron fraction. This is the tightest KSZ-derived constraint reported to date on these parameters. Astronomers have long known that baryons do not trace dark matter on ~kiloparsec scales, and there has been strong evidence that galaxies are baryon poor. The consistency between the f_b value found here and the values inferred from analyses of the primordial CMB and big bang nucleosynthesis verifies that baryons approximately trace the dark matter distribution down to ~megaparsec scales. While our projected-field estimator is already competitive with other KSZ approaches when applied to current data sets (because we are able to use the full-sky WISE photometric survey), it will yield enormous signal-to-noise ratios when applied to upcoming high-resolution, multifrequency CMB surveys.
NASA Astrophysics Data System (ADS)
Deo, Ram K.; Domke, Grant M.; Russell, Matthew B.; Woodall, Christopher W.; Andersen, Hans-Erik
2018-05-01
Aboveground biomass (AGB) estimates for regional-scale forest planning have become cost-effective with free access to satellite data from sensors such as Landsat and MODIS. However, the accuracy of AGB predictions based on passive optical data depends on the spatial resolution and spatial extent of the target area, as fine-resolution (small-pixel) data are associated with smaller coverage and longer repeat cycles compared to coarse-resolution data. This study evaluated the effect of the spatial resolution of Landsat-derived predictors on the accuracy of regional AGB models at three sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We combined national forest inventory data with Landsat-derived predictors at spatial resolutions ranging from 30 m to 1000 m to understand the optimal spatial resolution of optical data for large-area (regional) AGB estimation. Ten generic models were developed using data collected in 2014, 2015, and 2016, and the predictions were evaluated (i) at the county level against the estimates of the USFS Forest Inventory and Analysis Program, which relied on the EVALIDator tool and national forest inventory data from the 2009–2013 cycle, and (ii) within a large number of strips (~1 km wide) predicted via LiDAR metrics at 30 m spatial resolution. The county-level estimates by the EVALIDator and Landsat models were highly related (R² > 0.66), although the R² varied significantly across sites and predictor resolutions. The mean and standard deviation of county-level estimates followed increasing and decreasing trends, respectively, with models of coarser resolution. The Landsat-based total AGB estimates were larger than the LiDAR-based total estimates within the strips; however, the mean of the AGB predictions by LiDAR was mostly within one standard deviation of the mean predictions obtained from the Landsat-based model at any of the resolutions. We conclude that satellite data at resolutions up to 1000 m provide acceptable accuracy for continental-scale analysis of AGB.
NASA Astrophysics Data System (ADS)
Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Pilippe
2016-08-01
Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at local scale and over large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, the EO image analysis methods commonly used to estimate EVDs are time and resource consuming. Moreover, variations in microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we present what DoH and EVDs are, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at local scale along with its challenges; finally, we propose an approach based on EO images to estimate, at local scale, indicators pertaining to EVDs of vector-borne diseases.
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
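The core idea, calibrating model output against monitors scale by scale and predicting only with the informative scales, can be pictured with a one-dimensional stand-in. Everything below (the synthetic data, the band split, a single low-frequency slope) is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 1, n)
truth = np.sin(2 * np.pi * 3 * x)                  # large-scale signal
model = truth + 0.3 * rng.standard_normal(n)       # model output, noisy at
                                                   # small scales
obs = truth + 0.1 * rng.standard_normal(n)         # monitoring data

M, O = np.fft.rfft(model), np.fft.rfft(obs)
k = np.arange(M.size)
low = k < 10                                       # keep only large scales
beta_low = np.vdot(M[low], O[low]).real / np.vdot(M[low], M[low]).real
pred = np.fft.irfft(np.where(low, beta_low * M, 0.0), n)
rmse = np.sqrt(np.mean((pred - truth) ** 2))
print(f"low-band slope: {beta_low:.2f}, RMSE vs truth: {rmse:.3f}")
```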
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Karbasi, Amin; Crawford, Forrest W.
2017-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
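A toy illustration of the information such estimators exploit: for a uniform random sample W of size n, each sampled vertex's within-sample degree is a binomial thinning of its total degree, so E[d_W] ≈ d_total (n−1)/(N−1), and inverting this moment gives a simple size estimate. This is a hedged stand-in for the actual PULSE algorithm, which additionally uses block memberships; the simulation parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 5000, 0.004                 # hidden population size, edge probability
n = 400                            # sample size

# Simulate the observables directly: total degrees, then binomial thinning
# into the sample (equivalent in expectation to sampling an SBM with K = 1).
total_deg = rng.binomial(N - 1, p, size=n)
within_deg = rng.binomial(total_deg, (n - 1) / (N - 1))

N_hat = 1 + (n - 1) * total_deg.sum() / within_deg.sum()
print(f"true N = {N}, estimated N = {N_hat:.0f}")
```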
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images or different parts of a large scene is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image generally differs in scale from the filtering template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a method based on the time domain, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images: a few artificially scaled input images are sent to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8–1 and decreases over 1–1.2. The original scale of the input image can thus be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8–4.8: scale factors beyond 1.2 are measured by rescaling the input image by factors of 1/2, 1/3, and 1/4, correlating the rescaled images with the template images, and estimating the new corresponding scale factor within 0.8–1.2.
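A digital analogue of the time-sequential search: correlate rescaled copies of the input with a template and take the candidate scale that maximizes the correlation peak. The images, candidate grid, and cropping scheme below are illustrative assumptions, not the optical system's parameters.

```python
import numpy as np
from scipy.ndimage import zoom

def peak_corr(a, b):
    """Normalized correlation over the common top-left region."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    a, b = a[:h, :w] - a[:h, :w].mean(), b[:h, :w] - b[:h, :w].mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def best_scale(image, template, scales):
    """Return the candidate scale whose rescaled image best matches template."""
    scores = [peak_corr(zoom(image, s), template) for s in scales]
    return scales[int(np.argmax(scores))]

rng = np.random.default_rng(0)
template = rng.random((64, 64))
observed = zoom(template, 1.1)      # input arrives scaled by an unknown 1.1
print(best_scale(observed, template, [1 / 1.2, 1 / 1.1, 1.0, 1.1, 1.2]))
# -> ~0.909, i.e. the inverse of the unknown scale factor
```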
The Variance of Intraclass Correlations in Three- and Four-Level Models
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
New Directions: Understanding Interactions of Air Quality and Climate Change at Regional Scales
Estimates of the impacts of short-lived climate forcers (SLCFs) and of their mitigation on the radiation balance carry large uncertainty, because current global model set-ups and simulations contain simplified parameterizations and do not completely cover the full range of air...
A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment
NASA Technical Reports Server (NTRS)
Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.;
2013-01-01
Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating these observations result in uncertainty in the simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations, and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences between the models are also apparent in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to the forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, enabling a more complete model investigation than the traditional single best-estimate simulation alone.
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2017-08-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitudes linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated whether they estimate long time periods accurately and whether they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence, rather than as multiplicatively related, when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
HAPEX-Sahel: A large-scale study of land-atmosphere interactions in the semi-arid tropics
NASA Technical Reports Server (NTRS)
Gutorbe, J-P.; Lebel, T.; Tinga, A.; Bessemoulin, P.; Brouwer, J.; Dolman, A.J.; Engman, E. T.; Gash, J. H. C.; Hoepffner, M.; Kabat, P.
1994-01-01
The Hydrologic Atmospheric Pilot EXperiment in the Sahel (HAPEX-Sahel) was carried out in Niger, West Africa, during 1991-1992, with an intensive observation period (IOP) in August-October 1992. It aims at improving the parameterization of land surface-atmosphere interactions at the General Circulation Model (GCM) gridbox scale. The experiment combines remote sensing and ground-based measurements with hydrological and meteorological modeling to develop aggregation techniques for use in large-scale estimates of the hydrological and meteorological behavior of large areas in the Sahel. The experimental strategy consisted of a period of intensive measurements during the transition from the rainy to the dry season, backed up by a series of long-term measurements in a 1 by 1 deg square in Niger. Three 'supersites' were instrumented with a variety of hydrological and (micro)meteorological equipment to provide detailed information on the surface energy exchange at the local scale. Boundary layer measurements and aircraft measurements were used to provide information at scales of 100-500 sq km. All relevant remote sensing images were obtained for this period. This program of measurements is now being analyzed, and an extensive modeling program is under way to aggregate the information at all scales up to the GCM gridbox scale. The experimental strategy and some preliminary results of the IOP are described.
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
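For reference, pair-count estimators of the two-point correlation function compare data-data, data-random, and random-random pair histograms. The sketch below implements the normalization-insensitive form ξ = DD·RR/DR² − 1 (the ratio form advocated in this paper, in which the mean-density normalizations cancel) on synthetic one-dimensional data; the binning and geometry are simplified assumptions.

```python
import numpy as np

def pair_counts(a, b, edges):
    """Histogram of all pairwise separations between catalogues a and b."""
    d = np.abs(a[:, None] - b[None, :]).ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

rng = np.random.default_rng(0)
data = rng.random(500)               # "galaxies" on the unit interval
rand = rng.random(2000)              # denser random catalogue for low noise
edges = np.linspace(0.01, 0.2, 20)

DD = pair_counts(data, data, edges)
RR = pair_counts(rand, rand, edges)
DR = pair_counts(data, rand, edges)
xi = DD * RR / DR ** 2 - 1           # normalizations cancel in this ratio
print(np.round(xi, 3))               # ~0 everywhere for unclustered data
```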
Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos
2009-05-01
Only fragments of this report's text were extracted. They concern divergence of the Kalman filter: the covariance matrix can become negative due to numerical issues, and when the number of observations is large, divergence problems can arise under certain conditions due to truncation errors; techniques to counter these divergence problems are cited from Brown, R. G. and Hwang, P. Y. C. (1997), Introduction to Random Signals and Applied Kalman Filtering. The fragments also cite a Deep-Sea Research reference (23:559-582) on instruments applied to MODE-73.
Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang
2008-01-01
Background: Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large-scale GWAS.

Results: The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large-scale GWAS; it achieved excellent scalability for large-scale analysis and portability across parallel computing platforms. EPISNP is the serial computing program, based on the EPISNPmpi code, for epistasis testing in small-scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements.

Conclusion: The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large-scale GWAS, and the EPISNP serial computing programs are convenient tools for epistasis analysis in small-scale GWAS using commonly available computer hardware. PMID:18644146
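A hedged sketch of the kind of two-locus test involved: an F-test for SNP-by-SNP interaction on a quantitative trait via two-way ANOVA. This illustrates the statistical idea only; the extended Kempthorne partition into additive × additive, additive × dominance, dominance × additive, and dominance × dominance components is not reproduced here, and the simulated data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 1000
snp1 = rng.integers(0, 3, n)            # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
# Trait with a main effect of snp1 plus a genuine two-locus interaction.
trait = 0.2 * snp1 + 0.3 * (snp1 == 2) * (snp2 == 0) + rng.standard_normal(n)

df = pd.DataFrame({"y": trait, "g1": snp1.astype(str), "g2": snp2.astype(str)})
fit = smf.ols("y ~ C(g1) * C(g2)", data=df).fit()
print(anova_lm(fit, typ=2).loc["C(g1):C(g2)"])   # interaction F-test
```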
Measuring and correcting wobble in large-scale transmission radiography.
Rogers, Thomas W; Ollier, James; Morton, Edward J; Griffin, Lewis D
2017-01-01
Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and the amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. We show that these methods are able to correct for 87% of the image error due to wobble, and, when applied to difficult images, a significant visible improvement in intensity-windowed image quality is observed. The method improves the precision of wobble-affected images, which should help improve the detection of threats and the identification of different materials in the image.
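The regression-forest idea in (iv) amounts to learning a mapping from beam-profile measurements to instantaneous detector displacement. The sketch below uses scikit-learn's generic random forest regressor on synthetic data; the feature construction, noise level, and displacement range are all assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_scans, n_bpd = 2000, 8
wobble = rng.uniform(-2.0, 2.0, n_scans)                  # mm displacement
x = np.linspace(-1, 1, n_bpd)
# Synthetic BPD readings: a beam profile whose peak shifts with wobble.
profiles = np.exp(-((x[None, :] - 0.1 * wobble[:, None]) ** 2) / 0.2)
profiles += 0.02 * rng.standard_normal(profiles.shape)    # sensor noise

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(profiles[:1500], wobble[:1500])                # train on 1500 scans
pred = forest.predict(profiles[1500:])
print(f"RMS error: {np.sqrt(np.mean((pred - wobble[1500:]) ** 2)):.3f} mm")
```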
NASA Astrophysics Data System (ADS)
Hendrickx, Jan M. H.; Kleissl, Jan; Gómez Vélez, Jesús D.; Hong, Sung-ho; Fábrega Duque, José R.; Vega, David; Moreno Ramírez, Hernán A.; Ogden, Fred L.
2007-04-01
Accurate estimation of sensible and latent heat fluxes, as well as soil moisture, from remotely sensed satellite images poses a great challenge. Yet it is critical to face this challenge, since estimating the spatial and temporal distributions of these parameters over large areas is impossible using only ground measurements. A major difficulty for the calibration and validation of operational remote sensing methods such as SEBAL, METRIC, and ALEXI is the ground measurement of sensible heat fluxes at a scale similar to the spatial resolution of the remote sensing image. While the spatial length scale of remote sensing images covers a range from 30 m (Landsat) to 1000 m (MODIS), direct methods to measure sensible heat fluxes such as eddy covariance (EC) only provide point measurements at a scale that may be considerably smaller than the estimate obtained from a remote sensing method. The Large Aperture Scintillometer (LAS) flux footprint area is larger (up to 5000 m long) and its spatial extent better constrained than that of EC systems. Therefore, scintillometers offer the unique possibility of measuring the vertical flux of sensible heat averaged over areas comparable with several pixels of a satellite image (up to about 40 Landsat thermal pixels or about 5 MODIS thermal pixels). The objective of this paper is to present our experiences with an existing network of seven scintillometers in New Mexico and a planned network of three scintillometers in the humid tropics of Panama and Colombia.