Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.
Wang, Jiaoyu; Zhang, Zhen; Wang, Yanli; Li, Ling; Chai, Rongyao; Mao, Xueqin; Jiang, Hua; Qiu, Haiping; Du, Xinfa; Lin, Fucheng; Sun, Guochang
2013-01-01
Peroxisomes participate in various important metabolisms and are required for pathogenicity of fungal plant pathogens. Peroxisomal matrix proteins are imported from the cytoplasm into peroxisomes through the peroxisomal targeting signal 1 (PTS1) or peroxisomal targeting signal 2 (PTS2) import pathway. The PEX5 and PEX7 genes participate in the two pathways, respectively. The involvement of the PEX7-mediated PTS2 import pathway in fungal pathogenicity has been documented, while that of PTS1 remains unclear. Through null mutant analysis of MoPEX5, the PEX5 homolog in Magnaporthe oryzae, we report the crucial roles of the PTS1 pathway in development and host infection in the rice blast fungus, and compare them with those of PTS2. We found that MoPEX5 disruption specifically blocked the PTS1 pathway. Δmopex5 was unable to use lipids as a sole carbon source and lost pathogenicity completely. Similar to Δmopex7, Δmopex5 exhibited significant reductions in lipid utilization and mobilization, appressorial turgor generation and H2O2 resistance. Additionally, Δmopex5 presented some distinct defects, undetected in Δmopex7, in vegetative growth, conidial morphogenesis, appressorial morphogenesis and melanization. The results indicate that the PTS1 peroxisomal import pathway, in addition to PTS2, is required for fungal development and pathogenicity of the rice blast fungus and, as a main peroxisomal import pathway, plays a more predominant role than PTS2. PMID:23405169
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
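A minimal sketch of the classification step described above, assuming hypothetical array shapes and component counts (not the authors' code): per-basin parameter-sensitivity vectors are compressed with PCA and then clustered with an EM-fitted Gaussian mixture, as in the S-Class construction.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
sensitivity = rng.random((431, 12))  # hypothetical: 431 basins x 12 parameter sensitivity scores

scores = PCA(n_components=3).fit_transform(sensitivity)  # reduce dimensionality
labels = GaussianMixture(n_components=5, random_state=0).fit_predict(scores)  # EM clustering

for k in np.unique(labels):
    print(f"S-Class {k}: {np.sum(labels == k)} basins")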
Future projection of design storms using a GCM-informed weather generator
NASA Astrophysics Data System (ADS)
Kim, T. W.; Wi, S.; Valdés-Pineda, R.; Valdés, J. B.
2017-12-01
Rainfall Intensity-Duration-Frequency (IDF) curves are one of the most common tools used to provide planners with a description of the frequency of extreme rainfall events of various intensities and durations. Deriving appropriate IDF estimates is therefore important to avoid malfunctions of water structures that cause huge damage. Evaluating IDF estimates in the context of climate change has become more important because projections from climate models suggest that the frequency of intense rainfall events will increase in the future due to the increase in greenhouse gas emissions. In this study, the Bartlett-Lewis (BL) stochastic rainfall model is employed to generate annual maximum series of various sub-daily durations for test basins of the Model Parameter Estimation Experiment (MOPEX) project, and to derive the IDF curves in the context of climate changes projected by the North American Regional Climate Change Assessment Program (NARCCAP) models. We found that the observed annual rainfall maximum series is reasonably represented by the synthetic annual maximum series generated by the BL model. The observed data are perturbed by change factors to incorporate the NARCCAP climate change scenarios into the IDF estimates. The future IDF curves show a significant difference from the historical IDF curves calculated for the period 1968-2000. Overall, the projected IDF curves show an increasing trend over time. The impacts of changes in extreme rainfall on the hydrologic response of the MOPEX basins are also explored. Acknowledgement: This research was supported by a grant [MPSS-NH-2015-79] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.
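A minimal sketch (not the study's code) of the change-factor perturbation described above: observed annual maxima are scaled by a hypothetical climate-scenario factor and return levels are re-estimated from a Gumbel fit; the series and factor are illustrative.

import numpy as np
from scipy import stats

annual_max_1h = np.array([18.0, 25.3, 22.1, 30.4, 27.8, 19.6, 33.2, 24.5])  # mm/h, hypothetical
change_factor = 1.12  # hypothetical future/historical ratio from a climate scenario

for label, series in [("historical", annual_max_1h), ("future", annual_max_1h * change_factor)]:
    loc, scale = stats.gumbel_r.fit(series)
    for T in (10, 50, 100):  # return periods in years
        intensity = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
        print(f"{label:10s} T={T:3d} yr: {intensity:.1f} mm/h")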
A simple topography-driven, calibration-free runoff generation model
NASA Astrophysics Data System (ADS)
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains a focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and, in part, the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturated area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment than TOPMODEL, which is based on the topographic wetness index (TWI). The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
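A minimal sketch of the HSC idea as described above (an illustration, not the authors' code): cells with low Height Above the Nearest Drainage saturate first, so the fraction of cells whose HAND value lies below a rising filling level traces out a storage capacity curve; the HAND values and levels are made up.

import numpy as np

hand = np.array([0.2, 0.5, 1.0, 1.8, 3.0, 4.5, 6.0, 9.0])  # m, hypothetical catchment cells

def saturated_fraction(level_m):
    # fraction of the catchment saturated when storage fills up to level_m
    return float(np.mean(hand <= level_m))

for level in (0.5, 2.0, 5.0):
    print(f"filling level {level:.1f} m -> saturated area fraction {saturated_fraction(level):.2f}")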
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
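A minimal sketch of the combinatorial idea behind FUSE (the released code is FORTRAN-90; this illustration uses placeholder option names, not the actual FUSE decision set): model structures are enumerated as combinations of alternative process parameterizations.

from itertools import product

options = {
    "upper_layer": ["single_state", "tension_storage"],
    "percolation": ["field_capacity", "saturated_zone_control"],
    "baseflow": ["linear_reservoir", "nonlinear_reservoir", "parallel_reservoirs"],
    "surface_runoff": ["topmodel_like", "vic_like"],
}

structures = list(product(*options.values()))
print(f"{len(structures)} candidate structures")  # 2 * 2 * 3 * 2 = 24 with these placeholders
print(dict(zip(options, structures[0])))          # one example structure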
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.
2013-12-01
With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill posed.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
NASA Astrophysics Data System (ADS)
Guastella, Peter; Rebull, L.; DeWolf, C.; Johnson, C. H.; McDonald, D. W.; Schaefers, J.; Spuck, T.
2009-01-01
We present several learning activities that were performed to explore YSOs within LDN 425 and 981. Classroom instruction on the characteristics of YSOs was supplemented with hands-on learning of the software needed to search Spitzer mosaics for YSO candidates. Structured activities were used to teach the intricacies of MOPEX, ATP and Excel. Excel worksheets were developed to help students convert flux densities into magnitudes. These magnitudes were then used to create Spectral Energy Distributions (SEDs), plotting the energy against the wavelength of each candidate YSO. This research was made possible through the Spitzer Space Telescope Research Program for Teachers and Students and was funded by the Spitzer Science Center (SSC) and the National Optical Astronomy Observatory (NOAO). Please see our companion education poster by McDonald et al. titled "Spitzer - Hot and Colorful Student Activities" and our research poster by Johnson et al. entitled "Star Formation in Lynds Dark Nebulae."
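A worked sketch of the flux-to-magnitude conversion the worksheets perform, m = -2.5 log10(F / F_zero); the zero-magnitude flux density used here (280.9 Jy, approximately the IRAC channel 1 value) is an assumption for illustration.

import math

def flux_to_mag(flux_mjy, zero_point_jy=280.9):
    # convert a flux density in mJy to a magnitude, given a zero point in Jy (assumed value)
    return -2.5 * math.log10((flux_mjy / 1000.0) / zero_point_jy)

print(f"{flux_to_mag(2.5):.2f} mag")  # a 2.5 mJy source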
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of its functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.
Exploring the correlation between annual precipitation and potential evaporation
NASA Astrophysics Data System (ADS)
Chen, X.; Buchberger, S. G.
2017-12-01
The interdependence between precipitation and potential evaporation is closely related to the classic Budyko framework. In this study, a systematic investigation of the correlation between precipitation and potential evaporation at the annual time step is conducted at both the point scale and the watershed scale. The point scale precipitation and potential evaporation data over the period 1984-2015 are collected from 259 weather stations across the United States. The watershed scale precipitation data of 203 watersheds across the United States are obtained from the Model Parameter Estimation Experiment (MOPEX) dataset from 1983 to 2002, and potential evaporation data of these 203 watersheds over the same period are obtained from a remote-sensing algorithm. The results show that the majority of the weather stations (77%) and watersheds (79%) exhibit a statistically significant negative correlation between annual precipitation and annual potential evaporation. The aggregated data cloud of precipitation versus potential evaporation follows a curve based on the combination of the Budyko-type equation and Bouchet's complementary relationship. Our result suggests that annual precipitation and potential evaporation are not independent when both Budyko's hypothesis and Bouchet's hypothesis are valid. Furthermore, we find that the wet surface evaporation, which is controlled primarily by shortwave radiation as defined in Bouchet's hypothesis, exhibits less dependence on precipitation than the potential evaporation. As a result, we suggest that wet surface evaporation is a better representation of energy supply than potential evaporation in the Budyko framework.
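A minimal sketch (with synthetic data) of the station-level test described above: correlate annual precipitation with annual potential evaporation and flag a statistically significant negative relationship.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = 32
precip = rng.normal(1000, 150, years)                      # mm/yr, hypothetical station record
pot_evap = 1400 - 0.3 * precip + rng.normal(0, 40, years)  # built-in negative dependence

r, p = stats.pearsonr(precip, pot_evap)
print(f"r = {r:.2f}, p = {p:.3g}, significant negative: {bool(r < 0 and p < 0.05)}")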
Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks
Maca, Petr; Pech, Pavel
2016-01-01
The presented paper compares forecasts of drought indices based on two different models of artificial neural networks. The first model is based on a feedforward multilayer perceptron, sANN, and the second is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought indices forecast, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875
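A minimal sketch of how an SPI series is commonly derived (the paper's exact procedure may differ): fit a gamma distribution to aggregated precipitation and map its CDF through the standard normal quantile function.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=600)  # mm, hypothetical

a, loc, scale = stats.gamma.fit(monthly_precip, floc=0)      # location fixed at zero
cdf = stats.gamma.cdf(monthly_precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))           # equiprobability transform

print(f"SPI mean {spi.mean():.2f}, std {spi.std():.2f}")     # roughly 0 and 1 by construction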
NASA Astrophysics Data System (ADS)
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. This predictive distribution is then used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.
NASA Astrophysics Data System (ADS)
Carmona, Alejandra M.; Sivapalan, Murugesu; Yaeger, Mary A.; Poveda, Germán.
2014-12-01
Patterns of interannual variability of the annual water balance are explored using data from 190 MOPEX catchments across the continental U.S. This analysis has led to the derivation of a quantitative, dimensionless, Budyko-type framework to characterize the observed interannual variability of annual water balances. The resulting model is expressed in terms of a humidity index that measures the competition between water and energy availability at the annual time scale, and a similarity parameter (α) that captures the net effects of other short-term climate features and local landscape characteristics. Application of the model to the 190 study catchments revealed the existence of space-time symmetry between spatial (between-catchment) variability and general trends in the temporal (between-year) variability of the annual water balances. The MOPEX study catchments were classified into eight similar catchment groups on the basis of the magnitude of the similarity parameter α. Interesting regional trends of α across the continental U.S. were brought out by identifying similarities between the spatial positions of the catchment groups and the mapping of distinctive ecoregions that implicitly take into account common climatic and vegetation characteristics. In this context, this study has revealed a deep similarity in the observed space-time variability of water balances, which also reflects the codependence and coevolution of climate and landscape properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach has better performance than the single model for the Guadalupe catchment, where multiple dominant processes are evident in the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
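A minimal sketch of the expert-combination idea behind HME (not the authors' implementation, which fits gates and experts jointly): two expert predictions are blended by a gate whose weight depends on an indicator variable; the logistic gate and the indicator are illustrative choices.

import numpy as np

def gate(indicator, a=1.0, b=0.0):
    # logistic gating weight for the "wet" expert as a function of an indicator
    return 1.0 / (1.0 + np.exp(-(a * indicator + b)))

antecedent_wetness = np.array([-2.0, -0.5, 0.0, 1.5])  # hypothetical indicator values
q_expert_wet = np.array([5.0, 4.0, 3.5, 6.0])          # expert tuned to wet-catchment response
q_expert_dry = np.array([1.0, 1.5, 2.0, 2.5])          # expert tuned to dry-catchment response

w = gate(antecedent_wetness)
q_combined = w * q_expert_wet + (1 - w) * q_expert_dry
print(np.round(q_combined, 2))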
IRACproc: IRAC Post-BCD Processing
NASA Astrophysics Data System (ADS)
Schuster, Mike; Marengo, Massimo; Patten, Brian
2012-09-01
IRACproc is a software suite that facilitates the co-addition of dithered or mapped Spitzer/IRAC data to make them ready for further analysis, with application to a wide variety of IRAC observing programs. The software runs within PDL, a numeric extension for Perl available from pdl.perl.org, and as stand-alone Perl scripts. In acting as a wrapper for the Spitzer Science Center's MOPEX software, IRACproc improves the rejection of cosmic rays and other transients in the co-added data. In addition, IRACproc performs (optional) Point Spread Function (PSF) fitting, subtraction, and masking of saturated stars.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. This increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. The sensitivity of the infiltration parameter (b) and the drainage parameter (exp) was strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation, indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although the parameter sensitivities are strictly valid only for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
The CAMELS data set: catchment attributes and meteorology for large-sample studies
NASA Astrophysics Data System (ADS)
Addor, Nans; Newman, Andrew J.; Mizukami, Naoki; Clark, Martyn P.
2017-10-01
We present a new data set of attributes for 671 catchments in the contiguous United States (CONUS) minimally impacted by human activities. This complements the daily time series of meteorological forcing and streamflow provided by Newman et al. (2015b). To produce this extension, we synthesized diverse and complementary data sets to describe six main classes of attributes at the catchment scale: topography, climate, streamflow, land cover, soil, and geology. The spatial variations among basins over the CONUS are discussed and compared using a series of maps. The large number of catchments, combined with the diversity of the attributes we extracted, makes this new data set well suited for large-sample studies and comparative hydrology. In comparison to the similar Model Parameter Estimation Experiment (MOPEX) data set, this data set relies on more recent data, covers a wider range of attributes, and its catchments are more evenly distributed across the CONUS. This study also involves assessments of the limitations of the source data sets used to compute catchment attributes, as well as detailed descriptions of how the attributes were computed. The hydrometeorological time series provided by Newman et al. (2015b, https://doi.org/10.5065/D6MW2F4D) together with the catchment attributes introduced in this paper (https://doi.org/10.5065/D6G73C3Q) constitute the freely available CAMELS data set, which stands for Catchment Attributes and MEteorology for Large-sample Studies.
Catchment-scale groundwater recharge and vegetation water use efficiency
NASA Astrophysics Data System (ADS)
Troch, P. A. A.; Dwivedi, R.; Liu, T.; Meira, A.; Roy, T.; Valdés-Pineda, R.; Durcik, M.; Arciniega, S.; Brena-Naranjo, J. A.
2017-12-01
Precipitation undergoes a two-step partitioning when it falls on the land surface. At the land surface and in the shallow subsurface, rainfall or snowmelt can either run off as infiltration/saturation excess or quick subsurface flow. The rest will be stored temporarily in the root zone. From the root zone, water can leave the catchment as evapotranspiration or percolate further and recharge deep storage (e.g. fractured bedrock aquifer). Quantifying the average amount of water that recharges deep storage and sustains low flows is extremely challenging, as we lack reliable methods to quantify this flux at the catchment scale. It was recently shown, however, that for semi-arid catchments in Mexico, an index of vegetation water use efficiency, i.e. the Horton index (HI), could predict deep storage dynamics. Here we test this finding using 247 MOPEX catchments across the conterminous US, including energy-limited catchments. Our results show that the observed HI is indeed a reliable predictor of deep storage dynamics in space and time. We further investigate whether the HI can also predict average recharge rates across the conterminous US. We find that the HI can reliably predict the average recharge rate, estimated from the 50th percentile flow of the flow duration curve. Our results compare favorably with estimates of average recharge rates from the US Geological Survey. Previous research has shown that HI can be reliably estimated from the aridity index, mean slope and mean elevation of a catchment (Voepel et al., 2011). We recalibrated Voepel's model and used it to predict the HI for our 247 catchments. We then used these predicted values of the HI to estimate average recharge rates for our catchments, and compared them with those estimated from the observed HI. We find that the accuracies of our predictions based on observed and predicted HI are similar. This provides an estimation method for catchment-scale average recharge rates that is based on easily derived catchment characteristics, such as climate and topography, and free of discharge measurements.
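A minimal sketch of the Horton index as used in this line of work, assuming a simple annual water balance in which precipitation P splits into quickflow S and wetting W = P - S, and W splits into vaporization V = W - U and deeper drainage/baseflow U, so HI = V / W; the numbers are illustrative.

def horton_index(precip, quickflow, baseflow):
    wetting = precip - quickflow          # water that enters the root zone
    vaporization = wetting - baseflow     # water leaving as evapotranspiration
    return vaporization / wetting

print(f"HI = {horton_index(precip=800.0, quickflow=150.0, baseflow=120.0):.2f}")  # mm/yr inputs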
NASA Astrophysics Data System (ADS)
Yan, Hongxiang; Moradkhani, Hamid; Abbaszadeh, Peyman
2017-04-01
Assimilation of satellite soil moisture and streamflow data into hydrologic models has received increasing attention over the past few years. Currently, these observations are increasingly used to improve the model streamflow and soil moisture predictions. However, the performance of this land data assimilation (DA) system still suffers from two limitations: 1) satellite data scarcity and quality; and 2) particle weight degeneration. In order to overcome these two limitations, we propose two possible solutions in this study. First, a general Gaussian geostatistical approach is proposed to overcome the limitation in the space/time resolution of satellite soil moisture products, thus improving their accuracy at uncovered/biased grid cells. Second, an evolutionary PF approach based on Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC, is developed to further reduce weight degeneration and improve the robustness of the land DA system. This study provides a detailed analysis of the joint and separate assimilation of streamflow and satellite soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, with the use of the recently developed EPF-MCMC and the general Gaussian geostatistical approach. Performance is assessed over several basins in the USA selected from the Model Parameter Estimation Experiment (MOPEX) and located in different climate regions. The results indicate that: 1) the general Gaussian approach can predict the soil moisture at uncovered grid cells within the expected satellite data quality threshold; 2) assimilation of satellite soil moisture inferred from the general Gaussian model can significantly improve the soil moisture predictions; and 3) in terms of both deterministic and probabilistic measures, the EPF-MCMC can achieve better streamflow predictions. These results suggest that the geostatistical model is a helpful tool to aid the remote sensing technique and that the EPF-MCMC is a reliable and effective DA approach in hydrologic applications.
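A highly simplified particle-filter update (not the EPF-MCMC of the study; the GA and MCMC move steps are omitted here): particles are weighted by a Gaussian likelihood of a streamflow observation and resampled; the states and error standard deviation are illustrative.

import numpy as np

rng = np.random.default_rng(3)
particles = rng.normal(10.0, 2.0, 500)  # hypothetical streamflow states (m3/s)
obs, obs_sigma = 11.2, 1.0              # observation and assumed error std

log_w = -0.5 * ((particles - obs) / obs_sigma) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

idx = rng.choice(particles.size, size=particles.size, p=w)  # multinomial resampling
posterior = particles[idx]
print(f"prior mean {particles.mean():.2f} -> posterior mean {posterior.mean():.2f}")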
National-Scale Hydrologic Classification & Agricultural Decision Support: A Multi-Scale Approach
NASA Astrophysics Data System (ADS)
Coopersmith, E. J.; Minsker, B.; Sivapalan, M.
2012-12-01
Classification frameworks can help organize catchments exhibiting similarity in hydrologic and climatic terms. Focusing this assessment of "similarity" upon specific hydrologic signatures, in this case the annual regime curve, can facilitate the prediction of hydrologic responses. Agricultural decision-support over a diverse set of catchments throughout the United States depends upon successful modeling of the wetting/drying process without necessitating separate model calibration at every site where such insights are required. To this end, a holistic classification framework is developed to describe both climatic variability (humid vs. arid, winter rainfall vs. summer rainfall) and the draining, storing, and filtering behavior of any catchment, including ungauged or minimally gauged basins. At the national scale, over 400 catchments from the MOPEX database are analyzed to construct the classification system, with over 77% of these catchments ultimately falling into only six clusters. At individual locations, soil moisture models, receiving only rainfall as input, produce correlation values in excess of 0.9 with respect to observed soil moisture measurements. By deploying physical models for predicting soil moisture exclusively from precipitation that are calibrated at gauged locations, overlaying machine learning techniques to improve these estimates, then generalizing the calibration parameters for catchments in a given class, agronomic decision-support becomes available where it is needed rather than only where sensing data are located. (Figure: Classifications of 428 U.S. catchments on the basis of hydrologic regime data, Coopersmith et al., 2012.)
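A minimal sketch of a rainfall-driven bucket model of the kind such a framework might calibrate per class (not the authors' model): storage gains rainfall up to a capacity and loses a fixed drying fraction each step; the capacity and loss rate stand in for parameters that would be calibrated at gauged sites and generalized within a class.

def soil_moisture_series(rain_mm, capacity=120.0, loss_rate=0.04, s0=60.0):
    s, out = s0, []
    for p in rain_mm:
        s = min(capacity, s + p)   # wetting, capped at the storage capacity
        s *= (1.0 - loss_rate)     # drying (ET and drainage lumped together)
        out.append(s)
    return out

print([round(v, 1) for v in soil_moisture_series([0, 12, 0, 0, 25, 0, 0])])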
NASA Astrophysics Data System (ADS)
Singh, S.; Abebe, A.; Srivastava, P.; Chaubey, I.
2017-12-01
Evaluation of the influences of individual and coupled oceanic-atmospheric oscillations on streamflow at a regional scale in the United States is the focus of this study. The main climatic oscillations considered in this study are: El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), Atlantic Multidecadal Oscillation (AMO), and North Atlantic Oscillation (NAO). Streamflow data from the Model Parameter Estimation Experiment (MOPEX) that are unimpacted or minimally impacted by water management were used in this study. Two robust and novel non-parametric tests, namely the rank-based partial least squares (PLS) and the Joint Rank Fit (JRFit) procedures, were used to identify the individual and coupled effects of oscillations on streamflow across the continental U.S. (CONUS). Moreover, the interactive effects of ENSO with decadal and multidecadal cycles were tested and quantified using the JRFit interaction test. The analysis of ENSO indicated higher streamflows during the La Niña phase compared to the El Niño phase in the Northwest, Northeast and the lower part of the Ohio Valley, while the opposite occurs for the rest of the climatic regions in the US. Two distinct climate regions (Northwest and Southeast) were identified from the PDO analysis, where the PDO negative phase results in higher streamflow than the PDO positive phase. Consistently negatively and positively correlated regions across the CONUS were identified for AMO and NAO, respectively. The interaction test of ENSO with decadal and multidecadal oscillations showed that El Niño is modulated by the negative phase of PDO and NAO, and the positive phase of AMO, respectively, in the Upper Midwest. However, La Niña is modulated by the positive phase of AMO and PDO in the Ohio Valley and Northeast, while in the Southeast and the South it is modulated by the AMO negative phase. Results of this study will assist water managers in understanding streamflow change patterns across the CONUS at decadal and multi-decadal time scales. The information derived from this study would help regional water managers forecast regional water availability and develop drought adaptation and mitigation policies by incorporating information based on large-scale ocean-atmospheric cycles.
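A minimal sketch (synthetic data) of testing whether streamflow differs between ENSO phases with a rank-based test; note the study used the rank-based PLS and JRFit procedures rather than this simpler Mann-Whitney comparison.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
flow_la_nina = rng.gamma(3.0, 30.0, 25)  # hypothetical annual flows (mm) in La Nina years
flow_el_nino = rng.gamma(3.0, 25.0, 25)  # hypothetical annual flows (mm) in El Nino years

u, p = stats.mannwhitneyu(flow_la_nina, flow_el_nino, alternative="greater")
print(f"U = {u:.0f}, p = {p:.3f}  (H1: La Nina flows are larger)")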
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large amounts of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, which is a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially, to approximately 25 for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
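A minimal sketch of an elementary-effect (Morris-type) screening step, not the sequential method of the study: perturb one parameter at a time and record the normalized change in a model output; the three-parameter toy model stands in for a NOAH-MP run.

import numpy as np

def model(theta):
    # stand-in for a land surface model run returning, e.g., mean surface runoff
    return theta[0] ** 2 + 0.1 * theta[1] + 0.0 * theta[2]

rng = np.random.default_rng(5)
base = rng.random(3)
delta = 0.1
effects = []
for i in range(base.size):
    pert = base.copy()
    pert[i] += delta
    effects.append(abs(model(pert) - model(base)) / delta)  # one elementary effect per parameter

print(np.round(effects, 3))  # parameter 0 dominates, parameter 2 is inert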
NASA Astrophysics Data System (ADS)
Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois
2015-04-01
Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis of minimisation of the generalization error with ensembles of Neural Network (NN) structures. We used the MLDM to evaluate two different scenarios: (i) ensembles from a same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leak integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture separately performs an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as the objective function. Post-processing of the super-ensemble by Predictor Stepwise Selection (PSS) was done following the method proposed by Brochero et al. (2011). IVS results showed that lagged stream flow, lagged precipitation, and the Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables: they were selected among the first three variables in 66, 45, and 28 of the 72 scenarios, respectively. A relationship between the aridity index (Arora, 2002) and NN performance showed that wet basins are more easily modelled than dry basins. The Nash-Sutcliffe (NS) efficiency criterion was used to evaluate the performance of the models. Test results showed that in 9 of the 12 basins, the mean sub-ensemble performance was better than that presented by Vos (2013). Furthermore, in 55 of 72 cases (6 NN structures x 12 basins) the mean sub-ensemble performance was better than the best individual performance, and in 10 basins the performance of the mean super-ensemble was better than that of the best individual super-ensemble member. We also found that members of the ESN and L-ESN sub-ensembles have very similar and good performance values. Regarding the mean super-ensemble performance, we obtained an average gain in performance of 17%, and found that PSS preserves sub-ensemble members from different NN structures, indicating the pertinence of diversity in the super-ensemble. Moreover, it was demonstrated that around 100 predictors from the different structures are enough to optimize the super-ensemble. Although sub-ensembles of FFNN-SCE showed unstable performances, FFNN-SCE members were picked up several times in the final predictor selection. References Anctil, F., M. Filion, and J. Tournebize (2009). "A neural network experiment on the simulation of daily nitrate-nitrogen and suspended sediment fluxes from a small agricultural catchment". In: Ecol. Model. 220.6, pp. 879-887. Arora, V. K. (2002). "The use of the aridity index to assess climate change effect on annual runoff". In: J. Hydrol.
265, pp. 164-177. Brochero, D., F. Anctil, and C. Gagné (2011). "Simplifying a hydrological ensemble prediction system with a backward greedy selection of members Part 1: Optimization criteria". In: Hydrol. Earth Syst. Sci. 15.11, pp. 3307-3325. Duan, Q., J. Schaake, V. Andréassian, S. Franks, G. Goteti, H. Gupta, Y. Gusev, F. Habets, A. Hall, L. Hay, T. Hogue, M. Huang, G. Leavesley, X. Liang, O. Nasonova, J. Noilhan, L. Oudin, S. Sorooshian, T. Wagener, and E. Wood (2006). "Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops". In: J. Hydrol. 320.12, pp. 3-17. Duan, Q., V. Gupta, and S. Sorooshian (1993). "Shuffled complex evolution approach for effective and efficient global minimization". In: J. Optimiz. Theory App. 76.3, pp. 501-521. Hagan, M. T., H. B. Demuth, and M. Beale (1996). Neural network design. 1st ed. PWS Publishing Co., p. 730. Kuncheva, L. I. (2004). Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, p. 350. Leontaritis, I. and S. Billings (1985). "Input-output parametric models for non-linear systems Part I: deterministic non-linear systems". In: International Journal of Control 41.2, pp. 303-328. Lukosevicius, M. and H. Jaeger (2009). "Reservoir computing approaches to recurrent neural network training". In: Computer Science Review 3.3, pp. 127-149. McKee, T., N. Doesken, and J. Kleist (1993). The Relationship of Drought Frequency and Duration to Time Scales. In: Eighth Conference on Applied Climatology. Vos, N. J. de (2013). "Echo state networks as an alternative to traditional artificial neural networks in rainfall-runoff modelling". In: Hydrol. Earth Syst. Sci. 17.1, pp. 253-267. Weins, T., R. Burton, G. Schoenau, and D. Bitner (2008). Recursive Generalized Neural Networks (RGNN) for the Modeling of a Load Sensing Pump. In: ASME Joint Conference on Fluid Power, Transmission and Control.
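A minimal sketch of the forward stepwise input variable selection described in the abstract above (not the authors' code): greedily add the candidate input that most reduces the mean square error of a linear proxy model; the data are synthetic.

import numpy as np

rng = np.random.default_rng(6)
n = 200
X = rng.normal(size=(n, 5))  # candidate inputs (e.g., lagged flow, lagged precipitation, SPI)
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

def mse_with(cols):
    A = X[:, cols]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ coef) ** 2))

selected = []
for _ in range(3):
    best = min((c for c in range(X.shape[1]) if c not in selected),
               key=lambda c: mse_with(selected + [c]))
    selected.append(best)
print(selected)  # recovers the informative columns 0 and 2 first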
NASA Astrophysics Data System (ADS)
Brochero, Darwin; Anctil, Francois; Gagné, Christian; López, Karol
2013-04-01
In this study, we addressed the application of Artificial Neural Networks (ANN) in the context of Hydrological Ensemble Prediction Systems (HEPS). Such systems have become popular in the past years as a tool to include the forecast uncertainty in the decision making process. HEPS fundamentally considers the uncertainty cascade model [4] for uncertainty representation. Analogously, the machine learning community has proposed models of multiple classifier systems that take into account the variability in datasets, input space, model structures, and parametric configuration [3]. This approach is based primarily on the well-known "no free lunch theorem" [1]. Consequently, we propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). Thus, we promote an ANN prediction stack in which each predictor is trained on input spaces defined by applying IVS to different stratified sub-samples. All this, added to the inherent variability of classical ANN optimization, leads us to our ultimate goal: diversity in the prediction, defined as the complementarity of the individual predictors. The application of stratification to the 12 basins used in this study, which originate from the second and third workshops of the MOPEX project [2], shows that the informativeness of the data is far more important than the quantity used for ANN training. Additionally, the input space variability leads to ANN stacks that outperform an ANN stack model trained with 100% of the available information but with a random selection of the dataset used in the early stopping method (scenario R100P). The results show that from a deterministic view, the main advantage lies in the efficient selection of the training information, which is an equally important concept for the calibration of conceptual hydrological models. On the other hand, the diversity achieved is reflected in a substantial improvement in the scores that define the probabilistic quality of the HEPS. Except for one basin that shows atypical behaviour, and two other basins that represent the difficulty of prediction in semiarid areas, the average gain obtained with the new scheme relative to the R100P scenario is around 8%, 134%, 72%, and 69% for the mean CRPS, the mean ignorance score, the MSE evaluated on the reliability diagram, and the delta ratio, respectively. Note that in all cases, the CRPS is less than the MAE, which indicates that the ensemble of neural networks performs better when taken as a whole than when aggregated into a single averaged predictor. Finally, we consider it appropriate to complement the proposed methodology on two fronts: one deterministic, in which prediction could come from a Bayesian combination, and the second probabilistic, in which score optimization could be based on an "overproduce and select" process. Also, in the case of the basins in semiarid areas, the results found by Vos [5] with echo state networks using the same database analysed in this study lead us to consider the need to include various structures in the ANN stack. References [1] Corne, D. W. and Knowles, J. D.: No free lunch and free leftovers theorems for multiobjective optimisation problems. in Proceedings of the 2nd international conference on Evolutionary multi-criterion optimization, Springer-Verlag, 327-341, 2003.
[2] Duan, Q.; Schaake, J.; Andréassian, V.; Franks, S.; Goteti, G.; Gupta, H.; Gusev, Y.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T. and Wood, E.: Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops, J. Hydrol., 320, 3-17, 2006. [3] Kuncheva, L. I.: Combining Pattern Classifiers: Methods and Algorithms, Wiley-Interscience, 2004. [4] Pappenberger, F., Beven, K. J., Hunter, N. M., Bates, P. D., Gouweleeuw, B. T., Thielen, J., and de Roo, A. P. J.: Cascading model uncertainty from medium range weather forecasts (10 days) through a rainfall-runoff model to flood inundation predictions within the European Flood Forecasting System (EFFS), Hydrol. Earth Syst. Sci., 9, 381-393, 2005. [5] de Vos, N. J.: Reservoir computing as an alternative to traditional artificial neural networks in rainfall-runoff modelling, Hydrol. Earth Syst. Sci. Discuss., 9, 6101-6134, 2012.
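To make the ensemble mechanics above concrete, here is a minimal sketch (not the authors' code) of an ANN prediction stack trained on varying sub-samples and input subsets, scored with the ensemble CRPS. The data, the stratification rule, and the IVS step are all placeholders; only the stack-and-score pattern reflects the abstract.

```python
# Sketch: ANN stack with varying training sub-samples and input subsets + CRPS.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # placeholder predictors (e.g., lagged P, Q)
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

def crps_ensemble(members, obs):
    """Empirical CRPS: E|X - y| - 0.5 E|X - X'|, averaged over time steps."""
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None, :] - members[None, :, :]).mean()
    return term1 - 0.5 * term2

members = []
for k in range(10):                            # one ANN per sub-sample
    idx = rng.choice(len(X), size=300, replace=False)   # stand-in for stratification
    cols = rng.choice(6, size=4, replace=False)         # stand-in for IVS
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=k)
    net.fit(X[idx][:, cols], y[idx])
    members.append(net.predict(X[:, cols]))

members = np.array(members)
print("ensemble CRPS:", crps_ensemble(members, y))
print("MAE of ensemble mean:", np.abs(members.mean(axis=0) - y).mean())
```

The CRPS here is the standard empirical energy form; it is what makes a diverse ensemble score better as a whole than its averaged member, which is the CRPS-versus-MAE comparison the abstract highlights.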
A Four-parameter Budyko Equation for Mean Annual Water Balance
NASA Astrophysics Data System (ADS)
Tang, Y.; Wang, D.
2016-12-01
In this study, a four-parameter Budyko equation for long-term water balance at the watershed scale is derived based on the proportionality relationships of the two-stage partitioning of precipitation. The four-parameter Budyko equation provides a practical solution for balancing model simplicity with the representation of the dominant hydrologic processes. Under the four-parameter Budyko framework, the key hydrologic processes related to the lower bound of the Budyko curve are identified; that is, the lower bound corresponds to the situation in which surface runoff and the initial evaporation that does not compete with base flow generation are both zero. The derived model is applied to 166 MOPEX watersheds in the United States, and the dominant controlling factors on each parameter are determined. Four statistical models are then proposed to predict the four model parameters from the dominant controlling factors, e.g., saturated hydraulic conductivity, fraction of sand, the time period between two storms, watershed slope, and the Normalized Difference Vegetation Index. This study shows a potential application of the four-parameter Budyko equation for constraining land-surface parameterizations in ungauged watersheds or general circulation models.
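The abstract does not reproduce the four-parameter equation itself, so as a hedged illustration the sketch below evaluates the classic one-parameter Fu form of the Budyko curve, which the four-parameter model generalizes; the parameter value is illustrative.

```python
# Fu (1981) Budyko curve: E/P = 1 + phi - (1 + phi**w)**(1/w), aridity phi = PET/P.
import numpy as np

def fu_evaporation_ratio(phi, w=2.6):
    """One-parameter Budyko curve; w is the lumped watershed parameter."""
    return 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)

phi = np.linspace(0.1, 5.0, 50)
print(fu_evaporation_ratio(phi)[:5])   # E/P rises toward its limit as phi grows
```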
Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs
NASA Astrophysics Data System (ADS)
Jayathilake, D. I.; Smith, T. J.
2017-12-01
Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types, and the impact of inputs of different quality on model performance, parameterization, and fidelity, is critically important to improving hydrologic models. In this paper, changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
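The Budyko-framework classification mentioned above reduces to a one-line rule on the aridity index; a minimal sketch (the threshold of 1 and the labels are the standard convention, not taken from the paper):

```python
# Catchments with aridity index PET/P < 1 are energy-limited, > 1 water-limited.
def budyko_class(pet_mm, p_mm):
    ai = pet_mm / p_mm
    return "energy-limited" if ai < 1.0 else "water-limited"

print(budyko_class(pet_mm=700.0, p_mm=1200.0))   # -> energy-limited
```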
NASA Astrophysics Data System (ADS)
Reaver, N.; Kaplan, D. A.; Jawitz, J. W.
2017-12-01
The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy-to-measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows for the prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter, whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g., soil type and depth, vegetation type, timing of rainfall, etc.). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts on catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e., predicted parameters) with those obtained through calibration of the Budyko equation to discharge data (i.e., empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.
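A small sketch of the "empirical parameter" step described above, under the assumption that the one-parameter Fu curve stands in for the Budyko forms discussed (the paper's analytical solutions are not reproduced in the abstract): invert the curve for the watershed parameter given observed long-term E/P and aridity.

```python
# Calibrate Fu's watershed parameter w to an observed evaporation ratio.
from scipy.optimize import brentq

def fu(phi, w):
    return 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)

def fit_w(phi_obs, ep_obs):
    """Solve fu(phi_obs, w) = ep_obs for w; fu is monotone in w on this bracket."""
    return brentq(lambda w: fu(phi_obs, w) - ep_obs, 1.01, 20.0)

print(fit_w(phi_obs=1.4, ep_obs=0.72))   # illustrative values, not MOPEX data
```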
On Budyko curve as a consequence of climate-soil-vegetation equilibrium hypothesis
NASA Astrophysics Data System (ADS)
Pande, S.
2012-04-01
The hypothesis that the Budyko curve is a consequence of stable equilibria of climate-soil-vegetation co-evolution is tested at the biome scale. We assume that (i) the distribution of vegetation, soil and climate within a biome is a distribution of equilibria of similar soil-vegetation dynamics, and that these dynamics differ across biomes, and (ii) soil and vegetation are in dynamic equilibrium with climate while in static equilibrium with each other. To test the hypothesis, a two-stage regression is considered using the MOPEX/Hydrologic Synthesis Project dataset for basins in the eastern United States. In the first stage, a multivariate regression (Seemingly Unrelated Regression) is performed for each biome with soil characteristics (estimated porosity and the slope of the soil water retention curve) and vegetation characteristics (5-week NDVI gradient) as dependent variables, and aridity index, vegetation and soil characteristics as independent variables for the respective dependent variables. The first-stage regression residuals, along with the aridity index, then serve as second-stage independent variables, while the ratio of actual vaporization to precipitation (the vapor index) serves as the dependent variable. Insignificance, if revealed, of a first-stage parameter allows us to reject the role of the corresponding soil or vegetation characteristic in the co-evolution hypothesis. Meanwhile, the significance of a second-stage regression parameter corresponding to a first-stage residual allows us to reject the hypothesis that the Budyko curve is a locus "solely" of climate-soil-vegetation co-evolution equilibrium points. Results suggest a lack of evidence for soil-vegetation co-evolution in Prairies and Mixed/Southeast Forests (unlike in Deciduous Forests), though climate plays a dominant role in explaining within-biome soil and vegetation characteristics across all the biomes. Preliminary results indicate an absence of effects beyond climate-soil-vegetation co-evolution in explaining the ratio of annual total minimum monthly flows to precipitation in Deciduous Forests, though the other three biome types show effects beyond co-evolution. Such an analysis can yield insights into the nature of hydrologic change when assessed along the Budyko curve, as well as into non-co-evolutionary effects, such as anthropogenic effects, on basin-scale annual water balances.
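A rough sketch of the two-stage logic, using ordinary least squares in place of full Seemingly Unrelated Regression and synthetic stand-ins for the MOPEX/Hydrologic Synthesis variables:

```python
# Stage 1: climate -> vegetation; Stage 2: vapor index ~ aridity + stage-1 residual.
import numpy as np

rng = np.random.default_rng(1)
n = 80
aridity = rng.uniform(0.5, 2.0, n)
ndvi_grad = 0.3 * aridity + rng.normal(scale=0.05, size=n)   # stage-1 response
vapor_index = 0.4 * aridity + 0.2 * rng.normal(size=n)

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, y - X1 @ beta

# Stage 1: keep the vegetation variability not explained by climate
_, resid = ols(aridity, ndvi_grad)
# Stage 2: does that residual add explanatory power beyond aridity?
beta2, _ = ols(np.column_stack([aridity, resid]), vapor_index)
print("stage-2 coefficients (const, aridity, stage-1 residual):", beta2)
```

A significant residual coefficient in stage 2 is what, in the paper's logic, rejects the "solely co-evolution" reading of the Budyko curve.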
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.; Gupta, H. V.; Xu, C.
2016-04-01
The flow duration curve (FDC) is a signature catchment characteristic that graphically depicts the relationship between the exceedance probability of streamflow and its magnitude. This curve is relatively easy to create and interpret, and is used widely for hydrologic analysis, water quality management, and the design of hydroelectric power plants (among others). Several mathematical expressions have been proposed to mimic the FDC. Yet, these efforts have not been particularly successful, in large part because the available functions are not flexible enough to accurately portray the functional shape of the FDC for a large range of catchments and contrasting hydrologic behaviors. Here, we extend the work of Vrugt and Sadegh (2013) and introduce several commonly used models of the soil water characteristic as a new class of closed-form parametric expressions for the flow duration curve. These soil water retention functions are relatively simple to use, contain two or three parameters, and closely mimic the empirical FDCs of 430 catchments of the MOPEX data set. We then relate the calibrated parameter values of these models to physical and climatological characteristics of the watershed using multivariate linear regression analysis, and evaluate the regionalization potential of our proposed models against those from the literature. If quality of fit is of main importance, then the 3-parameter van Genuchten model is preferred, whereas the 2-parameter lognormal, 3-parameter GEV, and generalized Pareto models show greater promise for regionalization.
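As a hedged illustration of the core idea, the sketch below fits a van Genuchten-shaped curve directly to normalized flow quantiles against exceedance probability; the paper's exact change of variables is not given in the abstract, and the streamflow series is synthetic.

```python
# Fit a van Genuchten-type curve to an empirical flow duration curve.
import numpy as np
from scipy.optimize import curve_fit

def vg_fdc(p_exceed, a, n):
    """Retention-style curve: high normalized flow at low exceedance probability."""
    m = 1.0 - 1.0 / n
    return (1.0 + (a * p_exceed) ** n) ** (-m)

q = np.sort(np.random.default_rng(2).lognormal(0.0, 1.0, 365))[::-1]
p = np.arange(1, 366) / 366.0                 # Weibull plotting positions
qn = q / q.max()                              # normalize flows to (0, 1]

(a, n), _ = curve_fit(vg_fdc, p, qn, p0=[5.0, 2.0],
                      bounds=([0.01, 1.01], [1000.0, 10.0]))
print(f"fitted a={a:.2f}, n={n:.2f}")
```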
Temporal variation and scaling of parameters for a monthly hydrologic model
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang
2018-03-01
The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models with a common model framework. Two parameters of the monthly model, k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual parameter ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The downscaled parameters yield lower NSE values than time-variant k and m calibrated through SCE-UA, but for several study catchments higher NSE values than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
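A compact sketch of the downscaling step described above, with synthetic data in place of the 121 MOPEX catchments and a linear k-NDVI relation assumed purely for illustration (the authors' functional form is not given in the abstract):

```python
# Regress the monthly parameter k on NDVI, then reconstruct monthly k from NDVI.
import numpy as np

rng = np.random.default_rng(3)
ndvi = rng.uniform(0.2, 0.8, size=12)                       # monthly NDVI
k_calibrated = 0.3 + 0.9 * ndvi + rng.normal(0, 0.03, 12)   # SCE-UA stand-in

A = np.column_stack([np.ones(12), ndvi])
coef, *_ = np.linalg.lstsq(A, k_calibrated, rcond=None)
k_downscaled = A @ coef                                     # monthly k from NDVI alone
rmse = np.sqrt(((k_downscaled - k_calibrated) ** 2).mean())
print("k ~ NDVI coefficients:", coef, " RMSE:", round(rmse, 4))
```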
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiali; Li, Hongyi; Leung, Lai-Yung R.
This paper presents the results of a data-based comparative study of several hundred catchments across the continental United States belonging to the MOPEX dataset, which systematically explored the connection between the flood frequency curve and measures of mean annual water balance. Two different measures of mean annual water balance are used: (i) a climatic aridity index, AI, which is a measure of the competition between water and energy availability at the annual scale; and (ii) the baseflow index, BFI, the ratio of slow runoff to total runoff, also at the annual time scale, reflecting the role of geology, soils, topography and vegetation. The data analyses showed that the aridity index, AI, has a first-order control on both the mean and Cv of annual maximum floods. While the mean annual flood decreases with increasing aridity, Cv increases with increasing aridity. BFI appeared to be a second-order control on the magnitude and shape of the flood frequency curve. Higher BFI, meaning more subsurface flow and less surface flow, leads to a decrease in the mean annual flood, whereas lower BFI leads to accumulation of soil moisture and increased flood magnitudes that arise from many events acting together. The results presented in this paper provide innovative means to delineate homogeneous regions within which flood frequency curves can be assumed to be functionally similar. At another level, understanding the connection between annual water balance and flood frequency is another building block towards developing a comprehensive understanding of catchment runoff behavior in a holistic way.
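The three diagnostics used in the study can be sketched as follows for a daily flow series (synthetic here); the baseflow separation uses the standard Lyne-Hollick one-parameter filter as a stand-in, since the authors' separation method is not stated in the abstract.

```python
# Compute AI, BFI, and the mean and Cv of annual maximum floods.
import numpy as np

rng = np.random.default_rng(4)
q = np.maximum(rng.normal(10, 6, size=10 * 365), 0.1)   # 10 years of daily flow

def baseflow_filter(q, a=0.925):
    """One-parameter recursive digital filter (Lyne & Hollick, 1979)."""
    f = 0.0                                   # quickflow component
    b = np.empty_like(q)
    b[0] = q[0]
    for t in range(1, len(q)):
        f = max(a * f + 0.5 * (1 + a) * (q[t] - q[t - 1]), 0.0)
        b[t] = q[t] - min(f, q[t])
    return b

p_annual, pet_annual = 1000.0, 1300.0
ai = pet_annual / p_annual                    # climatic aridity index
bfi = baseflow_filter(q).sum() / q.sum()      # baseflow index
amax = q.reshape(10, 365).max(axis=1)         # annual maximum series
print(f"AI={ai:.2f}  BFI={bfi:.2f}  mean flood={amax.mean():.1f}  "
      f"Cv={amax.std(ddof=1) / amax.mean():.2f}")
```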
Gloster, Andrew T; Meyer, Andrea H; Witthauer, Cornelia; Lieb, Roselind; Mata, Jutta
2017-09-01
People often overestimate how strongly behaviours and experiences are related. This memory-experience gap might have important implications for health care settings, which often require people to estimate associations, such as "my mood is better when I exercise". This study examines how subjective correlation estimates between health behaviours and experiences relate to calculated correlations from online reports and whether subjective estimates are associated with engagement in actual health behaviour. Seven-month online study on physical activity, sleep, affect and stress, with 61 online assessments. University students (N = 168) retrospectively estimated correlations between physical activity, sleep, positive affect and stress over the seven-month study period. Correlations between experiences and behaviours (online data) were small (r = -.12 to .14), estimated correlations moderate (r = -.35 to .24). Correspondence between calculated and estimated correlations was low. Importantly, estimated correlations of physical activity with stress, positive affect and sleep were associated with actual engagement in physical activity. Estimation accuracy of relations between health behaviours and experiences is low. However, association estimates could be an important predictor of actual health behaviours. This study identifies and quantifies estimation inaccuracies in health behaviours and points towards potential systematic biases in health settings, which might seriously impair intervention efficacy.
Conditions under which Arousal Does and Does Not Elevate Height Estimates
Storbeck, Justin; Stefanucci, Jeanine K.
2014-01-01
We present a series of experiments that explore the boundary conditions for how emotional arousal influences height estimates. Four experiments are presented, which investigated the influence of context, situation-relevance, intensity, and attribution of arousal on height estimates. In Experiment 1, we manipulated the environmental context to signal either danger (viewing a height from above) or safety (viewing a height from below). High arousal only increased height estimates made from above. In Experiment 2, two arousal inductions were used that contained either 1) height-relevant arousing images or 2) height-irrelevant arousing images. Regardless of theme, arousal increased height estimates compared to a neutral group. In Experiment 3, arousal intensity was manipulated by inserting an immediate or a long delay between the induction and height estimates. A brief, but not a long, delay from the arousal induction served to increase height estimates. In Experiment 4, an attribution manipulation was included, and those participants who were made aware of the source of their arousal reduced their height estimates compared to participants who received no attribution instructions. Thus, arousal that is attributed to its true source is discounted from feelings elicited by the height, thereby reducing height estimates. Overall, we suggest that misattributed, embodied arousal is used as a cue when estimating heights from above that can lead to overestimation. PMID:24699393
Šimůnek, Jirka; Nimmo, John R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of the transient experiments. The inverse method was then evaluated by comparing the estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
Cost analysis of life sciences experiments and subsystems. [to be carried in the Spacelab
NASA Technical Reports Server (NTRS)
Yakut, M. M.
1975-01-01
Cost estimates for experiments and subsystems flown in the Spacelab were established. Ten experiments were cost-analyzed. Estimated costs varied from $650,000 for the hardware development of the SPE water electrolysis experiment to $78,500,000 for the development and operation of a representative life sciences laboratory program. The cost of subsystems for thermal, atmospheric, and trace contaminant control of the Spacelab internal atmosphere was also estimated. Subsystem cost estimates were based on the utilization of existing components developed in previous space programs whenever possible.
Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine
Herdtweck, Christian; Wallraven, Christian
2013-01-01
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with an emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. In Experiment 2, stimuli are presented for only a few hundred milliseconds and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies for estimating the horizon by machine and compare human with machine "behavior" for different image manipulations and image scene types. PMID:24349073
Aging persons' estimates of vehicular motion.
Schiff, W; Oldak, R; Shah, V
1992-12-01
Estimated arrival times of moving autos were examined in relation to viewer age, gender, motion trajectory, and velocity. Direct push-button judgments were compared with verbal estimates derived from velocity and distance, which were based on the assumption that perceivers compute arrival time from perceived distance and velocity. Experiment 1 showed that the direct estimates of younger subjects were most accurate. Older women made the shortest (highly cautious) estimates of when cars would arrive. Verbal estimates were much lower than direct estimates, with little correlation between them. Experiment 2 extended the range of target distances and velocities, with the results replicating the main findings of Experiment 1. Judgment accuracy increased with target velocity, and verbal estimates were again poorer estimates of arrival time than direct ones, with different patterns of findings. Using verbal estimates to approximate judgments in traffic situations appears questionable.
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring the averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method for averaging EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy for averaging EC50 estimates from multiple dose-response experiments.
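The meta-analysis strategy, in its simplest fixed-effect form, is a one-liner: average log EC50 estimates with inverse-variance weights. The sketch below uses invented numbers and omits the paper's random-effects and screening details.

```python
# Fixed-effect inverse-variance pooling of per-experiment log10(EC50) estimates.
import numpy as np

log_ec50 = np.array([-6.1, -5.8, -6.3, -6.0])   # per-experiment estimates
se = np.array([0.15, 0.25, 0.10, 0.20])         # per-experiment standard errors

w = 1.0 / se ** 2
pooled = np.sum(w * log_ec50) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled log10(EC50) = {pooled:.2f} +/- {pooled_se:.2f}")
```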
NASA Technical Reports Server (NTRS)
1981-01-01
The performance of the technology exhibited significant proportion estimation errors, specifically high mean errors in both corn and soybean area estimation. The data systems, technical approaches, and data assessment of the pilot experiment were reviewed. Results of proportion estimation procedure performance evaluations and sensitivity evaluations are presented. The role of the pilot experiment in foreign technology development is discussed.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
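The design criterion described above (maximize temperature sensitivities to the unknown properties) can be sketched numerically as a D-optimality search over one design variable; the thermal model below is a toy lumped surrogate, not the authors' conduction model.

```python
# Choose the heating time that maximizes det of the Fisher-type matrix X'X,
# where X holds temperature sensitivities to conductivity k and capacity c.
import numpy as np

def temperature(t, k, c, t_heat, q=1000.0):
    """Toy one-node response to a heat pulse of duration t_heat (illustrative)."""
    tau = c / k
    rise = q / k * (1 - np.exp(-np.minimum(t, t_heat) / tau))
    decay = np.where(t > t_heat, np.exp(-(t - t_heat) / tau), 1.0)
    return rise * decay

def d_optimality(t_heat, k=1.0, c=100.0):
    t = np.linspace(1, 600, 200)
    eps = 1e-4
    dk = (temperature(t, k + eps, c, t_heat) - temperature(t, k - eps, c, t_heat)) / (2 * eps)
    dc = (temperature(t, k, c + eps, t_heat) - temperature(t, k, c - eps, t_heat)) / (2 * eps)
    X = np.column_stack([dk, dc])
    return np.linalg.det(X.T @ X)

candidates = np.linspace(30, 500, 40)
best = candidates[np.argmax([d_optimality(th) for th in candidates])]
print(f"D-optimal heating time ~ {best:.0f} s (toy model)")
```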
Optimal Experiment Design for Magnetic Resonance Fingerprinting
Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.
2017-01-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369
Optimal experiment design for magnetic resonance fingerprinting.
Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L
2016-08-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
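Both records above rest on the same machinery: a Jacobian of the signal model with respect to the parameters, a Fisher information matrix, and its inverse as the CRB. A minimal numerical sketch, with a toy relaxation signal standing in for the MR fingerprinting dictionary:

```python
# CRB for a signal model s(theta) under white Gaussian noise:
# J = (1/sigma^2) S'S with S the Jacobian; CRB = diag(inv(J)).
import numpy as np

def signal(theta, tr):
    m0, t1 = theta
    return m0 * (1.0 - np.exp(-tr / t1))      # toy saturation-recovery signal

def crb(theta, tr, sigma=0.01):
    eps = 1e-6
    S = np.column_stack([
        (signal(theta + np.array([eps, 0]), tr) - signal(theta - np.array([eps, 0]), tr)) / (2 * eps),
        (signal(theta + np.array([0, eps]), tr) - signal(theta - np.array([0, eps]), tr)) / (2 * eps),
    ])
    J = S.T @ S / sigma ** 2
    return np.diag(np.linalg.inv(J))          # lower bounds on parameter variances

tr = np.linspace(0.1, 3.0, 20)                # candidate repetition times (s)
print(crb(np.array([1.0, 1.2]), tr))          # variance bounds for (M0, T1)
```

Scanning the CRB over candidate acquisition-parameter sets, and picking the set with the best bound per unit scan time, is the optimization pattern the abstracts describe.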
NASA Technical Reports Server (NTRS)
Parker, D. E.; Wood, D. L.; Gulledge, W. L.; Goodrich, R. L.
1979-01-01
Two types of experiments concerning the estimated magnitude of self-motion during exposure to linear oscillation on a parallel swing are described in this paper. Experiment I examined changes in magnitude estimation as a function of variation of the subject's head orientation, and Experiments IIa, IIb, and IIc assessed changes in magnitude estimation performance following exposure to sustained, 'intense' linear oscillation (fatigue-inducing stimulation). The subjects' performance was summarized employing Stevens' power law (R = kS^n, where R is perceived self-motion magnitude, k is a constant, S is the amplitude of linear oscillation, and n is an exponent). The results of Experiment I indicated that the exponents, n, for the magnitude estimation functions varied with head orientation and were greatest when the head was oriented 135 deg off the vertical. In Experiments IIa-c, the magnitude estimation function exponents were increased following fatigue. Both types of experiments suggest ways in which the vestibular system's contribution to a spatial orientation perceptual system may vary. This variability may be a contributing factor to the development of pilot/astronaut disorientation and may also be implicated in the occurrence of motion sickness.
Training specificity and transfer in time and distance estimation.
Healy, Alice F; Tack, Lindsay Anderson; Schneider, Vivian I; Barshi, Immanuel
2015-07-01
Learning is often specific to the conditions of training, making it important to identify which aspects of the testing environment are crucial to be matched in the training environment. In the present study, we examined training specificity in time and distance estimation tasks that differed only in the focus of processing (FOP). External spatial cues were provided for the distance estimation task and for the time estimation task in one condition, but not in another. The presence of a concurrent alphabet secondary task was manipulated during training and testing in all estimation conditions in Experiment 1. For distance as well as for time estimation in both conditions, training of the primary estimation task was found to be specific to the presence of the secondary task. In Experiments 2 and 3, we examined transfer between one estimation task and another, with no secondary task in either case. When all conditions were equal aside from the FOP instructions, including the presence of external spatial cues, Experiment 2 showed "transfer" between tasks, suggesting that training might not be specific to the FOP. When the external spatial cues were removed from the time estimation task, Experiment 3 showed no transfer between time and distance estimations, suggesting that external task cues influenced the procedures used in the estimation tasks.
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In Experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in Experiment III using fewer than 200 qAF trials. PMID:25324086
NASA Astrophysics Data System (ADS)
Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den
2016-08-01
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
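For concreteness, here is the CTMI referenced above (Rosso et al., 1993) in its commonly published form, with cardinal temperatures Tmin, Topt, Tmax and optimal rate mu_opt; the example values are illustrative, not taken from the paper.

```python
# Cardinal Temperature Model with Inflection: growth rate as a function of T.
import numpy as np

def ctmi(T, t_min, t_opt, t_max, mu_opt):
    """Rosso et al. (1993) CTMI; zero outside the (t_min, t_max) range."""
    T = np.asarray(T, dtype=float)
    num = (T - t_max) * (T - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (T - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * T))
    mu = mu_opt * num / den
    return np.where((T > t_min) & (T < t_max), mu, 0.0)

T = np.arange(5, 46, 5)
print(ctmi(T, t_min=4.0, t_opt=37.0, t_max=45.0, mu_opt=2.0))
```

Designing an optimal experiment for a pair of these parameters amounts to choosing a temperature profile that maximizes the information about, say, (Tmin, Topt) while the other two are held at their presumed values, which is the two-parameter reduction the abstract describes.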
Observer variability in estimating numbers: An experiment
Erwin, R.M.
1982-01-01
Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.
The moon illusion: I. How high is the sky?
Baird, J C; Wagner, M
1982-09-01
The most common explanations of the moon illusion assume that the moon is seen at a specific distance in the sky, which is perceived as a definite surface. A decrease in the apparent distance to the sky with increasing elevation presumably leads to a corresponding decrease in apparent size. In Experiment 1 observers (N = 24) gave magnitude estimates of the distance to the night sky at different elevations. The results did not support the flattened-dome hypothesis. In Experiment 2 observers (N = 20) gave magnitude estimates of the distance to the sky at points around a 360 degrees circle just above the horizon. The results were consistent with those of Experiment 1, and in addition, estimates were highly correlated with the physical distances of buildings at the horizon. In a third, control experiment, observers (N = 20) gave magnitude estimates of the distances of buildings at the horizon. A power function fit the relation between estimated and physical distance (exponent = 1.17) as well as the relation between estimates of the sky points above the buildings (Experiment 2) and estimates of building distances (exponent = .46). Taken together, the results disconfirm all theories that attribute the moon illusion to a "sky illusion" of the sort exemplified by the flattened-dome hypothesis.
ERIC Educational Resources Information Center
Ramenzoni, Veronica; Riley, Michael A.; Davis, Tehran; Shockley, Kevin; Armstrong, Rachel
2008-01-01
Three experiments investigated the ability to perceive the maximum height to which another actor could jump to reach an object. Experiment 1 determined the accuracy of estimates for another actor's maximal reach-with-jump height and compared these estimates to estimates of the actor's standing maximal reaching height and to estimates of the…
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at the Bure underground research laboratory in Meuse/Haute Marne (France) to characterize diffusion and retention of radionuclides in the Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. The C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having parameters and geometric characteristics similar to the real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer-dependent and vary with time. These sensitivities are useful for identifying which parameters can be estimated with less uncertainty and for finding the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and to ascertain parameter identifiability in the presence of random measurement errors. The identifiability analysis of the synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of the clay and the EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce the synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for the filter, the gap and the EdZ. Estimates of the effective diffusion coefficient (De) and the porosity of the clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and the porosities of the clay and the EdZ is only possible when the standard deviation of the random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. The normalized sensitivities and the identifiability analysis of synthetic experiments provide additional insight into the inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of the real DIR in situ diffusion experiments.
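The normalized dimensionless sensitivities mentioned above are s_p(t) = (p/C) dC/dp; a toy finite-difference sketch (the exponential interval-depletion model is a stand-in for the full borehole model, which the abstract does not specify):

```python
# Normalized sensitivity of interval concentration to De by central differences.
import numpy as np

def conc(t, de, c0=1.0, scale=0.05):
    """Toy tracer depletion in the test interval; 'scale' lumps geometry."""
    return c0 * np.exp(-de * t * scale)

def norm_sensitivity(t, de, rel_step=1e-4):
    dp = de * rel_step
    dcdp = (conc(t, de + dp) - conc(t, de - dp)) / (2 * dp)
    return de * dcdp / conc(t, de)

t = np.linspace(0, 100, 6)                     # days
print(norm_sensitivity(t, de=1.0))             # magnitude grows with time
```

Even this toy reproduces the qualitative point of the abstract: the sensitivity is zero at the start and grows with time, so late-time data carry most of the information about De.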
Birnbaum; Zimmermann
1998-05-01
Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. The effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of the other sources. There was also a configural effect in which the influence of a source's estimate depended on the rank order of that estimate relative to other estimates of the same investment. These interactions were fit with a configural-weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments
Erkoc, Ali; Emiroglu, Esra
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set. PMID:25202738
Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.
Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.
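As a hedged illustration of a ridge-type robust estimator in the spirit of the two records above (not their exact estimators), scikit-learn's HuberRegressor pairs a robust loss against outliers with an L2 penalty whose strength alpha plays the role of the ridge biasing parameter:

```python
# Robust loss (outliers) + L2 penalty (multicollinearity) in one estimator.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(5)
x1 = rng.uniform(0, 1, 60)
x2 = 0.95 * x1 + rng.normal(0, 0.02, 60)        # near-collinear mixture components
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 2.0 * x2 + rng.normal(0, 0.1, 60)
y[:3] += 5.0                                    # inject outliers

for alpha in (1e-4, 1e-1):
    fit = HuberRegressor(alpha=alpha).fit(X, y)
    print(f"alpha={alpha:g} -> coef={fit.coef_.round(2)}")
```

Sweeping alpha and plotting a prediction-error criterion over the design space is the graphical selection step the abstracts propose.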
Estimation of duration and mental workload at differing times of day by males and females
NASA Technical Reports Server (NTRS)
Hancock, P. A.; Rodenburg, G. J.; Mathews, W. D.; Vercruyssen, M.
1988-01-01
Two experiments are reported that investigated whether male and female operators' duration estimation and subjective workload followed conventional circadian fluctuation. In the first experiment, twenty-four subjects performed a filled time-estimation task in a constant blacked-out, noise-reduced environment at 0800, 1200, 1600, and 2000 h. In the second experiment, twelve subjects performed an unfilled time-estimation task in similar conditions at 0900, 1400, and 1900 h. At the termination of all experimental sessions, participants completed the NASA TLX workload assessment questionnaire as a measure of perceived mental workload. Results indicated that while physiological response followed an expected pattern, estimations of duration and subjective perception of workload showed no significant time-of-day effects. In each of the experiments, however, there were significant differences in duration estimates and mental workload response depending upon the gender of the participant. The results are taken to support the assertion that subjective workload responds largely to task-related factors and to indicate the important differences that may be expected due to operator gender.
A Role for Memory in Prospective Timing Informs Timing in Prospective Memory
Waldum, Emily R; Sahakyan, Lili
2014-01-01
Time-based prospective memory (TBPM) tasks require the estimation of time in passing, known as prospective timing. Prospective timing is said to depend on an attentionally driven internal clock mechanism, and is thought to be unaffected by memory for interval information (for reviews see Block, Hancock, & Zakay, 2010; Block & Zakay, 1997). A prospective timing task that required a verbal estimate following the entire interval (Experiment 1) and a TBPM task that required production of a target response during the interval (Experiment 2) were used to test an alternative view that episodic memory does influence prospective timing. In both experiments, participants performed an ongoing lexical decision task of fixed duration while a varying number of songs were played in the background. Experiment 1 results revealed that verbal time estimates became longer the more songs participants remembered from the interval, suggesting that memory for interval information influences prospective time estimates. In Experiment 2, participants who were asked to perform the TBPM task without the aid of an external clock made their target responses earlier as the number of songs increased, indicating that prospective estimates of elapsed time increased as more songs were experienced. For participants who had access to a clock, changes in clock-checking coincided with the occurrence of song boundaries, indicating that participants used both song information and clock information to estimate time. Finally, ongoing task performance and verbal reports in both experiments further substantiate a role for episodic memory in prospective timing. PMID:22984950
Estimating willingness to accept using paired comparison choice experiments: tests of robustness
David C. Kingsley; Thomas C. Brown
2013-01-01
Paired comparison (PC) choice experiments offer researchers and policy-makers an alternative nonmarket valuation method particularly apt when a ranking of the public's priorities across policy alternatives is paramount. Similar to contingent valuation, PC choice experiments estimate the total value associated with a specific environmental good or service. Similar...
A rotor-aerodynamics-based wind estimation method using a quadrotor
NASA Astrophysics Data System (ADS)
Song, Yao; Luo, Bing; Meng, Qing-Hao
2018-02-01
Attempts to estimate the horizontal wind using a quadrotor are reviewed. Wind estimates are obtained by exploiting the change in the quadrotor's thrust caused by the wind's effect on the rotors. The basis of the wind estimation method is the aerodynamic formula for rotor thrust, which is verified and calibrated by experiments. A hardware-in-the-loop simulation (HILS) system was built as a testbed; its dynamic model and control structure are described. Verification experiments on the HILS system showed that the wind estimation method is effective.
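The calibrated rotor-thrust formula itself is not reproduced in the abstract, so the sketch below shows a different, classical model-based alternative (plainly a swap, not the paper's method): in steady hover the horizontal thrust component balances wind drag, giving a tilt-angle wind estimate. All physical parameters are illustrative assumptions.

```python
# Tilt-angle wind estimate in steady hover:
# m*g*tan(tilt) = 0.5*rho*Cd*A*v^2  =>  v = sqrt(m*g*tan(tilt)/(0.5*rho*Cd*A)).
import math

def wind_speed_from_tilt(tilt_rad, m=1.2, rho=1.225, cd=1.0, area=0.05, g=9.81):
    """All parameters (mass, drag coefficient, frontal area) are made up."""
    return math.sqrt(m * g * math.tan(tilt_rad) / (0.5 * rho * cd * area))

print(f"{wind_speed_from_tilt(math.radians(8.0)):.1f} m/s")   # ~7 m/s at 8 deg tilt
```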
Anchoring and Estimation of Alcohol Consumption: Implications for Social Norm Interventions
ERIC Educational Resources Information Center
Lombardi, Megan M.; Choplin, Jessica M.
2010-01-01
Three experiments investigated the impact of anchors on students' estimates of personal alcohol consumption to better understand the role that this form of bias might have in social norm intervention programs. Experiments I and II found that estimates of consumption were susceptible to anchoring effects when an open-answer and a scale-response…
Colloid-Facilitated Transport of 137Cs in Fracture-Fill Material. Experiments and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dittrich, Timothy M.; Reimus, Paul William
2015-10-29
In this study, we demonstrate how a combination of batch sorption/desorption experiments and column transport experiments was used to effectively parameterize a model describing the colloid-facilitated transport of Cs in the Grimsel granodiorite/FFM system. Cs partition coefficient estimates onto both the colloids and the stationary media obtained from the batch experiments were used as initial estimates of partition coefficients in the column experiments, and then the column experiment results were used to obtain refined estimates of the number of different sorption sites and the adsorption and desorption rate constants of the sites. The desorption portion of the column breakthrough curves highlighted the importance of accounting for adsorption-desorption hysteresis (or a very nonlinear adsorption isotherm) of the Cs on the FFM in the model, and this portion of the breakthrough curves also dictated that there be at least two different types of sorption sites on the FFM. In the end, the two-site model parameters estimated from the column experiments provided excellent matches to the batch adsorption/desorption data, which provided a measure of assurance in the validity of the model.
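A minimal sketch of a two-site kinetic sorption model of the kind calibrated above, reduced to batch form (the column transport and the adsorption-desorption hysteresis are omitted); all rate constants are invented for illustration.

```python
# Two-site kinetic sorption in a batch system: aqueous Cs exchanges with a
# fast site (s1) and a slow site (s2), each with its own forward/reverse rates.
from scipy.integrate import solve_ivp

def two_site(t, y, kf1, kr1, kf2, kr2):
    c, s1, s2 = y                        # aqueous, site-1 sorbed, site-2 sorbed
    r1 = kf1 * c - kr1 * s1
    r2 = kf2 * c - kr2 * s2
    return [-(r1 + r2), r1, r2]

sol = solve_ivp(two_site, (0, 200), [1.0, 0.0, 0.0],
                args=(0.05, 0.01, 0.005, 1e-4))
c_end, s1_end, s2_end = sol.y[:, -1]
print(f"aqueous={c_end:.3f}  fast-site={s1_end:.3f}  slow-site={s2_end:.3f}")
```

Making kr2 much smaller than kf2, as here, is one simple way to mimic the near-irreversible slow site that the desorption limbs of the breakthrough curves demanded.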
Sieradzka, Dominika; Power, Robert A; Freeman, Daniel; Cardno, Alastair G; Dudbridge, Frank; Ronald, Angelica
2015-09-01
Occurrence of psychotic experiences is common amongst adolescents in the general population. Twin studies suggest that a third to a half of the variance in adolescent psychotic experiences is explained by genetic influences. Here we test the extent to which common genetic variants account for some of the twin-based heritability. Psychotic experiences were assessed with the Specific Psychotic Experiences Questionnaire in a community sample of 2152 16-year-olds. Self-reported measures of Paranoia, Hallucinations, Cognitive Disorganization, Grandiosity, Anhedonia, and Parent-rated Negative Symptoms were obtained. Estimates of SNP heritability were derived and compared to the twin heritability estimates from the same sample. Three approaches to genome-wide restricted maximum likelihood (GREML) analyses were compared: (1) standard GREML performed on full genome-wide data; (2) GREML stratified by minor allele frequency (MAF); and (3) GREML performed on pruned data. The standard GREML revealed a significant SNP heritability of 20 % for Anhedonia (SE = 0.12; p < 0.046) and an estimate of 19 % for Cognitive Disorganization, which was close to significance (SE = 0.13; p < 0.059). Grandiosity and Paranoia showed modest SNP heritability estimates (17 %; SE = 0.13 and 14 %; SE = 0.13, respectively, both n.s.), and zero estimates were found for Hallucinations and Negative Symptoms. The estimates for Anhedonia, Cognitive Disorganization and Grandiosity accounted for approximately half the previously reported twin heritability. SNP heritability estimates from the MAF-stratified approach were mostly consistent with the standard estimates and offered additional information about the distribution of heritability across the MAF range of the SNPs. In contrast, the estimates derived from the pruned data were for the most part not consistent with the other two approaches. It is likely that the difference seen in the pruned estimates was driven by the loss of tagged causal variants, an issue fundamental to this approach. The current results suggest that common genetic variants play a role in the etiology of some adolescent psychotic experiences; however, further research on larger samples is desirable, and the use of the MAF-stratified approach is recommended.
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions are estimated from the reference's distribution; using that information, the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
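RnaSeqSampleSize itself is an R/Bioconductor package, so the sketch below is not its API; it is a generic Python illustration of the underlying idea: estimate power by simulating negative binomial counts at a candidate sample size. The dispersion, depth, fold change, and test used here are invented placeholders.

```python
# Power by simulation for one gene under a negative binomial count model.
import numpy as np
from scipy import stats

def nb_draw(rng, mean, dispersion, size):
    # NB parameterized by mean and dispersion: var = mean + dispersion * mean^2
    n = 1.0 / dispersion
    p = n / (n + mean)
    return rng.negative_binomial(n, p, size)

def power(n_per_group, mean=100, disp=0.1, fold=2.0, alpha=0.001, reps=2000):
    rng = np.random.default_rng(6)
    hits = 0
    for _ in range(reps):
        a = nb_draw(rng, mean, disp, n_per_group)
        b = nb_draw(rng, mean * fold, disp, n_per_group)
        # crude test on log counts as a stand-in for an exact NB test
        if stats.ttest_ind(np.log1p(a), np.log1p(b)).pvalue < alpha:
            hits += 1
    return hits / reps

print("estimated power at n=5/group:", power(5))
```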
Greening, L; Dollinger, S J; Pitz, G
1996-02-01
Elevated risk judgments for negative life events have been linked to personal experience with events. We tested the hypothesis that cognitive heuristics are the underlying cognitive mechanism for this relation. The availability (i.e., memory for incidents) and simulation (i.e., imagery) heuristics were evaluated as possible mediators for the relation between personal experience and risk estimates for fatal weather events. Adolescents who had experienced weather disasters estimated their personal risk for weather events. Support was obtained for the simulation heuristic (imagery) as a mediator for the relation. Availability for lightning disaster experience was also found to be a mediator for the relation between personal lightning disaster experience and risk estimate for future events. The implications for risk perception research are discussed.
Test Experience Effects in Longitudinal Comparisons of Adult Cognitive Functioning
ERIC Educational Resources Information Center
Salthouse, Timothy
2015-01-01
It is widely recognized that experience with cognitive tests can influence estimates of cognitive change. Prior research has estimated experience effects at the level of groups by comparing the performance of a group of participants tested for the second time with the performance of a different group of participants at the same age tested for the…
The burden of secrecy? No effect on hill slant estimation and beanbag throwing.
Pecher, Diane; van Mierlo, Heleen; Cañal-Bruland, Rouwen; Zeelenberg, René
2015-08-01
Slepian, Masicampo, Toosi, and Ambady (2012, Experiment 1) reported that participants who recalled a big secret estimated a hill as steeper than participants who recalled a small secret. This finding was interpreted as evidence that secrets are experienced as physical burdens. In 2 experiments, we tried to replicate this finding but, despite greater statistical power, did not find a difference in slant estimates between participants who recalled a big secret and those who recalled a small secret. This result was further corroborated by a meta-analysis that included 8 published data sets of exact replications, which indicates that thinking of a big secret does not affect hill slant estimation. In a third experiment, we also failed to replicate the effect of recalling a secret on throwing a beanbag at a target (Slepian et al., 2012, Experiment 2). Together, our findings question the robustness of the original empirical findings. (c) 2015 APA, all rights reserved.
Estimation of the Dose and Dose Rate Effectiveness Factor
NASA Technical Reports Server (NTRS)
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort, which received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are adequately fitted by a linear dose response model, which implies that risks at low doses and dose rates would be estimated the same as at high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction, and animal experiments that directly compare chronic to acute exposures show smaller increases in tumor induction for chronic exposures. A dose and dose rate effectiveness factor (DDREF) has therefore been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as those from missions in space. The BEIR VII committee [1] combined DDREF estimates from the LSS cohort and animal experiments using Bayesian methods to arrive at its recommended DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included additional animal data and human chromosome aberration data to improve the DDREF estimate. Several experiments chosen by BEIR VII were deemed inappropriate for application to human models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
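The synthesis step, combining an LSS-based estimate with animal-based estimates, can be illustrated with a precision-weighted normal posterior on the log scale. This is a simplified sketch under assumed normal likelihoods and a flat prior; the input numbers are placeholders, not the committee's or the authors' values.

```python
# Sketch: Bayesian combination of two independent ln(DDREF) estimates
# under normal likelihoods, in the spirit of BEIR VII's synthesis.
import numpy as np

def combine_normal(means, sds):
    """Precision-weighted posterior for a common mean under a flat prior."""
    w = 1.0 / np.asarray(sds, dtype=float) ** 2
    mu = np.sum(w * np.asarray(means)) / np.sum(w)
    return mu, np.sqrt(1.0 / np.sum(w))

# Hypothetical ln(DDREF) inputs: one from LSS curvature, one from animals.
mu, sd = combine_normal(means=[np.log(1.3), np.log(1.8)], sds=[0.35, 0.25])
lo, hi = np.exp(mu - 1.96 * sd), np.exp(mu + 1.96 * sd)
print(f"posterior median DDREF ~ {np.exp(mu):.2f}, "
      f"95% interval ({lo:.2f}, {hi:.2f})")
```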
Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger
2018-05-10
Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data and, in turn, poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., fewer than 300). We present a method to detect inattention periods on a trial-by-trial basis, with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of the corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent the collection of biased data. This could have important implications for assessment practice by shortening experiments and improving estimates, especially in clinical settings.
Large Area Crop Inventory Experiment (LACIE). Phase 2 evaluation report
NASA Technical Reports Server (NTRS)
1977-01-01
Documentation of the activities of the Large Area Crop Inventory Experiment during the 1976 Northern Hemisphere crop year is presented. A brief overview of the experiment is included, as well as phase two area, yield, and production estimates for the spring and winter wheat regions of the United States Great Plains, Canada, and the Union of Soviet Socialist Republics. The accuracies of these estimates are compared with independent government estimates. Accuracy assessment of the United States Great Plains yardstick region based on a thorough blind-site analysis is given, and reasons for variations in estimating performance are discussed. Other phase two technical activities, including operations, exploratory analysis, reporting, methods of assessment, phase three and advanced system design, technical issues, and developmental activities, are also covered.
Non-contact estimation of heart rate and oxygen saturation using ambient light.
Bal, Ufuk
2015-01-01
We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of RGB color space are used. We used a dual tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g. artificial lighting, movement, etc.). Most previous work on skin-color-based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy for estimating the mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimates and commercial oximeter readings.
Caes, Line; Goubert, Liesbet; Devos, Patricia; Verlooy, Joris; Benoit, Yves; Vervoort, Tine
2017-02-01
Caregivers’ pain estimations may have important implications for pediatric pain management decisions. Affective responses elicited by facing the child in pain are considered key in understanding caregivers’ estimations of pediatric pain experiences. Theory suggests differential influences of sympathy versus personal distress on pain estimations; yet empirical evidence on the impact of caregivers’ feelings of sympathy versus distress upon estimations of pediatric pain experiences is lacking. The current study explored the role of caregiver distress versus sympathy in understanding caregivers’ estimates of the child’s pain experience. Using a prospective design in 31 children undergoing consecutive lumbar punctures and/or bone marrow aspirations at Ghent University Hospital, caregivers’ (i.e., parents, physicians, nurses, and child life specialists) distress and sympathy were assessed before each procedure; estimates of child pain were obtained immediately following each procedure. Results indicated that the child’s level of pain behavior in anticipation of the procedure had a strong influence on all caregivers’ pain estimations. Beyond the impact of child pain behavior, personal distress explained parental and physicians’ estimates of child pain, but not the pain estimates of nurses and child life specialists. Specifically, higher levels of parental and physician distress were related to higher child pain estimates. Caregiver sympathy was not associated with pain estimations. The current findings highlight the important role of caregivers’ felt personal distress when faced with child pain, rather than sympathy, in influencing their pain estimates. Potential implications for pain management are discussed.
ERIC Educational Resources Information Center
Teachman, Jay D.
1995-01-01
Argues that data on siblings provide a way to account for the impact of unmeasured, omitted variables on relationships of interest because families form a sort of natural experiment, with similar experiences and common genetic heritage. Proposes a latent-variable structural equation approach to the problem, which provides estimates of both within-…
Irrigation water demand: A meta-analysis of price elasticities
NASA Astrophysics Data System (ADS)
Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.
2006-01-01
Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
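A meta-regression of this kind is typically a weighted least-squares fit of reported elasticities on study characteristics, with inverse-variance weights. The sketch below uses invented data and illustrative covariates (study method, water price); it is not the 24-study dataset analyzed here.

```python
# Illustrative metaregression: regress reported price elasticities on
# study characteristics, weighting by inverse variance. All data invented.
import numpy as np
import statsmodels.api as sm

elasticity = np.array([-0.2, -0.5, -0.8, -0.4, -0.6, -0.3])
se = np.array([0.10, 0.15, 0.20, 0.12, 0.18, 0.11])
econometric = np.array([0, 1, 1, 0, 1, 0])        # 1 = econometric study
water_price = np.array([20, 45, 60, 30, 55, 25])  # $/acre-foot, assumed

X = sm.add_constant(np.column_stack([econometric, water_price]))
fit = sm.WLS(elasticity, X, weights=1.0 / se**2).fit()
print(fit.params)   # intercept, method effect, price effect
```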
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
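The capacity estimates at issue are conventionally computed from hit and false-alarm rates: Cowan's formula for single-probe displays and Pashler's formula for whole-display designs. A worked example with made-up rates, not data from the study:

```python
# Standard capacity estimates from one-shot change detection.
def cowan_k(set_size, hit_rate, fa_rate):
    """Cowan's K, appropriate for single-probe displays."""
    return set_size * (hit_rate - fa_rate)

def pashler_k(set_size, hit_rate, fa_rate):
    """Pashler's K, appropriate for whole-display designs."""
    return set_size * (hit_rate - fa_rate) / (1.0 - fa_rate)

for n in (4, 8):   # set sizes; H and FA below are illustrative
    print(n, cowan_k(n, 0.80, 0.20), round(pashler_k(n, 0.80, 0.20), 2))
```

If the same observer yields very different K across set sizes under one formula, the task and formula pair is measuring something other than a fixed capacity, which is the reliability problem the abstract describes.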
The role of experience in location estimation: Target distributions shift location memory biases.
Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P
2010-04-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. Copyright 2009 Elsevier B.V. All rights reserved.
Gene–Environment Correlation: Difficulties and a Natural Experiment–Based Strategy
Li, Jiang; Liu, Hexuan; Guo, Guang
2013-01-01
Objectives. We explored how gene–environment correlations can result in endogenous models, how natural experiments can protect against this threat, and if unbiased estimates from natural experiments are generalizable to other contexts. Methods. We compared a natural experiment, the College Roommate Study, which measured genes and behaviors of college students and their randomly assigned roommates in a southern public university, with observational data from the National Longitudinal Study of Adolescent Health in 2008. We predicted exposure to exercising peers using genetic markers and estimated environmental effects on alcohol consumption. A mixed-linear model estimated an alcohol consumption variance that was attributable to genetic markers and across peer environments. Results. Peer exercise environment was associated with respondent genotype in observational data, but not in the natural experiment. The effects of peer drinking and presence of a general gene–environment interaction were similar between data sets. Conclusions. Natural experiments, like random roommate assignment, could protect against potential bias introduced by gene–environment correlations. When combined with representative observational data, unbiased and generalizable causal effects could be estimated. PMID:23927502
Costanza-Robinson, Molly S; Zheng, Zheng; Henry, Eric J; Estabrook, Benjamin D; Littlefield, Malcolm H
2012-10-16
Surfactant miscible-displacement experiments represent a conventional means of estimating air-water interfacial area (A_I) in unsaturated porous media. However, changes in surface tension during the experiment can potentially induce unsaturated flow, thereby altering interfacial areas and violating several fundamental method assumptions, including that of steady-state flow. In this work, the magnitude of surfactant-induced flow was quantified by monitoring moisture content and perturbations to effluent flow rate during miscible-displacement experiments conducted using a range of surfactant concentrations. For systems initially at 83% moisture saturation (S_W), decreases of 18-43% S_W occurred following surfactant introduction, with the magnitude and rate of drainage inversely related to the surface tension of the surfactant solution. Drainage induced by 0.1 mM sodium dodecyl benzene sulfonate, commonly used for A_I estimation, resulted in effluent flow rate increases of up to 27% above steady-state conditions and is estimated to more than double the interfacial area over the course of the experiment. Depending on the surfactant concentration and the moisture content used to describe the system, A_I estimates varied more than 3-fold. The magnitude of surfactant-induced flow is considerably larger than previously recognized and casts doubt on the reliability of A_I estimation by surfactant miscible-displacement.
Inference from single occasion capture experiments using genetic markers.
Hettiarachchige, Chathurika K H; Huggins, Richard M
2018-05-01
Accurate estimation of the size of animal populations is an important task in ecological science. Recent advances in molecular genetics research allow the use of genetic data to estimate the size of a population from a single capture occasion rather than the repeated occasions of the usual capture-recapture experiments. Estimating population size using genetic data has also sometimes led to estimates that differ markedly from each other and from classical capture-recapture estimates. Here, we develop a closed-form estimator that uses genetic information to estimate the size of a population consisting of mothers and daughters, focusing on estimating the number of mothers, using data from a single sample. We demonstrate that the estimator is consistent and propose a parametric bootstrap to estimate the standard errors. The estimator is evaluated in a simulation study and applied to real data. We also consider maximum likelihood in this setting and discover problems that preclude its general use. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
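The parametric-bootstrap pattern the authors propose is generic: fit the model, simulate new datasets from the fitted model, re-estimate, and take the standard deviation of the re-estimates. The sketch below illustrates it with a simple occupancy-style estimator of the number of mothers from one sample of daughters; this stand-in estimator is not the paper's closed-form estimator.

```python
# Single-sample estimation of the number of mothers, plus a parametric
# bootstrap for the standard error. Toy occupancy-style estimator only.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def distinct_mothers(N, m):
    """Sample m daughters; count how many distinct mothers appear."""
    return len(set(rng.integers(0, N, size=m)))

def estimate_N(d, m):
    """Method of moments: solve E[d] = N * (1 - (1 - 1/N)^m) for N."""
    f = lambda N: N * (1.0 - (1.0 - 1.0 / N) ** m) - d
    return brentq(f, d, 1e7)   # at least d mothers must exist

m, true_N = 200, 500
d = distinct_mothers(true_N, m)
N_hat = estimate_N(d, m)

# Parametric bootstrap: simulate from the fitted model, re-estimate.
boot = [estimate_N(distinct_mothers(int(round(N_hat)), m), m)
        for _ in range(500)]
print(f"N_hat = {N_hat:.0f}, bootstrap SE = {np.std(boot):.0f}")
```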
The estimation method on diffusion spot energy concentration of the detection system
NASA Astrophysics Data System (ADS)
Gao, Wei; Song, Zongxi; Liu, Feng; Dan, Lijun; Sun, Zhonghan; Du, Yunfei
2016-09-01
We propose a method to estimate the diffusion spot energy of a detection system. We conducted outdoor observation experiments at Xinglong Observatory using a detection system whose diffusion-spot energy concentration had been estimated (correlation coefficient approximately 0.9926). The aperture of the system is 300 mm and its limiting magnitude is 14.15 Mv. The observation experiments show that the highest detected magnitude of the estimated system is 13.96 Mv, and the average detected magnitude is about 13.5 Mv. The results indicate that this method can efficiently evaluate the diffusion-spot energy concentration level of a detection system.
The Martian: Examining Human Physical Judgments across Virtual Gravity Fields.
Ye, Tian; Qi, Siyuan; Kubricht, James; Zhu, Yixin; Lu, Hongjing; Zhu, Song-Chun
2017-04-01
This paper examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball to hit a target, trigger a ball to hit a target, predict the landing location of a projectile, and estimate the flight duration of a projectile. The first two experiments compared human behavior in the virtual environment with real-world performance reported in the literature. The last two experiments aimed to test the human ability to adapt to novel gravity fields by measuring their performance in trajectory prediction and time estimation tasks. The experiment results show that: 1) based on brief observation of a projectile's initial trajectory, humans are accurate at predicting the landing location even under novel gravity fields, and 2) humans' time estimation in a familiar earth environment fluctuates around the ground truth flight duration, although the time estimation in unknown gravity fields indicates a bias toward earth's gravity.
Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age
Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik
2015-01-01
Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
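The system-identification step described here typically amounts to estimating a frequency response function (FRF) from the platform-rotation input to the sway output via cross- and auto-spectra, after which model parameters are fitted to the FRF. Below is a minimal sketch of that first step with synthetic signals standing in for measured rotation and sway; the IC-model parameter fit itself is omitted.

```python
# Sketch: FRF estimation from stimulus to sway using the H1 estimator
# (cross-spectrum over input auto-spectrum). Signals are synthetic.
import numpy as np
from scipy import signal

fs = 100.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(2)

stim = rng.standard_normal(t.size)           # pseudorandom rotation proxy
# toy "balance" response: low-pass filtered, delayed stimulus plus noise
b, a = signal.butter(2, 1.0, fs=fs)
sway = signal.lfilter(b, a, np.roll(stim, 15)) \
       + 0.1 * rng.standard_normal(t.size)

f, Pxy = signal.csd(stim, sway, fs=fs, nperseg=4096)
_, Pxx = signal.welch(stim, fs=fs, nperseg=4096)
frf = Pxy / Pxx                              # H1 FRF estimate

gain, phase = np.abs(frf), np.unwrap(np.angle(frf))
print(gain[:5], phase[:5])
```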
Wendell R. Haag
2009-01-01
There may be bias associated with mark–recapture experiments used to estimate age and growth of freshwater mussels. Using subsets of a mark–recapture dataset for Quadrula pustulosa, I examined how age and growth parameter estimates are affected by (i) the range and skew of the data and (ii) growth reduction due to handling. I compared predictions...
Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav
2018-02-13
Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE = 32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE = 15.9%, 95% CI 11.8% to 20.0%, d = 0.697, P < 0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE = 24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE = 11.0%, 95% CI 6.5% to 15.5%, d = 0.303, P = 0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
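The target quantities follow directly from Bayes' rule given a test's sensitivity, specificity, and the disease prevalence. A worked example with an assumed 10% prevalence and illustrative test profiles mirroring the three test types:

```python
# PPV and NPV from sensitivity, specificity, and prevalence (Bayes' rule).
def ppv(sens, spec, prev):
    tp = sens * prev                 # true positives per unit population
    fp = (1 - spec) * (1 - prev)     # false positives
    return tp / (tp + fp)

def npv(sens, spec, prev):
    tn = spec * (1 - prev)           # true negatives
    fn = (1 - sens) * prev           # false negatives
    return tn / (tn + fn)

# gold standard vs low-sensitivity vs low-specificity tests (illustrative)
for name, sens, spec in [("gold", 0.99, 0.99),
                         ("low-sens", 0.60, 0.99),
                         ("low-spec", 0.99, 0.60)]:
    print(name, round(ppv(sens, spec, 0.10), 2),
          round(npv(sens, spec, 0.10), 2))
```

Running this shows why clinicians overestimate: even a highly specific test has a PPV well below 1 at low prevalence.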
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
NASA Astrophysics Data System (ADS)
Marik, Thomas; Levin, Ingeborg
1996-09-01
Methane emissions from livestock and agricultural wastes contribute globally more than 30% of the anthropogenic atmospheric methane source. Estimates of this number have been derived from respiration chamber experiments. We determined methane emission rates from a tracer experiment in a modern cow shed hosting 43 dairy cows in their accustomed environment. During a 24-hour period, the concentrations of CH4, CO2, and SF6, a trace gas released at a constant rate into the stable air, were measured. The ratio between the SF6 release rate and the measured SF6 concentration was used to estimate the ventilation rate of the stable air during the course of the experiment. The respective ratio between the CH4 (or CO2) and SF6 concentrations, together with the known SF6 release rate, allows us to calculate the CH4 (and CO2) emissions in the stable. From our experiment we derive a total daily mean CH4 emission of 441 LSTP per cow (9 cows nonlactating), which is about 15% higher than previous estimates for German cows with comparable milk production obtained from respiration chamber experiments. The higher emission in our stable experiment is attributed to the contribution of CH4 released from about 50 m3 of liquid manure present in the cow shed in underground channels. Also considering measurements we made directly on a liquid manure tank, we obtained an estimate of the total CH4 production from manure: the normalized contribution of methane from manure amounts to 12-30% of the direct methane release of a dairy cow during rumination. The total CH4 release per dairy cow, including manure, is 521-530 LSTP CH4 per day.
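The tracer-ratio bookkeeping is simple: the SF6 release rate divided by the measured SF6 concentration gives the ventilation rate, and scaling the release rate by the CH4/SF6 concentration ratio gives the CH4 source strength. A sketch with round illustrative numbers chosen to be plausible for a stable of this size, not the measured values:

```python
# Tracer-ratio arithmetic for a constant-rate SF6 release in a cow shed.
# All input values are illustrative assumptions.
Q_SF6 = 2e-3     # SF6 release rate, L/min
C_SF6 = 10e-9    # SF6 mixing ratio above background (10 ppb)
C_CH4 = 66e-6    # CH4 mixing ratio above background (66 ppm)
n_cows = 43

ventilation = Q_SF6 / C_SF6            # effective airflow, L/min
Q_CH4 = (C_CH4 / C_SF6) * Q_SF6        # total CH4 emission, L/min
per_cow_per_day = Q_CH4 * 60 * 24 / n_cows
print(f"ventilation ~ {ventilation:.3g} L/min, "
      f"CH4 ~ {per_cow_per_day:.0f} L/cow/day")
```

With these inputs the sketch returns roughly 440 L per cow per day, the order of magnitude the abstract reports.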
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.
1991-01-01
The results are presented of a study to identify potential real-time remote computational applications to support monitoring of HRV flight test experiments, along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and databases together with real-time flight data to generate and present high-level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.
Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohayai, Tanaz Angelina; Snopok, Pavel; Neuffer, David
The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
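Nonparametric density estimation here means placing a kernel at every reconstructed muon and evaluating the resulting phase-space density, so that cooling shows up as an increase in core density without assuming a Gaussian beam. A toy sketch with synthetic 2D phase-space samples; the kernel choice and the handling of real MICE tracker data are simplified away.

```python
# Sketch: kernel density estimation of a beam's (x, x') phase-space
# density, comparing a "before" and an "after cooling" sample.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def beam(n, emittance):
    # uncorrelated Gaussian beam; emittance sets the phase-space spread
    return rng.normal(scale=np.sqrt(emittance), size=(2, n))

before, after = beam(5000, emittance=6.0), beam(5000, emittance=4.0)

for label, sample in [("before", before), ("after ", after)]:
    kde = gaussian_kde(sample)               # kernel at every particle
    core = kde(np.zeros((2, 1)))[0]          # density at the beam core
    print(f"{label}: core phase-space density ~ {core:.4f}")
```

The "after" sample occupies less phase space, so its estimated core density is higher, which is the cooling signature the analysis looks for.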
Quantum state estimation when qubits are lost: a no-data-left-behind approach
Williams, Brian P.; Lougovski, Pavel
2017-04-06
We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.
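For intuition, Bayesian mean estimation of a single qubit can be sketched by sampling a prior over the Bloch ball and weighting by the measurement likelihood. The sketch below uses a uniform prior and hypothetical X/Y/Z counts; it illustrates the general idea only, not the paper's hyperspherical parametrization, lost-qubit handling, or closed-form results.

```python
# Sketch: Bayesian mean estimate of a qubit's Bloch vector from
# hypothetical X/Y/Z measurement counts, via prior sampling.
import numpy as np

rng = np.random.default_rng(4)
counts_plus = {"x": 60, "y": 45, "z": 80}   # hypothetical '+1' outcomes
n_shots = {"x": 100, "y": 100, "z": 100}

# uniform samples inside the Bloch ball (rejection sampling)
pts = rng.uniform(-1, 1, size=(200000, 3))
pts = pts[np.einsum("ij,ij->i", pts, pts) <= 1.0]

logL = np.zeros(len(pts))
for i, axis in enumerate("xyz"):
    p = np.clip((1.0 + pts[:, i]) / 2.0, 1e-12, 1 - 1e-12)  # P(+1)
    k, n = counts_plus[axis], n_shots[axis]
    logL += k * np.log(p) + (n - k) * np.log(1.0 - p)

w = np.exp(logL - logL.max())                # unnormalized posterior weights
r_mean = (w[:, None] * pts).sum(axis=0) / w.sum()
print("Bayesian mean Bloch vector:", np.round(r_mean, 3))
```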
The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment
NASA Technical Reports Server (NTRS)
Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.
1990-01-01
The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was assessed using various subsets of the global tracking network established for the first Central and South America (CASA Uno) experiment. It was found that the best results were obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.
Does displayed enthusiasm favour recall, intrinsic motivation and time estimation?
Moè, Angelica
2016-11-01
Displayed enthusiasm has been shown to relate to intrinsic motivation, vitality, and positive affect, but its effects on recall performance and time estimation have not yet been explored. This research aimed at studying the effects of a delivery style characterised by High Enthusiasm (HE) on recall, time estimation, and intrinsic motivation. In line with previous studies, effects on intrinsic motivation were expected. In addition, higher recall and lower time estimations were hypothesised. In two experiments, participants assigned to a HE condition or to a normal reading control condition listened to a narrative and to a descriptive passage. Then, they were asked to rate perceived time, enthusiasm, pleasure, interest, enjoyment and curiosity, before writing a free recall. Experiment 1 showed that in the HE condition, participants recalled more, were more intrinsically motivated, and expressed lower time estimations compared to the control condition. Experiment 2 confirmed the positive effects of HE reading compared to normal reading, using different passages and a larger sample.
Independent Peer Evaluation of the Large Area Crop Inventory Experiment (LACIE): The LACIE Symposium
NASA Technical Reports Server (NTRS)
1978-01-01
Yield models and crop estimate accuracy are discussed within the Large Area Crop Inventory Experiment. The wheat yield estimates in the United States, Canada, and the U.S.S.R. are emphasized. Experimental results, design, system implementation, data processing systems, and applications were considered.
Learning to Detect Error in Movement Timing Using Physical and Observational Practice
ERIC Educational Resources Information Center
Black, Charles B.; Wright, David L.; Magnuson, Curt E.; Brueckner, Sebastian
2005-01-01
Three experiments assessed the possibility that a physical practice participant's ability to render appropriate movement timing estimates may be hindered compared to those who merely observed. Results from these experiments revealed that observers and physical practice participants executed and estimated the overall durations of movement…
Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations
ERIC Educational Resources Information Center
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.
2016-01-01
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Myopia Glasses and Optical Power Estimation: An Easy Experiment
ERIC Educational Resources Information Center
Ribeiro, Jair Lúcio Prados
2015-01-01
Human eye optics is a common high school physics topic and students usually show a great interest during our presentation of this theme. In this article, we present an easy way to estimate a diverging lens' optical power from a simple experiment involving myopia eyeglasses and a smartphone flashlight.
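The estimate rests on the definition of optical power as the reciprocal of the focal length in meters. A one-line worked example with an assumed focal length, not the article's measured data:

```python
# Optical power in diopters from focal length in meters (P = 1/f).
def power_diopters(focal_length_m):
    return 1.0 / focal_length_m

f = -0.50                                  # diverging lens, f = -50 cm (assumed)
print(f"P = {power_diopters(f):.1f} D")    # -> P = -2.0 D
```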
Myers, Teresa A.; Maibach, Edward; Peters, Ellen; Leiserowitz, Anthony
2015-01-01
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of the scientific agreement. The second experiment tested the effect of eliciting respondents’ estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an “estimation and reveal” technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested, and demonstrated that messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed. PMID:25812121
NASA Astrophysics Data System (ADS)
Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven
2016-05-01
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach of estimating the uncertainty from the noise in the artifact-free background can lead to wrong results: a deviation of up to -75% is observed in the presented experiments. In addition, a similarly high deviation is demonstrated with data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region containing the flow sample. Two possible estimation methods are presented.
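One common way to estimate uncertainty directly in the flow region is from two repeated acquisitions of the same flow: the voxelwise difference cancels the true velocity field, and its standard deviation is sqrt(2) times the single-scan uncertainty. A generic sketch of that idea, not necessarily the paper's exact procedure:

```python
# Per-scan uncertainty from the difference of two repeated acquisitions,
# evaluated only inside the flow region. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(5)
true_field = rng.uniform(-1.0, 1.0, size=(64, 64, 64))   # synthetic flow
scan1 = true_field + rng.normal(scale=0.05, size=true_field.shape)
scan2 = true_field + rng.normal(scale=0.05, size=true_field.shape)

mask = np.ones_like(true_field, dtype=bool)  # voxels in the flow sample
sigma = np.std(scan1[mask] - scan2[mask]) / np.sqrt(2.0)
print(f"estimated per-scan uncertainty: {sigma:.4f} (true 0.05)")
```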
Blowers, Paul; Hollingshead, Kyle
2009-05-21
In this work, the global warming potential (GWP) of methylene fluoride (CH2F2), or HFC-32, is estimated through computational chemistry methods. We find our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies gave results superior to all previous heat-of-reaction estimates and most barrier-height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered-rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH4); fluoromethane (CH3F), or HFC-41; and fluoroform (CHF3), or HFC-23], where estimates also compare favorably to experimental values.
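Once the atmospheric lifetime and radiative efficiency are in hand, the GWP bookkeeping is a one-box integral: the absolute GWP of a pulse is RE x tau x (1 - exp(-H/tau)), divided by the reference AGWP of CO2 over the same horizon. The HFC-32 values below are rough literature-style placeholders, not the paper's computed results.

```python
# GWP from lifetime and radiative efficiency via a one-box decay model.
import numpy as np

def agwp(re, tau, horizon):
    """Integrated radiative forcing of a 1 kg pulse, W m^-2 yr kg^-1."""
    return re * tau * (1.0 - np.exp(-horizon / tau))

AGWP_CO2_100 = 9.17e-14   # W m^-2 yr kg^-1, AR5-style reference value
re_hfc32 = 1.2e-11        # W m^-2 per kg in the atmosphere (placeholder)
tau_hfc32 = 5.2           # years (approximate literature lifetime)

print(f"GWP-100 ~ {agwp(re_hfc32, tau_hfc32, 100) / AGWP_CO2_100:.0f}")
```

With these placeholder inputs the ratio lands near the commonly cited HFC-32 GWP-100 of several hundred.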
O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.
2010-01-01
Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.
Large Area Crop Inventory Experiment (LACIE). Phase 1: Evaluation report
NASA Technical Reports Server (NTRS)
1976-01-01
It appears that the Large Area Crop Inventory Experiment over the Great Plains can, with reasonable expectation, be a satisfactory component of a 90/90 production estimator. The area estimator produced more accurate area estimates for the total winter wheat region than for the mixed spring and winter wheat region of the northern Great Plains. The accuracy does appear to degrade somewhat in regions of marginal agriculture where there are small fields and abundant confusion crops. However, these regions also tend to be marginal with respect to wheat production, so increased area estimation errors do not greatly influence the overall production estimation accuracy in the United States. The loss of segments resulting from cloud cover appears to be a random phenomenon that introduces no significant bias into the estimates, although it does increase their variance.
Schneider, Iris K.; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L.
2015-01-01
Previous work suggests that the perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound, and the findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people who think a USB-stick holds important tax information estimate it to be heavier than people who think it holds expired tax information or no information (Experiment 1). Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant information) also estimate the drive to be heavier (Experiments 2A,B). PMID:25620942
Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio
2016-01-01
Information on the precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculations, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and in journals with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of the bootstrap resampling procedure as applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from the population) and bootstrap experiments (repeated resampling from one original sample) were generated and compared. Sample…
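The comparison being made can be reproduced in miniature: generate many true samples from a population to get the Monte Carlo sampling distribution of a regression coefficient, then resample cases from a single sample to get its bootstrap counterpart. A sketch with an invented population (true slope 0.5):

```python
# Monte Carlo vs case-resampling bootstrap for a regression slope.
import numpy as np

rng = np.random.default_rng(6)

def fit_slope(x, y):
    return np.polyfit(x, y, 1)[0]

def draw_sample(n=50):
    x = rng.normal(size=n)
    return x, 2.0 + 0.5 * x + rng.normal(size=n)   # true slope 0.5

# Monte Carlo: repeated sampling from the population
mc = [fit_slope(*draw_sample()) for _ in range(1000)]

# Bootstrap: resampling cases from a single original sample
x0, y0 = draw_sample()
idx = rng.integers(0, len(x0), size=(1000, len(x0)))
bs = [fit_slope(x0[i], y0[i]) for i in idx]

print(f"Monte Carlo SE {np.std(mc):.3f} vs bootstrap SE {np.std(bs):.3f}")
```

Close agreement between the two standard errors is the behavior a well-calibrated bootstrap should show in this setting.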
The Analysis of Completely Randomized Factorial Experiments When Observations Are Lost at Random.
ERIC Educational Resources Information Center
Hummel, Thomas J.
An investigation was conducted of the characteristics of two estimation procedures and corresponding test statistics used in the analysis of completely randomized factorial experiments when observations are lost at random. For one estimator, contrast coefficients for cell means did not involve the cell frequencies. For the other, contrast…
Experience matters: neurologists' perspectives on ALS patients' well-being.
Aho-Özhan, Helena E A; Böhm, Sarah; Keller, Jürgen; Dorst, Johannes; Uttner, Ingo; Ludolph, Albert C; Lulé, Dorothée
2017-04-01
Despite the fatal outcome and progressive loss of physical functioning in amyotrophic lateral sclerosis (ALS), many patients maintain contentment in life. It has been shown that non-professionals tend to underestimate the well-being of patients with ALS, but the professionals' perspective has yet to be studied. In total, 105 neurologists with varying degrees of experience with ALS were included in an anonymous survey. They were asked to estimate the quality of life and depressiveness of ALS patients with artificial ventilation and nutrition. Physicians' estimates were compared with previously reported subjective ratings of ALS patients with life-prolonging measures. Neurologists with substantial experience of ALS and palliative care were able to accurately estimate the depressiveness and quality of life of ALS patients with life-prolonging measures. Less experienced neurologists' estimates differed more from patients' reports. Of all life-prolonging measures, neurologists regarded invasive ventilation as the measure associated with the lowest quality of life and highest depressiveness. Experienced neurologists, as well as neurologists with experience in palliative care, are better able to empathize with patients with a fatal illness such as ALS and to support important decision processes.
Surrogate utility estimation by long-term partners and unfamiliar dyads.
Tunney, Richard J; Ziegler, Fenja V
2015-01-01
To what extent are people able to make predictions about other people's preferences and values? We report two experiments that present a novel method for assessing some of the basic processes in surrogate decision-making, namely surrogate utility estimation. In each experiment participants formed dyads who were asked to assign utilities to health-related items and commodity items, and to predict their partner's utility judgments for the same items. In experiment one we showed that older adults in long-term relationships were able to accurately predict their partner's wishes. In experiment two we showed that younger adults who were relatively unfamiliar with one another were also able to predict other people's wishes. Crucially, we demonstrated that these judgments were accurate even after partialling out each participant's own preferences, indicating that in order to make surrogate utility estimations people engage in perspective-taking rather than simple anchoring and adjustment, suggesting that utility estimation is not the cause of inaccuracy in surrogate decision-making. The data and implications are discussed with respect to theories of surrogate decision-making.
Tuo, Rui; Jeff Wu, C. F.
2016-07-19
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is proposed to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. A numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. The initial CAS estimates, which were made for each month from April through August, were considerably higher than the USDA/SRS estimates. This was attributed to: (1) the practice of considering bare ground as potential wheat and counting it as wheat; (2) overestimation of the wheat proportions in segments having only a small amount of wheat; and (3) the classification of confusion crops as wheat. At the end of the season most of the segments were reworked using improved methods based on experience gained during the season. In particular, new procedures were developed to solve the three problems listed above. These and other improvements used in the rework experiment resulted in at-harvest estimates that were much closer to the USDA/SRS estimates than those obtained during the regular season.
Vedenov, Dmitry; Alhotan, Rashed A; Wang, Runlian; Pesti, Gene M
2017-02-01
Nutritional requirements and responses of all organisms are estimated using various models representing the response to different dietary levels of the nutrient in question. To help nutritionists design experiments for estimating responses and requirements, we developed a simulation workbook using Microsoft Excel. The objective of the present study was to demonstrate the influence of different numbers of nutrient levels, ranges of nutrient levels and replications per nutrient level on the estimates of requirements based on common nutritional response models. The user provides estimates of the shape of the response curve, requirements and other parameters and observation to observation variation. The Excel workbook then produces 1-1000 randomly simulated responses based on the given response curve and estimates the standard errors of the requirement (and other parameters) from different models as an indication of the expected power of the experiment. Interpretations are based on the assumption that the smaller the standard error of the requirement, the more powerful the experiment. The user can see the potential effects of using one or more subjects, different nutrient levels, etc., on the expected outcome of future experiments. From a theoretical perspective, each organism should have some enzyme-catalysed reaction whose rate is limited by the availability of some limiting nutrient. The response to the limiting nutrient should therefore be similar to enzyme kinetics. In conclusion, the workbook eliminates some of the guesswork involved in designing experiments and determining the minimum number of subjects needed to achieve desired outcomes.
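The abstract's core idea, estimating the expected power of a design by simulating many experiments and examining the spread of the fitted requirement, can be sketched outside Excel as well. The Python sketch below uses a broken-line (linear-plateau) response, one of the common nutritional response models; the parameter values, noise level, and candidate designs are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Broken-line (linear-plateau) response: plateau L, slope U below requirement R.
def broken_line(x, L, U, R):
    return np.where(x < R, L - U * (R - x), L)

def simulated_se(levels, reps, n_sim=500, L=100.0, U=40.0, R=0.6, sd=3.0):
    """Standard error of the estimated requirement R for a candidate design."""
    x = np.repeat(levels, reps)
    estimates = []
    for _ in range(n_sim):
        y = broken_line(x, L, U, R) + rng.normal(0, sd, x.size)
        try:
            popt, _ = curve_fit(broken_line, x, y, p0=[L, U, R])
            estimates.append(popt[2])
        except RuntimeError:
            pass  # occasional non-convergence is simply skipped here
    return np.std(estimates)

design_a = np.linspace(0.3, 0.9, 4)   # 4 nutrient levels, 6 reps each
design_b = np.linspace(0.3, 0.9, 8)   # 8 nutrient levels, 3 reps each
print("SE(R), 4 levels x 6 reps:", simulated_se(design_a, 6))
print("SE(R), 8 levels x 3 reps:", simulated_se(design_b, 3))
```

The smaller the simulated standard error, the more powerful the design, which is exactly the comparison the workbook automates.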
Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?
Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.
2010-01-01
This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540
Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi
2016-01-01
The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants. PMID:27304876
Durgin, Frank H; Hajnal, Alen; Li, Zhi; Tonge, Natasha; Stigliani, Anthony
2010-06-01
Whereas most reports of the perception of outdoor hills demonstrate dramatic overestimation, estimates made by adjusting a palm board are much closer to the true hill orientation. We test the dominant hypothesis that palm board accuracy is related to the need for motor action to be accurately guided, and conclude instead that the perceptual experience of palm-board orientation is biased and variable due to poorly calibrated proprioception of wrist flexion. Experiments 1 and 3 show that wrist-flexion palm boards grossly underestimate the orientations of near, reachable surfaces, whereas gesturing with a free hand is fairly accurate. Experiment 2 shows that palm board estimates are much lower than free-hand estimates for an outdoor hill as well. Experiment 4 shows that wrist flexion is biased and noisy compared to elbow flexion, while Experiment 5 shows that small changes in palm board height produce large changes in palm board estimates. Together, these studies suggest that palm boards are biased and insensitive measures. The existing literature arguing that there are two systems in the perception of geographical slant is re-evaluated, and a new theoretical framework is proposed in which a single exaggerated representation of ground-surface orientation guides both action and perception. Copyright 2010 Elsevier B.V. All rights reserved.
Terwilliger, Thomas C; Bunkóczi, Gábor; Hung, Li Wei; Zwart, Peter H; Smith, Janet L; Akey, David L; Adams, Paul D
2016-03-01
A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment are described. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346-358] is used to develop methods for planning a SAD experiment, scaling SAD data sets and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing.
Parameter identification of thermophilic anaerobic degradation of valerate.
Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini
2003-01-01
The considered mathematical model of valerate decomposition contains three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial biomass concentrations. Applying a structural identifiability study, we concluded that simultaneous batch experiments with different initial conditions are necessary to estimate these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was performed by optimizing the sum of the multiple determination coefficients over all measured state variables and all experiments simultaneously. The estimated kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, the confidence interval, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which additional experiments would be needed to improve identifiability. In this article, we discuss kinetic parameter estimation methods.
Assimilative modeling of low latitude ionosphere
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Wang, Chunining; Hajj, George A.; Rosen, I. Gary; Wilson, Brian D.; Mannucci, Anthony J.
2004-01-01
In this paper we present an observation system simulation experiment for modeling the low-latitude ionosphere using a 3-dimensional (3-D) global assimilative ionospheric model (GAIM). The experiment is conducted to test the effectiveness of GAIM with a 4-D variational approach (4DVAR) in estimating the ExB drift and thermospheric wind in the magnetic meridional planes simultaneously for all longitude or local time sectors. The operational Global Positioning System (GPS) satellites and the ground-based global GPS receiver network of the International GPS Service are used in the experiment as the data assimilation source. The optimization of the ionospheric state (electron density) modeling is performed through a nonlinear least-squares minimization process that adjusts the dynamical forces to reduce the difference between the modeled and observed slant total electron content in the entire modeled region. The present experiment for multiple force estimation reinforces our previous assessment made through single-driver estimations conducted for the ExB drift only.
Mapping influenza transmission in the ferret model to transmission in humans
Buhnerkempe, Michael G; Gostic, Katelyn; Park, Miran; Ahsan, Prianna; Belser, Jessica A; Lloyd-Smith, James O
2015-01-01
The controversy surrounding 'gain-of-function' experiments on high-consequence avian influenza viruses has highlighted the role of ferret transmission experiments in studying the transmission potential of novel influenza strains. However, the mapping between influenza transmission in ferrets and in humans is unsubstantiated. We address this gap by compiling and analyzing 240 estimates of influenza transmission in ferrets and humans. We demonstrate that estimates of ferret secondary attack rate (SAR) explain 66% of the variation in human SAR estimates at the subtype level. Further analysis shows that ferret transmission experiments have potential to identify influenza viruses of concern for epidemic spread in humans, though small sample sizes and biological uncertainties prevent definitive classification of human transmissibility. Thus, ferret transmission experiments provide valid predictions of pandemic potential of novel influenza strains, though results should continue to be corroborated by targeted virological and epidemiological research. DOI: http://dx.doi.org/10.7554/eLife.07969.001 PMID:26329460
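As a hedged illustration of the subtype-level analysis, the snippet below regresses human SAR on ferret SAR and reports the fraction of variance explained; the SAR values are invented placeholders, not the study's 240 compiled estimates.

```python
import numpy as np

# Hypothetical subtype-level secondary attack rates (SAR), for illustration only.
ferret_sar = np.array([0.05, 0.20, 0.45, 0.60, 0.80])
human_sar  = np.array([0.02, 0.15, 0.35, 0.50, 0.70])

# Ordinary least-squares fit of human SAR on ferret SAR.
slope, intercept = np.polyfit(ferret_sar, human_sar, 1)
pred = intercept + slope * ferret_sar

# R^2: the fraction of variance in human SAR explained by ferret SAR
# (the paper reports 66% at the subtype level).
ss_res = np.sum((human_sar - pred) ** 2)
ss_tot = np.sum((human_sar - human_sar.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```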
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical-flow-based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
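For readers unfamiliar with LMS regression, here is a minimal self-contained sketch of the random-subset algorithm the abstract refers to: fit exact solutions to many minimal subsets and keep the fit with the smallest median squared residual, so gross outliers (such as erroneous flow vectors) cannot dominate. The data below are synthetic.

```python
import numpy as np

def least_median_of_squares(X, y, n_trials=500, rng=None):
    """Least Median of Squares regression by random subset sampling.

    Repeatedly solves exact fits on minimal random subsets and keeps the
    fit whose *median* squared residual over all points is smallest, so
    up to ~50% outliers cannot dominate the estimate.
    """
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)  # minimal subset
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue  # singular subset, try another
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return best_beta

# Demo: a line with 30% gross outliers; LS is pulled off, LMS is not.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 100)
y[:30] += rng.uniform(5, 10, 30)  # stand-ins for bad flow vectors
print("LS :", np.linalg.lstsq(X, y, rcond=None)[0])
print("LMS:", least_median_of_squares(X, y))
```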
Estimating Returns to Education Using Different Natural Experiment Techniques
ERIC Educational Resources Information Center
Leigh, Andrew; Ryan, Chris
2008-01-01
How much do returns to education differ across different natural experiment methods? To test this, we estimate the rate of return to schooling in Australia using two different instruments for schooling: month of birth and changes in compulsory schooling laws. With annual pre-tax income as our measure of income, we find that the naive ordinary…
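A compact sketch of the two-stage least squares (2SLS) logic behind such natural-experiment estimates, using a synthetic dataset with a binary instrument standing in for, say, month of birth; the data and coefficients are invented.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """2SLS with a single endogenous regressor x and instrument z.

    Stage 1: project schooling x on the instrument (e.g., month of birth).
    Stage 2: regress log income y on the fitted values of x.
    """
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]         # stage 1
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]          # stage 2
    return beta[1]  # return to one extra year of schooling

# Synthetic illustration (not the paper's data): the true return is 0.08.
rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(0, 1, n)                # unobserved confounder
z = rng.integers(0, 2, n).astype(float)      # binary instrument
x = 10 + 2 * z + ability + rng.normal(0, 1, n)   # years of schooling
y = 1.0 + 0.08 * x + 0.1 * ability + rng.normal(0, 0.2, n)  # log income
print("naive OLS :", np.polyfit(x, y, 1)[0])  # biased upward by ability
print("2SLS (IV) :", two_stage_least_squares(y, x, z))
```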
2009-07-23
All simulation experiments were implemented in Matlab and executed on PCs.
USDA-ARS?s Scientific Manuscript database
We develop a robust understanding of the effects of assimilating remote sensing observations of leaf area index and soil moisture (in the top 5 cm) on DSSAT-CSM CropSim-Ceres wheat yield estimates. Synthetic observing system simulation experiments compare the abilities of the Ensemble Kalman Filter...
NASA Technical Reports Server (NTRS)
1980-01-01
A plan is presented for a supplemental experiment to evaluate a sample allocation technique for selecting picture elements from remotely sensed multispectral imagery for labeling in connection with a new crop proportion estimation technique. The method of evaluating an improved allocation and proportion estimation technique is also provided.
Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment
NASA Technical Reports Server (NTRS)
Dixon, T. H.; Wolf, S. Kornreich
1990-01-01
Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469
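A heavily simplified, one-state sketch of the mode-switching idea: a first-order Gauss-Markov disturbance model whose decay coefficient is switched by a crude acceleration-mode test. The thresholds, noise levels, and scalar measurement model below are invented and far simpler than the paper's full ARS filter.

```python
import numpy as np

def filter_disturbance(accel_meas, g=9.81, q=0.05, r=0.2):
    """One-state sketch of a mode-switched disturbance-acceleration filter.

    The disturbance is modeled as a Gauss-Markov process a[k+1] = rho*a[k] + w.
    The decay coefficient rho is switched each sample by a crude
    acceleration-mode test, mimicking the adaptive scheme in the abstract
    (all thresholds here are invented for illustration).
    """
    a_est, p = 0.0, 1.0
    out = []
    for y in accel_meas:
        dev = abs(y - g)
        if dev < 0.1:      # non-acceleration mode: decay estimate quickly
            rho = 0.5
        elif dev < 1.0:    # vibration-acceleration mode
            rho = 0.9
        else:              # sustained-acceleration mode: track persistently
            rho = 0.99
        a_pred, p_pred = rho * a_est, rho**2 * p + q       # predict
        k = p_pred / (p_pred + r)                          # Kalman gain
        a_est = a_pred + k * ((y - g) - a_pred)            # update
        p = (1 - k) * p_pred
        out.append(a_est)
    return np.array(out)

# Demo: gravity plus a 2 m/s^2 burst of sustained acceleration.
t = np.arange(500)
truth = np.where((t > 200) & (t < 350), 2.0, 0.0)
meas = 9.81 + truth + np.random.default_rng(0).normal(0, 0.2, t.size)
est = filter_disturbance(meas)
print("mean error during burst:", np.mean(np.abs(est[201:350] - 2.0)))
```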
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect of the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which in turn necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy follows a building-block approach: first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of increasingly complex systems, such as anisotropic multi-component systems. The estimation methodology is a statistically based method that incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the ASEE summer program. One important aspect involved the calibration of the estimation procedure for estimating the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection, and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used first to estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
NASA Technical Reports Server (NTRS)
Howe, John T.
1991-01-01
Thermochemical relaxation distances behind the strong normal shock waves associated with vehicles entering the Earth's atmosphere upon returning from a manned lunar or Mars mission are estimated. The relaxation distances for a Mars entry are estimated as well, in order to highlight the extent of the relaxation phenomena early in currently envisioned space exploration studies. The thermochemical relaxation length for the Aeroassist Flight Experiment is also considered. These estimates provide an indication as to whether finite relaxation needs to be considered in subsequent detailed analyses. For the Mars entry, relaxation phenomena that are fully coupled to the flow field equations are used. The relaxation-distance estimates can be scaled to flight conditions other than those discussed.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used so as to avoid installation problems. A mathematical model is proposed that estimates the cutting error by computing the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was evaluated in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of a workpiece during the machining process.
Developmental and Individual Differences in Pure Numerical Estimation
ERIC Educational Resources Information Center
Booth, Julie L.; Siegler, Robert S.
2006-01-01
The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…
Virtual parameter-estimation experiments in Bioprocess-Engineering education.
Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes
2006-05-01
Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals; ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to redesign their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculty involved in the two courses are convinced that the experiment environment supports essential learning objectives well.
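A few lines of Python can mimic such a virtual experiment under the common Monod growth model (the parameter values and noise level are invented): the "instructor side" simulates noisy biomass measurements, and the student-side task is to recover the kinetic parameters and examine their standard errors.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Monod growth kinetics, a standard model in bioprocess engineering.
def monod(state, t, mu_max, Ks, Yxs):
    X, S = state                       # biomass, substrate
    mu = mu_max * S / (Ks + S)
    return [mu * X, -mu * X / Yxs]

def simulate(t, mu_max, Ks, Yxs, X0=0.1, S0=10.0):
    """Biomass trajectory for given kinetic parameters."""
    return odeint(monod, [X0, S0], t, args=(mu_max, Ks, Yxs))[:, 0]

# "Virtual experiment": noisy biomass measurements from hidden true parameters.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
true = (0.4, 1.5, 0.5)                 # mu_max, Ks, Yxs (invented)
data = simulate(t, *true) + rng.normal(0, 0.05, t.size)

# Student task: recover the parameters from the noisy virtual data.
popt, pcov = curve_fit(simulate, t, data, p0=[0.3, 1.0, 0.4])
print("estimates :", popt)
print("std errors:", np.sqrt(np.diag(pcov)))
```

Re-running with different initial conditions or sampling times shows how experimental design changes the standard errors, which is the learning objective the environment targets.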
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
Developmental and individual differences in pure numerical estimation.
Booth, Julie L; Siegler, Robert S
2006-01-01
The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model that jointly considers both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols, which introduces large variability across datasets. To address this issue, we design a two-step procedure: 1) first identifying the likely scanning site for each testing subject, and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method using MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true and estimated IQs. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field to our knowledge.
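A toy sketch of the two-step procedure, with synthetic data standing in for the multi-site MRI features and a plain single-kernel SVR as in the paper's second experiment; the blob-shaped site differences and the IQ model are invented.

```python
import numpy as np
from sklearn.svm import SVC, SVR

# Two-step sketch: (1) classify the scanning site of a test subject,
# (2) predict IQ with the regressor trained on that site's data.
rng = np.random.default_rng(0)
n_per_site, n_feat = 60, 20
X_sites, y_sites = [], []
for shift in (0.0, 1.5):                        # two sites, different scanners
    X = rng.normal(shift, 1.0, (n_per_site, n_feat))
    y = 100 + 5 * X[:, 0] + rng.normal(0, 3, n_per_site)  # synthetic "IQ"
    X_sites.append(X)
    y_sites.append(y)

X_all = np.vstack(X_sites)
site_labels = np.repeat([0, 1], n_per_site)

site_clf = SVC().fit(X_all, site_labels)                    # step 1
regs = [SVR().fit(X, y) for X, y in zip(X_sites, y_sites)]  # step 2, per site

x_test = rng.normal(1.5, 1.0, (1, n_feat))      # a new subject from site 1
site = site_clf.predict(x_test)[0]
print("predicted site:", site, "| estimated IQ:", regs[site].predict(x_test)[0])
```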
NASA Technical Reports Server (NTRS)
Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.
1972-01-01
A systematic and standardized procedure is presented for estimating the life cycle costs of solid rocket motor (SRM) booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and the program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
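As a hedged illustration of a cost estimating relationship (CER), the snippet below fits a power-law cost model in log-log space and uses it for a rapid parametric estimate; the weights and costs are invented placeholders, not values from the study contractors.

```python
import numpy as np

# Hypothetical cost-estimating relationship (CER): cost = a * weight^b.
# Fitting in log-log space turns the power law into a linear regression.
weight = np.array([1200, 2500, 4800, 9000, 15000])   # motor inert weight, lb
cost = np.array([3.1, 5.2, 8.0, 12.5, 17.8])         # first-unit cost, $M

b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost = {a:.3f} * weight^{b:.3f}")

# Rapid parametric estimate for a hypothetical 7000-lb configuration:
print(f"estimated cost: ${a * 7000**b:.1f}M")
```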
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
Probabilistic segmentation and intensity estimation for microarray images.
Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro
2006-01-01
We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.
Observations of HF backscatter decay rates from HAARP generated FAI
NASA Astrophysics Data System (ADS)
Bristow, William; Hysell, David
2016-07-01
Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. The creation of FAI in HF ionospheric modification experiments has been studied by a number of authors, who have developed a rich theoretical background. The decay of the irregularities, however, has not been as widely studied, yet it has the potential to provide estimates of natural irregularity diffusion parameters, which are difficult to measure by other means. Hysell et al. [1996] demonstrated using the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP-generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res., 101, 26,981, 1996.
Fisher, S
1985-01-01
Three experiments are reported in which expectancies about performance in stressful conditions by nondepressed and depressed nonclinical populations were examined. The first experiment was concerned with estimates of either errors or response rates made in advance, with regard to the likely competence level of a (hypothetical) person allegedly working in conditions of either loud noise, fatigue, sleep loss, social stress, or incentive. Nondepressed subjects as well as depressed subjects provided negative expectancies. The second experiment involved obtaining an estimate of personal competence in conditions where subjects were instructed that personal performance on the task would be required after the estimate had been provided. Nondepressed subjects differed from depressed subjects in that the estimates of the former were less negative in terms of the magnitude of the estimates provided. A third experiment was designed to see whether the negative expectancies about performance in stress exhibited both by nondepressed and by depressed subjects would be used in making allowances for the competence of a typist on the basis of a typescript allegedly produced under high noise conditions. An unexpected effect was that depressed subjects judged the typist more harshly and failed to make allowance for adverse working conditions in the way that nondepressed subjects did. The results are discussed in terms of the implications for understanding cognitive factors in depression.
Statistical inference from multiple iTRAQ experiments without using common reference standards.
Herbrich, Shelley M; Cole, Robert N; West, Keith P; Schulze, Kerry; Yager, James D; Groopman, John D; Christian, Parul; Wu, Lee; O'Meally, Robert N; May, Damon H; McIntosh, Martin W; Ruczinski, Ingo
2013-02-01
Isobaric tags for relative and absolute quantitation (iTRAQ) is a prominent mass spectrometry technology for protein identification and quantification that is capable of analyzing multiple samples in a single experiment. Frequently, iTRAQ experiments are carried out using an aliquot from a pool of all samples, or "masterpool", in one of the channels as a reference sample standard to estimate protein relative abundances in the biological samples and to combine abundance estimates from multiple experiments. In this manuscript, we show that using a masterpool is counterproductive. We obtain more precise estimates of protein relative abundance by using the available biological data instead of the masterpool and do not need to occupy a channel that could otherwise be used for another biological sample. In addition, we introduce a simple statistical method to associate proteomic data from multiple iTRAQ experiments with a numeric response and show that this approach is more powerful than the conventionally employed masterpool-based approach. We illustrate our methods using data from four replicate iTRAQ experiments on aliquots of the same pool of plasma samples and from a 406-sample project designed to identify plasma proteins that covary with nutrient concentrations in chronically undernourished children from South Asia.
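One way to see why a masterpool channel is unnecessary: relative abundances can be formed by normalizing each spectrum's reporter intensities to their own geometric mean, freeing the channel for another biological sample. The sketch below illustrates this on invented intensities; it is a simplification of the manuscript's statistical model, not its exact estimator.

```python
import numpy as np

# Reporter-ion intensities for one protein: rows = spectra (peptides),
# columns = iTRAQ channels (all biological samples; no masterpool channel).
rng = np.random.default_rng(0)
spectra = rng.lognormal(mean=8, sigma=0.3, size=(12, 8))
spectra[:, 4:] *= 1.6   # hypothetical 1.6-fold higher abundance in samples 5-8

# Within-spectrum normalization by the geometric mean across channels
# removes spectrum-level effects without consuming a reference channel.
log_i = np.log2(spectra)
rel = log_i - log_i.mean(axis=1, keepdims=True)

# Protein-level relative abundance per sample: average over spectra.
protein_rel = rel.mean(axis=0)
print("log2 relative abundances:", np.round(protein_rel, 2))

# These per-sample estimates can then be regressed directly on a numeric
# response (e.g., a nutrient concentration) across multiple experiments.
```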
Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk
2018-01-01
Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent-driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
Leidenfrost Point and Estimate of the Vapour Layer Thickness
ERIC Educational Resources Information Center
Gianino, Concetto
2008-01-01
In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the…
Using respondent uncertainty to mitigate hypothetical bias in a stated choice experiment
Richard C. Ready; Patricia A. Champ; Jennifer L. Lawton
2010-01-01
In a choice experiment study, willingness to pay for a public good estimated from hypothetical choices was three times as large as willingness to pay estimated from choices requiring actual payment. This hypothetical bias was related to the stated level of certainty of respondents. We develop protocols to measure respondent certainty in the context of a choice...
ERIC Educational Resources Information Center
Herek, Gregory M.
2009-01-01
Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or…
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle used to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We then optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of the thermal cycle. Finally, we compare the parameters estimated using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
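A minimal sketch of the identifiability criterion: under additive Gaussian noise, the Fisher information of a scalar parameter is the summed squared sensitivity of the model outputs divided by the noise variance, and maximizing it minimizes the Cramer-Rao bound on estimator variance. The thermal model below is an invented stand-in, not the paper's cell model.

```python
import numpy as np

def fisher_information(profile, theta, sigma=0.05, d=1e-4):
    """Scalar Fisher information under Gaussian measurement noise.

    For measurements y_k = f_k(theta) + N(0, sigma^2), the information is
    I = sum_k (df_k/dtheta)^2 / sigma^2; the sensitivity df_k/dtheta is
    taken by central finite differences. `profile` maps theta -> outputs.
    """
    sens = (profile(theta + d) - profile(theta - d)) / (2 * d)
    return np.sum(sens**2) / sigma**2

# Invented toy response: temperature output proportional to the entropy
# coefficient dU/dT scaled by an applied current profile over a 2-h cycle.
t = np.linspace(0, 2 * 3600, 200)
def thermal_response(dudt):
    current = np.sin(2 * np.pi * t / 3600)   # hypothetical thermal cycle
    return dudt * current * t / t.max()

info = fisher_information(thermal_response, 1e-3)
print("Fisher information:", info)
# Larger I means a smaller attainable variance, var(theta_hat) >= 1/I
# (Cramer-Rao), which is what the optimized thermal cycle maximizes.
```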
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cochran, R.C.
1985-01-01
Procedures used in estimating ruminal particle turnover and diet digestibility were evaluated in a series of independent experiments. Experiments 1 and 2 evaluated the influence of sampling site, mathematical model, and intraruminal mixing on estimates of ruminal particle turnover in beef steers grazing crested wheatgrass or offered ad libitum levels of prairie hay once daily, respectively. Particle turnover rate constants were estimated by intraruminal administration (via rumen cannula) of ytterbium (Yb)-labeled forage, followed by serial collection of rumen digesta or fecal samples. Rumen Yb concentrations were transformed to natural logarithms and regressed on time. The influence of sampling site (rectum versus rumen) on turnover estimates was modified by the model used to fit fecal marker excretion curves in the grazing study. In contrast, estimated turnover rate constants from rumen sampling were smaller (P < 0.05) than rectally derived rate constants, regardless of the fecal model used, when steers were fed once daily. In Experiment 3, in vitro residues subjected to acid or neutral detergent fiber extraction (IVADF and IVNDF), acid detergent fiber incubated in cellulase (ADFIC), and acid detergent lignin (ADL) were evaluated as internal markers for predicting diet digestibility. Both IVADF and IVNDF displayed variable accuracy for prediction of in vivo digestibility, whereas ADL and ADFIC inaccurately predicted digestibility of all diets.
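The rate-constant estimation step described above (natural-log transform of marker concentration regressed on time) is a one-line fit; the sketch below uses invented Yb concentrations.

```python
import numpy as np

# Hypothetical rumen Yb marker concentrations (mg/kg) after pulse dosing.
t = np.array([4, 8, 12, 24, 36, 48], dtype=float)     # hours post-dose
yb = np.array([95, 78, 64, 35, 19, 10], dtype=float)

# First-order turnover: ln(C) = ln(C0) - k*t, so the slope of the
# ln-transformed concentrations regressed on time estimates -k.
slope, intercept = np.polyfit(t, np.log(yb), 1)
k = -slope
print(f"particle turnover rate constant: {k:.3f} /h")
print(f"ruminal particle half-life: {np.log(2) / k:.1f} h")
```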
Ward, B Douglas; Mazaheri, Yousef
2006-12-15
The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
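A standard way to realize such input reconstruction, offered here only as a hedged sketch since the abstract does not give the paper's exact estimator, is Wiener deconvolution in the frequency domain; the IRF shape, noise level, and stimulus below are invented.

```python
import numpy as np

def wiener_deconvolve(y, irf, snr=10.0):
    """Estimate the input stimulus from a BOLD-like signal y = irf * s + noise.

    Frequency-domain Wiener filter: S(f) = Y(f) H*(f) / (|H(f)|^2 + 1/SNR).
    The regularization term keeps frequencies where the IRF carries little
    power from amplifying noise.
    """
    n = len(y)
    H = np.fft.rfft(irf, n)
    Y = np.fft.rfft(y, n)
    S = Y * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(S, n)

# Toy example: a gamma-shaped IRF blurring a boxcar stimulus "message".
t = np.arange(0, 30, 0.5)
irf = (t**5) * np.exp(-t)       # hypothetical hemodynamic impulse response
irf /= irf.sum()
stim = np.zeros(200)
stim[40:60] = 1.0
stim[120:140] = 1.0
rng = np.random.default_rng(0)
y = np.convolve(stim, irf)[:200] + rng.normal(0, 0.02, 200)

s_hat = wiener_deconvolve(y, irf)
print("recovered onset near index 40:", int(np.argmax(s_hat > 0.5)))
```

Comparing `s_hat` against the transmitted `stim` voxel by voxel is the "transmission error rate" view the abstract describes.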
NASA Astrophysics Data System (ADS)
Hamaguchi, Nana; Yamamoto, Keiko; Iwai, Daisuke; Sato, Kosuke
We investigate ambient sensing techniques that recognize a writer's psychological state by measuring the vibrations of handwriting on a desk panel using a piezoelectric contact sensor attached to its underside. In particular, we describe a technique for estimating the subjective difficulty of a question for a student as the ratio of the time spent thinking to the total time spent on the question. Through experiments, we confirm that our technique correctly recognizes from the measured vibration data whether or not a person is writing on paper with an accuracy of over 80%, and that the order of the computed subjective difficulties of three questions coincides with that reported by the subject in 60% of experiments. We also propose a technique to estimate a writer's psychological stress by using the standard deviation of the spectrum of the measured vibration. Results of a proof-of-concept experiment show that the proposed technique correctly estimates whether or not the subject feels stress at least 90% of the time.
Diminishing Adult Egocentrism when Estimating What Others Know
ERIC Educational Resources Information Center
Thomas, Ruthann C.; Jacoby, Larry L.
2013-01-01
People often use what they know as a basis to estimate what others know. This egocentrism can bias their estimates of others' knowledge. In 2 experiments, we examined whether people can diminish egocentrism when predicting for others. Participants answered general knowledge questions and then estimated how many of their peers would know the…
The link between judgments of comparative risk and own risk: further evidence.
Gold, Ron S
2007-03-01
Individuals typically believe that they are less likely than the average person to experience negative events, a phenomenon termed "unrealistic optimism". The direct method of assessing unrealistic optimism employs a question of the form, "Compared with the average person, what is the chance that X will occur to you?". However, it has been proposed that responses to such a question (direct-estimates) are based essentially just on estimates that X will occur to the self (self-estimates). If this is so, any factors that affect one of these estimates should also affect the other. This prediction was tested in two experiments. In each, direct- and self-estimates for an unfamiliar health threat - homocysteine-related heart problems - were recorded. It was found that both types of estimate were affected in the same way by varying the stated probability of having unsafe levels of homocysteine (Study 1, N=149) and varying the stated probability that unsafe levels of homocysteine will lead to heart problems (Study 2, N=111). The results are consistent with the proposal that direct-estimates are constructed just from self-estimates.
Does a Flattened Sky Dome Reduce Perceived Moon Size?
NASA Astrophysics Data System (ADS)
Toskovic, O.
2009-09-01
The aim of this study was to examine the flattened sky dome model as an explanation of the Moon illusion. Two experiments were done in a dark room, in which the distribution of depth cues is the same towards the horizon as towards the zenith. In the first experiment, 14 participants were asked to equalize the perceived distances of three stimuli in three directions (horizontal, tilted 45 degrees, and vertical). In the second experiment, 16 participants were asked to estimate the perceived sizes of three stimuli in the same three directions. For distance estimates, we found differences among the three directions such that, as the head tilts upwards, perceived space is elongated, which is the opposite of a flattened sky dome. For size estimates, we found no difference among the three directions.
In situ diffusion experiment in granite: Phase I
NASA Astrophysics Data System (ADS)
Vilks, P.; Cramer, J. J.; Jensen, M.; Miller, N. H.; Miller, H. G.; Stanchell, F. W.
2003-03-01
A program of in situ experiments, supported by laboratory studies, was initiated to study diffusion in sparsely fractured rock (SFR), with a goal of developing an understanding of diffusion processes within intact crystalline rock. Phase I of the in situ diffusion experiment was started in 1996, with the purpose of developing a methodology for estimating diffusion parameter values. Four in situ diffusion experiments, using a conservative iodide tracer, were performed in highly stressed SFR at a depth of 450 m in the Underground Research Laboratory (URL). The experiments, performed over a 2 year period, yielded rock permeability estimates of 2×10^-21 m^2 and effective diffusion coefficients varying from 2.1×10^-14 to 1.9×10^-13 m^2/s, which were estimated using the MOTIF code. The in situ diffusion profiles reveal a characteristic "dog leg" pattern, with iodide concentrations decreasing rapidly within a centimeter of the open borehole wall. It is hypothesized that this is an artifact of local stress redistribution and creation of a zone of increased constrictivity close to the borehole wall. A comparison of estimated in situ and laboratory diffusivities and permeabilities provides evidence that the physical properties of rock samples removed from high-stress regimes change. As a result of the lessons learnt during Phase I, a Phase II in situ program has been initiated to improve our general understanding of diffusion in SFR.
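A hedged sketch of how an effective diffusivity can be fitted to such a profile, assuming the simplest constant-concentration-boundary solution of the 1-D diffusion equation; the concentration data below are invented, and the real analysis used the MOTIF code rather than this closed form.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

# 1-D diffusion from a constant-concentration borehole wall into the rock:
# C(x, t) = C0 * erfc( x / (2*sqrt(Da*t)) ), with Da the apparent diffusivity.
t = 2 * 365.25 * 24 * 3600.0   # ~2-year experiment duration, in seconds

def profile(x, c0, Da):
    return c0 * erfc(x / (2.0 * np.sqrt(Da * t)))

# Hypothetical iodide profile: relative concentration vs distance from wall (m).
x = np.array([0.001, 0.002, 0.004, 0.006, 0.010])
c = np.array([0.78, 0.57, 0.26, 0.10, 0.005])

popt, _ = curve_fit(profile, x, c, p0=[1.0, 1e-13])
print(f"fitted apparent diffusivity: {popt[1]:.2e} m^2/s")
```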
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
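The two decision functions compared in these experiments, posterior sampling versus posterior maximization, are easy to state concretely. The sketch below builds a discrete bimodal prior, forms a posterior for a noisy observation, and applies both rules; the prior shape and noise model are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete bimodal prior over the possible counts (e.g., number of objects).
values = np.arange(1, 21)
prior = (np.exp(-0.5 * ((values - 5) / 1.5) ** 2)
         + np.exp(-0.5 * ((values - 15) / 1.5) ** 2))
prior /= prior.sum()

def posterior(observation, noise_sd=2.0):
    """Posterior over discrete values given a noisy continuous percept."""
    like = np.exp(-0.5 * ((observation - values) / noise_sd) ** 2)
    post = prior * like
    return post / post.sum()

# The two decision functions the experiments compared:
def estimate_by_sampling(obs):
    return rng.choice(values, p=posterior(obs))   # sample from the posterior

def estimate_by_map(obs):
    return values[np.argmax(posterior(obs))]      # take the posterior maximum

obs = 9.0  # a noisy percept between the two prior modes
print("sampled estimates:", [estimate_by_sampling(obs) for _ in range(5)])
print("MAP estimate     :", estimate_by_map(obs))
```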
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2011 CFR
2011-01-01
... purpose of the experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2014 CFR
2014-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2012 CFR
2012-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2013 CFR
2013-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
DOE Office of Scientific and Technical Information (OSTI.GOV)
den Hollander, J.A.; Ugurbil, K.; Brown, T.R.
Glucose metabolism was followed in suspensions of Saccharomyces cerevisiae by using 13C NMR and 14C radioactive labeling techniques and by Warburg manometer experiments. These experiments were performed for cells grown with various carbon sources in the growth medium, so as to evaluate the effect of catabolite repression. The rate of glucose utilization was most conveniently determined by the 13C NMR experiments, which measured the concentration of (1-13C)glucose, whereas the distribution of end products was determined from the 13C and the 14C experiments. By combining these measurements the flows into the various pathways that contribute to glucose catabolism were estimated, and the effect of oxygen upon glucose catabolism was evaluated. From these measurements, the Pasteur quotient (PQ) for glucose catabolism was calculated to be 2.95 for acetate-grown cells and 1.89 for cells grown on glucose into saturation. The Warburg experiments provided an independent estimate of glucose catabolism. The PQ estimated from Warburg experiments was 2.9 for acetate-grown cells in excellent agreement with the labeled carbon experiments and 4.6 for cells grown into saturation, which did not agree. Possible explanations of these differences are discussed. From these data an estimate is obtained of the net flow through the Embden-Meyerhof-Parnas pathway. The backward flow through fructose-1,6-bisphosphatase (Fru-1,6-P2-ase) was calculated from the scrambling of the 13C label of (1-13C)glucose into the C1 and C6 positions of trehalose. Combining these data allowed us to calculate the net flux through phosphofructokinase (PFK). For acetate-grown cells we found that the relative flow through PFK is a factor of 1.7 faster anaerobically than aerobically.
Objective assessment of operator performance during ultrasound-guided procedures.
Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R
2011-09-01
Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to assess how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to the operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.
Giovannelli, Justin; Curran, Emily
2017-02-01
Issue: Policymakers have sought to improve the shopping experience on the Affordable Care Act’s marketplaces by offering decision support tools that help consumers better understand and compare their health plan options. Cost estimators are one such tool. They are designed to provide consumers a personalized estimate of the total cost--premium, minus subsidy, plus cost-sharing--of their coverage options. Cost estimators were available in most states by the start of the fourth open enrollment period. Goal: To understand the experiences of marketplaces that offer a total cost estimator and the interests and concerns of policymakers from states that are not using them. Methods: Structured interviews with marketplace officials, consumer enrollment assisters, technology vendors, and subject matter experts; analysis of the total cost estimators available on the marketplaces as of October 2016. Key findings and conclusions: Informants strongly supported marketplace adoption of a total cost estimator. Marketplaces that offer an estimator faced a range of design choices and varied significantly in their approaches to resolving them. Interviews suggested a clear need for additional consumer testing and data analysis of tool usage and for sustained outreach to enrollment assisters to encourage greater use of the estimators.
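The total-cost definition quoted above reduces to a one-line calculation; the plan figures below are hypothetical:

```python
def total_annual_cost(monthly_premium, monthly_subsidy, expected_cost_sharing):
    # Total cost as defined above: premium, minus subsidy, plus expected
    # out-of-pocket cost-sharing (all annualized)
    return 12 * (monthly_premium - monthly_subsidy) + expected_cost_sharing

# Hypothetical plan: $450/month premium, $300/month tax credit, and
# $2,800 expected deductible/copay spending for this consumer's usage
print(total_annual_cost(450.0, 300.0, 2800.0))  # 4600.0
```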
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.
Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin
2010-05-18
The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
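A simplified sketch of the regression idea described above (not the vFitness implementation, which additionally models measurement error and dilution factors); the competition counts are hypothetical:

```python
import numpy as np

# Hypothetical growth-competition data: variant counts at each sampling time
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # days
n_mut = np.array([480., 610., 820., 1010., 1350.])
n_ref = np.array([520., 540., 560., 575., 600.])

# Under exponential competition, log(n_mut/n_ref) is linear in time with
# slope equal to the net growth-rate difference between the variants
y = np.log(n_mut / n_ref)
slope, intercept = np.polyfit(t, y, 1)            # uses ALL time points,
print(f"relative fitness ~ {slope:.3f} per day")  # not a two-point calculation
```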
ERIC Educational Resources Information Center
Shin, Hye Sook
2009-01-01
Using data from a nationwide, large-scale experimental study of the effects of a connected classroom technology on student learning in algebra (Owens et al., 2004), this dissertation focuses on challenges that can arise in estimating treatment effects in educational field experiments when samples are highly heterogeneous in terms of various…
Exploiting Non-sequence Data in Dynamic Model Learning
2013-10-01
For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum- likelihood method gets entangled in
NASA Astrophysics Data System (ADS)
Smallwood, John R.
2018-01-01
Charles Hutton suggested in 1821 that the pyramids of Egypt be used to site an experiment to measure the deflection of the vertical by a large mass. The suggestion arose as he had estimated the attraction of a Scottish mountain as part of Nevil Maskelyne's (1774) "Schiehallion Experiment", a demonstration of Isaac Newton's law of gravitational attraction and the earliest reasonable quantitative estimate of Earth's mean density. I present a virtual realization of an experiment at the Giza pyramids to investigate how Hutton's concept might have emerged had it been undertaken as he suggested. The attraction of the Great Pyramid would have led to inward north-south deflections of the vertical totalling 1.8 arcsec (0.0005°), and east-west deflections totalling 2.0 arcsec (0.0006°), which although small, would have been within the contemporaneous detectable range, and potentially given, as Hutton wished, a more accurate Earth density measurement than he reported from the Schiehallion experiment.
Towards reliable ET estimates in the semi-arid Júcar region in Spain.
NASA Astrophysics Data System (ADS)
Brenner, Johannes; Zink, Matthias; Schrön, Martin; Thober, Stephan; Rakovec, Oldrich; Cuntz, Matthias; Merz, Ralf; Samaniego, Luis
2017-04-01
Current research has indicated the potential for improving evapotranspiration (ET) estimates in state-of-the-art hydrologic models such as the mesoscale Hydrological Model (mHM, www.ufz.de/mhm). Most models exhibit deficiencies in estimating the ET flux in semi-arid regions. Possible reasons for poor performance may be related to the low resolution of the forcings, the estimation of PET, which is in most cases based on temperature only, the joint estimation of transpiration and evaporation through the Feddes equation, and poor process parameterizations, among others. In this study, we aim at sequential hypothesis-based experiments to uncover the main reasons for these deficiencies in the Júcar basin in Spain. We plan the following experiments: 1) Use the high-resolution meteorological forcing (P and T) provided by local authorities to estimate its effects on ET and streamflow. 2) Use local ET measurements at seven eddy-covariance stations to estimate evaporation-related parameters. 3) Test the influence of the PET formulations (Hargreaves-Samani, Priestley-Taylor, Penman-Monteith). 4) Estimate evaporation and transpiration separately based on equations proposed by Bohn and Vivoni (2016). 5) Incorporate local soil moisture measurements to re-estimate ET and soil moisture related parameters. We set up mHM for seven eddy-covariance sites at the local scale (100 × 100 m²). This resolution was chosen because it is representative of the footprint of the latent heat estimation at the eddy-covariance station. In the second experiment, for example, a parameter set is to be found as a compromise solution between ET measured at local stations and the streamflow observations at eight sub-basins of the Júcar river. Preliminary results indicate that higher model performance regarding streamflow can be achieved using local high-resolution meteorology. ET performance is, however, still deficient. On the contrary, using ET site calibrations alone increases performance in ET but yields poor performance in streamflow. Results suggest the need for multi-variable, simultaneous calibration schemes to reliably estimate ET and streamflow in the Júcar basin. Penman-Monteith appears to be the best performing PET formulation. Experiments 4 and 5 should reveal the benefits of separating evaporation from bare soil and transpiration in semi-arid regions using mHM. Further research in this direction is foreseen by incorporating neutron counts from Cosmic Ray Neutron Sensing technology in the calibration/validation procedure of mHM.
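For reference, the simplest of the three PET formulations mentioned above is temperature-based; a minimal sketch of the Hargreaves-Samani form, assuming extraterrestrial radiation is supplied as equivalent evaporation (inputs are hypothetical):

```python
import math

def pet_hargreaves_samani(t_mean, t_max, t_min, ra_mm_per_day):
    # Hargreaves-Samani reference evapotranspiration (mm/day);
    # ra_mm_per_day is extraterrestrial radiation as equivalent evaporation
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical summer day in the Jucar region
print(f"{pet_hargreaves_samani(26.0, 33.0, 19.0, 16.5):.1f} mm/day")
```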
Improving chemical species tomography of turbulent flows using covariance estimation.
Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J
2017-05-01
Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of this additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
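A minimal sketch of a Bayesian (MAP) reconstruction with a joint-normal prior, in the spirit described above; the projection matrix, noise level and prior covariance below are synthetic stand-ins:

```python
import numpy as np

def bayesian_reconstruction(A, b, sigma, mu, Gamma):
    # MAP estimate for b = A x + noise with joint-normal prior N(mu, Gamma)
    H = A.T @ A / sigma**2 + np.linalg.inv(Gamma)
    g = A.T @ (b - A @ mu) / sigma**2
    return mu + np.linalg.solve(H, g)

# Tiny limited-data example: 4 beams through a 3x3 domain (9 unknowns)
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (4, 9))                 # path-length matrix
x_true = rng.uniform(0.5, 1.5, 9)
b = A @ x_true + rng.normal(0.0, 0.01, 4)
mu = np.full(9, 1.0)
# A flow-informed spatial covariance would go here; identity is a placeholder
Gamma = 0.25 * np.eye(9)
print(bayesian_reconstruction(A, b, 0.01, mu, Gamma))
```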
Shear wave speed estimation by adaptive random sample consensus method.
Lin, Haoming; Wang, Tianfu; Chen, Siping
2014-01-01
This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, the simulation and phantom experiment results were compared using linear regression with all points (LRWAP) and the radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimates are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggested that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
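A rough sketch of the idea: a RANSAC-style fit of arrival time versus lateral position that scores candidate lines by their closest points rather than by a preset residual threshold (a simplification of ARANDSAC); the data are synthetic:

```python
import numpy as np

def ransac_speed(x_mm, t_ms, n_iter=500, inlier_frac=0.6, seed=0):
    # Returns shear wave speed (m/s) as the inverse slope of the best line
    rng = np.random.default_rng(seed)
    n_keep = int(inlier_frac * len(x_mm))
    best_err, best_slope = np.inf, None
    for _ in range(n_iter):
        i, j = rng.choice(len(x_mm), 2, replace=False)
        if x_mm[i] == x_mm[j]:
            continue
        slope = (t_ms[i] - t_ms[j]) / (x_mm[i] - x_mm[j])    # ms per mm
        resid = np.abs(t_ms - (t_ms[i] + slope * (x_mm - x_mm[i])))
        err = np.sort(resid)[:n_keep].sum()      # closest-distance score
        if err < best_err:
            best_err, best_slope = err, slope
    return 1.0 / best_slope                      # mm/ms == m/s

x = np.linspace(0.0, 10.0, 21)                   # lateral position (mm)
t = x / 2.5 + np.random.default_rng(1).normal(0, 0.02, x.size)  # 2.5 m/s wave
t[5] += 1.5                                      # one outlier
print(f"{ransac_speed(x, t):.2f} m/s")
```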
Sensorless position estimator applied to nonlinear IPMC model
NASA Astrophysics Data System (ADS)
Bernat, Jakub; Kolota, Jakub
2016-11-01
This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless mode considering only current feedback. This work takes into account nonlinearities caused by electrochemical effects in the material. Owing to the recent observer design technique, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments. The research comprises time domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, which is illustrated by the experiments.
NASA Astrophysics Data System (ADS)
Suzuki, Yuki; Fung, George S. K.; Shen, Zeyang; Otake, Yoshito; Lee, Okkyun; Ciuffo, Luisa; Ashikaga, Hiroshi; Sato, Yoshinobu; Taguchi, Katsuyuki
2017-03-01
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that could degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. The experiment using a synthesized digital phantom showed promising results for motion analysis.
A mass-density model can account for the size-weight illusion.
Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
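For the special case of independent noise, the reliability-weighted average described above looks as follows (the paper's correlated-noise model modifies the optimal weights); the numbers are hypothetical:

```python
def combined_heaviness(mass_est, density_est, var_mass, var_density):
    # Maximum-likelihood cue combination with independent noise:
    # weights proportional to the reciprocal variances (reliabilities)
    w_mass = (1 / var_mass) / (1 / var_mass + 1 / var_density)
    return w_mass * mass_est + (1 - w_mass) * density_est

# Better volume information -> more reliable density estimate -> judgment
# pulled further toward the density cue
print(combined_heaviness(10.0, 14.0, var_mass=1.0, var_density=4.0))  # 10.8
print(combined_heaviness(10.0, 14.0, var_mass=1.0, var_density=1.0))  # 12.0
```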
MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge. PMID:25822851
Jin, Wen; Jiang, Hai; Liu, Yimin; Klampfl, Erica
2017-01-01
Discrete choice experiments have been widely applied to elicit behavioral preferences in the literature. In many of these experiments, the alternatives are named alternatives, meaning that they are naturally associated with specific names. For example, in a mode choice study, the alternatives can be associated with names such as car, taxi, bus, and subway. A fundamental issue that arises in stated choice experiments is whether to treat the alternatives' names as labels (that is, labeled treatment), or as attributes (that is, unlabeled treatment) in the design as well as the presentation phases of the choice sets. In this research, we investigate the impact of labeled versus unlabeled treatments of alternatives' names on the outcome of stated choice experiments, a question that has not been thoroughly investigated in the literature. Using results from a mode choice study, we find that the labeled or the unlabeled treatment of alternatives' names in either the design or the presentation phase of the choice experiment does not statistically affect the estimates of the coefficient parameters. We then proceed to measure the influence toward the willingness-to-pay (WTP) estimates. By using a random-effects model to relate the conditional WTP estimates to the socioeconomic characteristics of the individuals and the labeled versus unlabeled treatments of alternatives' names, we find that: a) Given the treatment of alternatives' names in the presentation phase, the treatment of alternatives' names in the design phase does not statistically affect the estimates of the WTP measures; and b) Given the treatment of alternatives' names in the design phase, the labeled treatment of alternatives' names in the presentation phase causes the corresponding WTP estimates to be slightly higher.
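For reference, conditional WTP estimates of the kind analyzed above are typically formed as coefficient ratios from the fitted choice model; the coefficients below are hypothetical:

```python
def willingness_to_pay(beta_attribute, beta_cost):
    # WTP for a unit increase in an attribute in a linear-in-parameters
    # utility model: the negative ratio of attribute and cost coefficients
    return -beta_attribute / beta_cost

# Hypothetical conditional-logit estimates: comfort +0.50, cost -0.40 per $
print(willingness_to_pay(0.50, -0.40))  # 1.25 -> $1.25 per unit of comfort
```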
Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor
Tanno, Koichi
2017-01-01
A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with changes in eye movements. The experiments in this study are of two types: experiments in which objects were viewed through eye movements only, and experiments in which objects were viewed through combined face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor. PMID:28912800
Sensitivity and systematics of calorimetric neutrino mass experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucciotti, A.; Cremonesi, O.; Ferri, E.
2009-12-16
A large calorimetric neutrino mass experiment using thermal detectors is expected to play a crucial role in the challenge of directly assessing the neutrino mass. We discuss and compare here two approaches for the estimation of the experimental sensitivity of such an experiment. The first method uses an analytic formulation and allows a close estimate to be obtained readily over a wide range of experimental configurations. The second method is based on a Monte Carlo technique and is more precise and reliable. The Monte Carlo approach is then exploited to study some sources of systematic uncertainties peculiar to calorimetric experiments. Finally, the tools are applied to investigate the optimal experimental configuration of the MARE project.
Varieties of quantity estimation in children.
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-06-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3 children were asked to map continuous quantities, discrete nonsymbolic quantities (numerosities), and symbolic (Arabic) numbers onto a visual line. Numerical quantity was matched for the symbolic and discrete nonsymbolic conditions, whereas cumulative surface area was matched for the continuous and discrete quantity conditions. Crucially, in the discrete condition children's estimation could rely on either the cumulative area or the numerosity. All children showed a linear mapping for continuous quantities, whereas a developmental shift from a logarithmic to a linear mapping was observed for both nonsymbolic and symbolic numerical quantities. Analyses on individual estimates suggested the presence of two distinct strategies in estimating discrete nonsymbolic quantities: one based on numerosity and the other based on spatial extent. In Experiment 2, a non-spatial continuous quantity (shades of gray) and new discrete nonsymbolic conditions were added to the set used in Experiment 1. Results confirmed the linear patterns for the continuous tasks, as well as the presence of a subset of children relying on numerosity for the discrete nonsymbolic numerosity conditions despite the availability of continuous visual cues. Overall, our findings demonstrate that estimation of numerical and non-numerical quantities is based on different processing strategies and follows different developmental trajectories. (c) 2015 APA, all rights reserved.
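The logarithmic-versus-linear classification used above is commonly made by comparing fits of the two mappings to each child's estimates; a sketch with hypothetical data:

```python
import numpy as np

# Hypothetical number-to-position estimates on a 0-100 line (compressive)
presented = np.array([2, 5, 10, 20, 35, 50, 65, 80, 95], dtype=float)
estimated = np.array([18, 30, 42, 55, 63, 70, 78, 88, 97], dtype=float)

def r_squared(x, y):
    # Variance explained by the best-fitting line of y on x
    yhat = np.polyval(np.polyfit(x, y, 1), x)
    return 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

print(f"linear fit      R^2 = {r_squared(presented, estimated):.3f}")
print(f"logarithmic fit R^2 = {r_squared(np.log(presented), estimated):.3f}")
```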
Mean size estimation yields left-side bias: Role of attention on perceptual averaging.
Li, Kuei-An; Yeh, Su-Ling
2017-11-01
The human visual system can estimate the mean size of a set of items effectively; however, little is known about whether information from each visual field contributes equally to mean size estimation. In this study, we examined whether a left-side bias (LSB), the tendency for perceptual judgments to depend more heavily on the left visual field's inputs, affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: a larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of the LSB increased with stimulus-onset asynchrony (SOA) when spots on the left side were presented earlier than those on the right side. In contrast, the LSB vanished and then reversed with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that the LSB does have a significant influence on mean size estimation of a group of items, and that it is induced by a leftward attentional bias that enhances the prior entry effect on the left side.
Contractor Accounting, Reporting and Estimating (CARE).
Contractor Accounting, Reporting and Estimating (CARE) provides checklists that may be used as guides in evaluating the accounting system, financial reporting, and cost estimating capabilities of the contractor. Experience gained from the Management Review Technique was used as a basis for the checklists. (Author)
UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually is estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is estimated commonly by summing individual upper bound risk esti...
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
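A sketch of this kind of Sobol screening using the SALib package as a stand-in for the study's implementation (the parameter names, bounds and model are hypothetical):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three illustrative TEB-like inputs
problem = {
    "num_vars": 3,
    "names": ["roof_albedo", "road_thermal_cond", "soil_moisture_init"],
    "bounds": [[0.05, 0.45], [0.5, 2.5], [0.1, 0.4]],
}

X = saltelli.sample(problem, 1024)       # Sobol/Saltelli design

def model(x):                            # stand-in for a TEB/SURFEX run
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]**2 + 4.0 * x[:, 0] * x[:, 2]

Si = sobol.analyze(problem, model(X))
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first-order indices
```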
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Jannati, Ali; McDonald, John J; Di Lollo, Vincent
2015-06-01
The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
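K scores from change detection are conventionally computed with Cowan's formula (assumed here; the abstract does not restate it):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K: estimated number of items held in VSTM
    return set_size * (hit_rate - false_alarm_rate)

print(cowan_k(6, 0.80, 0.15))  # 3.9 items
```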
Improved localisation of neoclassical tearing modes by combining multiple diagnostic estimates
NASA Astrophysics Data System (ADS)
Rapson, C. J.; Fischer, R.; Giannone, L.; Maraschek, M.; Reich, M.; Treutterer, W.; The ASDEX Upgrade Team
2017-07-01
Neoclassical tearing modes (NTMs) strongly degrade confinement in tokamaks, and are a leading cause of disruptions. They can be stabilised by targeted electron cyclotron current drive (ECCD); however, the effectiveness of ECCD depends strongly on the misalignment between ECCD and the NTM. The first step to ensure minimal misalignment is a good estimate of the NTM location. In previous NTM control experiments, three methods have been used independently to estimate the NTM location: the magnetic equilibrium, correlation between magnetic and spatially-resolved temperature fluctuations, and the amplitude response of the NTM to nearby ECCD. This submission describes an algorithm which has been designed to fuse these three estimates into one, taking into account many of the characteristics of each diagnostic. Although the method diverges from standard data fusion methods, results from simulation and experiment confirm that the algorithm achieves its stated goal of providing an estimate that is more reliable and accurate than any of the individual estimates.
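As a simplified stand-in for the fusion step (the paper's algorithm additionally models diagnostic-specific characteristics and diverges from standard data fusion), an inverse-variance weighted combination of the three location estimates would look like this:

```python
import numpy as np

def fuse_location_estimates(rho_hats, sigmas):
    # Inverse-variance weighted fusion of independent location estimates
    # (e.g. from equilibrium, ECE correlation, and ECCD amplitude response)
    w = 1.0 / np.asarray(sigmas)**2
    rho = np.sum(w * np.asarray(rho_hats)) / np.sum(w)
    return rho, np.sqrt(1.0 / np.sum(w))

# Hypothetical normalized-radius estimates and their uncertainties
print(fuse_location_estimates([0.62, 0.58, 0.60], [0.03, 0.02, 0.05]))
```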
DOE Office of Scientific and Technical Information (OSTI.GOV)
SENUM,G.I.; DIETZ,R.N.
2004-06-30
Recent studies demonstrate the impact of fugitive emissions of reactive alkenes on the atmospheric chemistry of the Houston Texas metropolitan area (1). Petrochemical plants located in and around the Houston area emit atmospheric alkenes, such as ethene, propene and 1,3-butadiene. The magnitude of emissions is a major uncertainty in assessing their effects. Even though the petrochemical industry reports that fugitive emissions of alkenes have been reduced to less than 0.1% of daily production, recent measurement data, obtained during the TexAQS 2000 experiment indicates that emissions are perhaps a factor of ten larger than estimated values. Industry figures for fugitive emissions are based on adding up estimated emission factors for every component in the plant to give a total estimated emission from the entire facility. The dramatic difference between estimated and measured rates indicates either that calculating emission fluxes by summing estimates for individual components is seriously flawed, possibly due to individual components leaking well beyond their estimated tolerances, that not all sources of emissions for a facility are being considered in emissions estimates, or that there are known sources of emissions that are not being reported. This experiment was designed to confirm estimates of reactive alkene emissions derived from analysis of the TexAQS 2000 data by releasing perfluorocarbon tracers (PFTs) at a known flux from a petrochemical plant and sampling both the perfluorocarbon tracer and reactive alkenes downwind using the Piper-Aztec research aircraft operated by Baylor University. PFTs have been extensively used to determine leaks in pipelines, air infiltration in buildings, and to characterize the transport and dispersion of air parcels in the atmosphere. Over 20 years of development by the Tracer Technology Center (TTC) has produced a range of analysis instruments, field samplers and PFT release equipment that have been successfully deployed in a large variety of experiments. PFTs are inert, nontoxic, noncombustible and nonreactive. Up to seven unique PFTs can be simultaneously released, sampled and analyzed and the technology is well suited for determining emission fluxes from large petrochemical facilities. The PFT experiment described here was designed to quantitate alkene emissions from a single petrochemical facility, but such experiments could be applied to other industrial sources or groups of sources in the Houston area.
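The tracer-ratio arithmetic behind such an experiment is simple; a hedged sketch with hypothetical numbers (the actual analysis worked from aircraft plume transects):

```python
def source_emission_rate(q_tracer, d_conc_species, d_conc_tracer,
                         mw_species, mw_tracer):
    # Tracer-ratio estimate of a co-located source's emission rate:
    # Q_species = Q_tracer * (dC_species / dC_tracer), converted from a
    # molar mixing-ratio ratio to a mass rate via molecular weights
    return q_tracer * (d_conc_species / d_conc_tracer) * (mw_species / mw_tracer)

# Hypothetical: PFT released at 2 kg/h; downwind enhancements of 1.5 ppb
# ethene (MW 28) per 0.010 ppb of a PFT with MW ~350
print(f"{source_emission_rate(2.0, 1.5, 0.010, 28.0, 350.0):.0f} kg/h ethene")
```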
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Jeff Wu, C. F.
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach to estimating them using data from physical experiments and computer simulations is presented. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define the L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
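A crude sketch of the L2-calibration idea: choose the calibration parameter that minimizes the L2 discrepancy between physical data and simulator output (the paper's estimator works through a fitted discrepancy function, which this toy omits):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical physical observations y_i at inputs x_i
x = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
y_phys = np.sin(2.0 * x) + 0.9 * x + rng.normal(0.0, 0.05, x.size)

def simulator(x, theta):
    # Deterministic computer model with calibration parameter theta
    return np.sin(2.0 * x) + theta * x

res = minimize_scalar(lambda th: np.sum((y_phys - simulator(x, th))**2),
                      bounds=(0.0, 2.0), method="bounded")
print(f"theta_hat ~ {res.x:.3f}")  # near the 'true' 0.9
```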
What influences midwives in estimating labour pain?
Williams, A C de C; Morris, J; Stevens, K; Gessler, S; Cella, M; Baxter, J
2013-01-01
Clinicians' estimates of patients' pain are frequently used as a basis for delivering care, and the characteristics of the clinician and of the patient influence this estimate. We studied pain estimation by midwives attending women in uncomplicated labour. Sixty-six practising midwives of varied age, ethnicity and professional experience were asked to complete a trait empathy measure and then to estimate the maximum pain and anxiety experienced by six women whose filmed labour contractions they viewed. Additionally, they rated similarity to the labouring women in ethnicity, and described their beliefs about pain expression according to ethnicity. Midwife estimates of pain and anxiety were highly correlated. Longer professional experience was associated with lower pain estimates, while more births to the midwife herself was associated with higher pain estimates. A multiple regression model identified number of births to the midwife herself, and two components of empathy (perspective taking and identification), to be important in predicting midwife pain estimates for women in labour. Midwives expressed clear beliefs about women's expression of pain during labour according to ethnicity, but these beliefs were not consistent across midwives, even between midwives of similar ethnicity. Midwives' personal characteristics can bias the estimation of pain in woman in labour and therefore influence treatment. © 2012 European Federation of International Association for the Study of Pain Chapters.
NASA Astrophysics Data System (ADS)
Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew
2008-10-01
The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry is a significant source of uncertainty in estimating discharge from WSE measurements, however. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric depth and slope can be estimated to within 50 cm and 3.0 microradians, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
How far can we go in hydrological modelling without any knowledge of runoff formation processes?
NASA Astrophysics Data System (ADS)
Ayzel, Georgy
2016-04-01
Hydrological modelling has been a challenging scientific issue for the last 50 years, and it will remain one because of the complexity of runoff formation processes at different spatio-temporal scales. An enormous number of modelling-related papers are submitted to the top-ranked journals every year, but in this publication race authors pay increasing attention to the models and data themselves rather than to the underlying watershed processes. The great community effort towards free and open-source model sharing, together with the high availability of hydrometeorological data sources, has shifted the paradigm of hydrological science in a technical direction. In developing countries this shift is even clearer, owing to the absence of field studies and the obligatory requirement of practical significance for research supported by government funds. As a result, the state of the hydrological modelling discipline is closer to the aim of achieving a high Nash-Sutcliffe efficiency (NSE) than to understanding watershed processes. Both the lumped physically-based land-surface model SWAP (Soil Water - Atmosphere - Plants) and the SCE-UA technique (Shuffled Complex Evolution method developed at The University of Arizona) for robust model parameter search were used for runoff modelling of 323 MOPEX watersheds. No special data analysis or expert knowledge-based decisions were performed. The median NSE is 0.652, and 90% of the watersheds have an efficiency greater than 0.5. Thus, satisfactory modelling results were obtained without any information about the particular features of each watershed. To support our conclusions, we built a cutting-edge conceptual rainfall-runoff model based on decision trees and adaptive boosting machine learning algorithms for one small watershed in the USA. Again, no special data analysis or feature engineering was performed. The obtained results demonstrate great model prediction power for both the learning and testing periods (NSE > 0.95). The way we obtained our results is clear and direct: we used both open-source physically-based and conceptual models coupled with open-access data. However, these results do not make a significant contribution to the understanding of hydrological cycle processes. And not hydrological modelling itself, but the reason why and for what we do it, is the most challenging issue for future research.
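For reference, the NSE used throughout is straightforward to compute:

```python
import numpy as np

def nse(simulated, observed):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than
    # predicting the observed mean
    s, o = np.asarray(simulated), np.asarray(observed)
    return 1.0 - np.sum((s - o)**2) / np.sum((o - o.mean())**2)

print(f"{nse([2.9, 3.4, 4.8, 4.1], [3.0, 3.5, 5.0, 4.0]):.3f}")  # 0.968
```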
Kundu, S; Kuehnle, E; Schippert, C; von Ehr, J; Hillemanns, P; Staboulidou, Ismini
2017-11-01
The aim of this study was to analyze whether the umbilical artery pH value can be estimated through CTG assessment 60 min prior to delivery and whether the estimated umbilical artery pH value correlates with the actual one. This includes analysis of the correlation between CTG trace classification and the actual umbilical artery pH value. Intra- and interobserver agreement and the impact of professional experience on visual analysis of fetal heart rate tracings were evaluated. This was a retrospective study. 300 CTG records of the last 60 min before delivery were picked randomly from the computer database with the following inclusion criteria: singleton pregnancy >37 weeks, no fetal anomalies, vaginal delivery either spontaneous or instrumental-assisted. Five obstetricians and two midwives of different professional experience classified 300 CTG traces according to the FIGO criteria and estimated the postnatal umbilical artery pH. The results showed a significant difference (p < 0.05) between estimated and actual pH values, independent of professional experience. Analysis of the correlation between CTG assessment and actual umbilical artery pH value showed significantly (p < 0.05) diverging results. Intra- and interobserver variability was high. Intraobserver variability was significantly higher for the resident (p = 0.001). No significant differences were detected regarding interobserver variability. An estimation of the pH value, and consequently of neonatal outcome, on the basis of a present CTG seems to be difficult. Therefore, not only CTG training but also clinical experience and collaboration and consultation within the whole team are important.
ERIC Educational Resources Information Center
Thompson, Clarissa A.; Opfer, John E.
2010-01-01
How does understanding the decimal system change with age and experience? Second, third, sixth graders, and adults (Experiment 1: N = 96, mean ages = 7.9, 9.23, 12.06, and 19.96 years, respectively) made number line estimates across 3 scales (0-1,000, 0-10,000, and 0-100,000). Generation of linear estimates increased with age but decreased with…
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
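A minimal sketch of how a single calibration scan could be turned into a minimum detectable concentration, assuming SNR scales linearly with concentration and with the square root of scan time (a simplification; the paper's calibration factor absorbs further instrument-dependent terms):

```python
import math

def min_detectable_concentration(c_ref, snr_ref, snr_min=3.0,
                                 t_ref=1.0, t_avail=1.0):
    # Scale the calibration result to the detection threshold and to the
    # available scan time
    return c_ref * snr_min / (snr_ref * math.sqrt(t_avail / t_ref))

# Calibration phantom: 50 mM gave SNR 40 in a 1-min scan; plan a 16-min scan
print(f"{min_detectable_concentration(50.0, 40.0, 3.0, 1.0, 16.0):.2f} mM")
```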
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Empirical Evidence on Occupation and Industry Specific Human Capital
Sullivan, Paul
2009-01-01
This paper presents instrumental variables estimates of the effects of firm tenure, occupation specific work experience, industry specific work experience, and general work experience on wages using data from the 1979 Cohort of the National Longitudinal Survey of Youth. The estimates indicate that both occupation and industry specific human capital are key determinants of wages, and the importance of various types of human capital varies widely across one-digit occupations. Human capital is primarily occupation specific in occupations such as craftsmen, where workers realize a 14% increase in wages after five years of occupation specific experience but do not realize wage gains from industry specific experience. In contrast, human capital is primarily industry specific in other occupations such as managerial employment where workers realize a 23% wage increase after five years of industry specific work experience. In other occupations, such as professional employment, both occupation and industry specific human capital are key determinants of wages. PMID:20526448
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.
2017-04-01
Several publications have shown the impact of assimilating surface observations for improving weather prediction inside the boundary layer as well as in the flow aloft. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with the comparison of point-based observations and grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network, including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From simplest to most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero, and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but temporal and spatial variability in the parameter estimates is similar to that seen when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
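A toy illustration of the augmented-state idea: a bias parameter is appended to the state, exposed through the forward operator, and updated by the ensemble Kalman equations over repeated cycles (the setup is hypothetical and far simpler than the WRF OSSEs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, true_bias, r_obs = 80, 1.5, 0.5**2     # ensemble size, K, K^2

beta = rng.normal(0.0, 1.0, n_ens)            # prior bias-parameter ensemble

for cycle in range(60):                       # repeated 3-hourly analyses
    truth = 285.0 + 5.0 * np.sin(cycle / 8.0)         # true 2-m temperature
    T = truth + rng.normal(0.0, 1.0, n_ens)           # forecast ensemble
    obs = truth + true_bias + rng.normal(0.0, 0.5)    # biased observation

    y = T + beta                              # forward operator includes bias
    var_y = np.var(y, ddof=1)
    for state in (T, beta):                   # update augmented state [T, beta]
        gain = np.cov(state, y)[0, 1] / (var_y + r_obs)
        state += gain * (obs + rng.normal(0.0, 0.5, n_ens) - y)

print(f"estimated bias ~ {beta.mean():.2f} K (true {true_bias} K)")
```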
Fiscal year 1981 US corn and soybeans pilot preliminary experiment plan, phase 1
NASA Technical Reports Server (NTRS)
Livingston, G. P.; Nedelman, K. S.; Norwood, D. F.; Smith, J. H. (Principal Investigator)
1981-01-01
A draft of the preliminary experiment plan for the foreign commodity production forecasting project fiscal year 1981 is presented. This draft plan includes: definition of the phase 1 and 2 U.S. pilot objectives; the proposed experiment design to evaluate crop calendar, area estimation, and area aggregation components for corn and soybean technologies using 1978/1979 crop-year data; a description of individual sensitivity evaluations of the baseline corn and soybean segment classification procedure; and technology and data assessment in support of the corn and soybean estimation technology for use in the U.S. central corn belt.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments
Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia
2016-01-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
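A common way to recover selection coefficients from such time-sampled bulk competitions is to regress the log-ratio of mutant to wild-type counts against time; the sketch below illustrates this under idealized binomial sampling. The estimator and numbers are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.array([0, 4, 8, 12, 16, 20])    # sampled time points (generations)
s_true = 0.05                          # true selection coefficient of one mutant
depth = 100_000                        # sequencing depth per time point
f0 = 0.01                              # initial mutant frequency

# Mutant frequency grows as exp(s*t) relative to wild type
freq = f0 * np.exp(s_true * t) / (f0 * np.exp(s_true * t) + (1 - f0))
n_mut = rng.binomial(depth, freq)      # sampling noise from finite depth
n_wt = depth - n_mut

# The logit of the mutant frequency is linear in time with slope s
log_ratio = np.log(n_mut / n_wt)
s_hat = np.polyfit(t, log_ratio, 1)[0]
print(f"true s = {s_true}, estimated s = {s_hat:.4f}")
```

The abstract's central point is visible here: adding or repositioning time points changes the leverage of the regression far more than adding depth, which only shrinks the binomial noise per point.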
Caçola, Priscila; Ibana, Melvin; Ricard, Mark; Gabbard, Carl
2016-01-01
Coincident timing, or interception ability, can be defined as the capacity to precisely time sensory input and motor output. This study compared the accuracy of typically developing (TD) children and those with Developmental Coordination Disorder (DCD) on a task involving estimation of coincident timing with their arm and various tool lengths. Forty-eight (48) participants performed two experiments in which they imagined intercepting a target moving toward them (Experiment 1) or away from them (Experiment 2) in 5 conditions: with the arm alone and with tool lengths of 10, 20, 30, and 40 cm. In Experiment 1, the DCD group overestimated interception points approximately twice as much as the TD group, and both groups overestimated consistently regardless of the tool used. Results for Experiment 2 revealed that those with DCD underestimated about three times as much as the TD group, except when no tool was used. Overall, these results indicate that children with DCD are less accurate at estimating coincident timing, which might in part explain their difficulties with common motor activities such as catching a ball or striking a baseball pitch. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A
2014-11-15
Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with a focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used to estimate the specific heat capacity. The first estimation method is based on modeling the chemical tracer and temperature breakthrough curves at the recovery well in series. The second method is based on an energy balance. The specific heat capacity values estimated by the two methods (2.30 and 2.54 MJ/m³/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for estimating the entire set of heat transfer parameters and their spatial distribution by inverse modeling. Copyright © 2014 Elsevier B.V. All rights reserved.
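The breakthrough-curve comparison can be reduced, under the textbook thermal-retardation assumption, to the relation C_aq = n * C_w * (t_heat / t_tracer): heat lags a conservative tracer by the ratio of the aquifer's bulk volumetric heat capacity to that of the water-filled pore space. A sketch with hypothetical arrival times and porosity follows; the paper's in-series modeling and energy-balance methods are more elaborate.

```python
# Thermal retardation: R = t_heat / t_tracer = C_aq / (n * C_w)
t_tracer = 2.1   # days, peak arrival of the conservative tracer (hypothetical)
t_heat = 4.2     # days, peak arrival of the temperature signal (hypothetical)
n = 0.30         # aquifer porosity (hypothetical)
C_w = 4.18       # MJ/m^3/K, volumetric heat capacity of water

R = t_heat / t_tracer
C_aq = n * C_w * R   # bulk volumetric heat capacity of the saturated aquifer
print(f"retardation R = {R:.2f}, C_aq = {C_aq:.2f} MJ/m^3/K")
```

With these invented values the result (about 2.5 MJ/m³/K) lands in the same range as the site estimates quoted above, which is the kind of consistency check the two published methods provide.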
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He
2016-11-20
Angular velocity information is a requisite for spacecraft guidance, navigation, and control systems. In this paper, an approach for angular velocity estimation based solely on star vector measurements, using an improved current statistical model Kalman filter, is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions, and the computational load is reduced compared to a standard Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry-sky observations are implemented for further confirmation. The estimation accuracy is shown to be better than 10⁻⁴ rad/s under various conditions. Both the simulations and the experiments demonstrate that the described approach is effective and performs excellently under both static and dynamic conditions.
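Although the paper's filter is more sophisticated, the geometric core of star-vector-only rate estimation can be illustrated with a finite-difference sketch: a single star constrains only the angular-velocity component perpendicular to its line of sight, so two non-parallel stars are needed for the full vector. All values are hypothetical.

```python
import numpy as np

def omega_perp(v1, v2, dt):
    """Estimate the component of body angular velocity perpendicular to a
    star's line of sight from two consecutive unit star vectors measured in
    the body frame. In the body frame an inertially fixed star obeys
    dv/dt = -omega x v, so to first order: v1 x v2 ~ -omega_perp * dt."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return -np.cross(v1, v2) / dt

# Toy check: spin about z at 0.01 rad/s, star roughly along the x-axis
dt = 0.1
w = np.array([0.0, 0.0, 0.01])
v1 = np.array([1.0, 0.2, 0.3]); v1 /= np.linalg.norm(v1)
v2 = v1 - np.cross(w, v1) * dt       # first-order propagation of the star vector
print(omega_perp(v1, v2, dt))        # ~ w minus its line-of-sight component
```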
The demand for physician services. Evidence from a natural experiment.
Cockx, Bart; Brasseur, Carine
2003-11-01
This study exploits a natural experiment in Belgium to estimate the effect of copayment increases on the demand for physician services. It shows how a differences-in-differences (DD) estimator of the price effects can be decomposed into effects induced by the common average proportional price increase (income effects) and by the change in relative prices (substitution effects). The price elasticity of a uniform proportional price increase is relatively small (-0.13 for men and -0.03 for women). Substitution effects are large, especially for women, but imprecisely estimated. Despite the substantial price increases, the efficiency gain of the reform, if any, is modest.
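For readers unfamiliar with the estimator, a difference-in-differences calculation in its simplest 2x2 form looks like the following sketch; the numbers are invented, and the paper's further decomposition into income and substitution effects is not reproduced here.

```python
import numpy as np

# DD: the change in mean utilization for the group facing the copayment
# increase, net of the change for a comparison group. Illustrative data only.
visits = {
    ("treated", "pre"): np.array([3.1, 2.8, 3.4, 2.9]),
    ("treated", "post"): np.array([2.5, 2.2, 2.9, 2.4]),
    ("control", "pre"): np.array([3.0, 3.2, 2.7, 3.1]),
    ("control", "post"): np.array([2.9, 3.1, 2.6, 3.0]),
}

dd = ((visits[("treated", "post")].mean() - visits[("treated", "pre")].mean())
      - (visits[("control", "post")].mean() - visits[("control", "pre")].mean()))
print(f"DD estimate of the copayment effect: {dd:.3f} visits")
```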
Estimating Coherence Measures from Limited Experimental Data Available
NASA Astrophysics Data System (ADS)
Zhang, Da-Jian; Liu, C. L.; Yu, Xiao-Dong; Tong, D. M.
2018-04-01
Quantifying coherence has received increasing attention, and considerable work has been directed towards finding coherence measures. While various coherence measures have been proposed in theory, an important open issue is how to estimate these coherence measures in experiments. This is a challenging task, since the state of a system is often unknown in practical applications and the accessible measurements in a real experiment are typically limited. In this Letter, we put forward an approach to estimate coherence measures of an unknown state from any limited experimental data available. Our approach is not only applicable to coherence measures but can be extended to other resource measures.
Collective Estimation: Accuracy, Expertise, and Extroversion as Sources of Intra-Group Influence
ERIC Educational Resources Information Center
Bonner, Bryan L.; Sillito, Sheli D.; Baumann, Michael R.
2007-01-01
Although estimations typically possess correct answers, these answers may be difficult to demonstrate to others. However, providing external information may increase their demonstrability. In this experiment, individuals (N = 60) and 6-person groups (N = 360) generated estimations with or without frames of reference. We hypothesized that…
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Varieties of Quantity Estimation in Children
ERIC Educational Resources Information Center
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-01-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…
A Technical Evaluation of the First Stage of the Mediterranean Regional Project.
ERIC Educational Resources Information Center
Hollister, Robinson
Objectives of this technical evaluation concerning the transfer of experience in the development of human resources were to develop educational plans based upon comprehensive estimates of manpower requirements and to evaluate the methods used in estimating manpower requirements for educational planning. The methodology involved estimates of the…
NASA Astrophysics Data System (ADS)
Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.
2015-12-01
Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as the autocorrelation length, angle, and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is strongly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor, and other factors. The error in calculating the autocorrelation for finite, discrete models further reduces the accuracy. To improve the precision of RMP estimation, we design two improvements. First, we apply a region-growing algorithm, commonly used in image processing, to reduce the influence of noise on the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as an additional constraint in the estimation algorithm. Numerical experiments show that this is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and constructing the initial model from relatively accurate RMP estimates yields better inversion results, containing more detail consistent with the actual subsurface medium.
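The "power spectrum method" mentioned above is the Wiener-Khinchin route to the autocorrelation; a minimal sketch for a 2-D field follows. The paper's region-growing denoising and orientation constraint are not included here.

```python
import numpy as np

def autocorrelation_2d(field):
    """Estimate the spatial autocorrelation of a 2-D field via its power
    spectrum (Wiener-Khinchin theorem): ACF = IFFT(|FFT(field)|^2),
    normalized so the zero-lag value is 1."""
    f = field - field.mean()
    spec = np.abs(np.fft.fft2(f)) ** 2
    acf = np.real(np.fft.ifft2(spec))
    acf /= acf[0, 0]                 # normalize zero lag to 1
    return np.fft.fftshift(acf)      # put zero lag at the center of the array

# White noise gives a delta-like ACF; a smoothed field would instead show a
# finite autocorrelation length, which is what the RMP estimation targets.
rng = np.random.default_rng(2)
noise = rng.normal(size=(128, 128))
acf = autocorrelation_2d(noise)
print(acf.shape, float(acf.max()))
```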
Reconciling Estimates of Cell Proliferation from Stable Isotope Labeling Experiments
Drylewicz, Julia; Elemans, Marjet; Zhang, Yan; Kelly, Elizabeth; Reljic, Rajko; Tesselaar, Kiki; de Boer, Rob J.; Macallan, Derek C.; Borghans, José A. M.; Asquith, Becca
2015-01-01
Stable isotope labeling is the state-of-the-art technique for in vivo quantification of lymphocyte kinetics in humans. It has been central to a number of seminal studies, particularly in the context of HIV-1 and leukemia. However, there is a significant discrepancy between lymphocyte proliferation rates estimated in different studies. Notably, deuterated 2H2-glucose (D2-glucose) labeling studies consistently yield higher estimates of proliferation than deuterated water (D2O) labeling studies. This hampers our understanding of immune function and undermines our confidence in this important technique. Whether these differences are caused by fundamental biochemical differences between the two compounds and/or by methodological differences in the studies is unknown. D2-glucose and D2O labeling experiments have never been performed by the same group under the same experimental conditions; consequently, a direct comparison of these two techniques has not been possible. We sought to address this problem. We performed both in vitro and murine in vivo labeling experiments using identical protocols with both D2-glucose and D2O. This showed that intrinsic differences between the two compounds do not cause differences in the proliferation rate estimates, but that estimates made using D2-glucose in vivo were susceptible to difficulties in normalization due to highly variable blood glucose enrichment. Analysis of three published human studies made using D2-glucose and D2O confirmed this problem, particularly in the case of short-term D2-glucose labeling. Correcting for these inaccuracies in normalization decreased proliferation rate estimates made using D2-glucose and slightly increased estimates made using D2O, thus bringing the estimates from the two methods significantly closer and highlighting the importance of reliable normalization when using this technique. PMID:26437372
Users manual for the US baseline corn and soybean segment classification procedure
NASA Technical Reports Server (NTRS)
Horvath, R.; Colwell, R. (Principal Investigator); Hay, C.; Metzler, M.; Mykolenko, O.; Odenweller, J.; Rice, D.
1981-01-01
A user's manual for the classification component of the FY-81 U.S. Corn and Soybean Pilot Experiment in the Foreign Commodity Production Forecasting Project of AgRISTARS is presented. This experiment is one of several major experiments in AgRISTARS designed to measure and advance the remote sensing technologies for cropland inventory. The classification procedure discussed is designed to produce segment proportion estimates for corn and soybeans in the U.S. Corn Belt (Iowa, Indiana, and Illinois) using LANDSAT data. The estimates are produced by an integrated Analyst/Machine procedure. The Analyst selects acquisitions, participates in stratification, and assigns crop labels to selected samples. In concert with the Analyst, the machine digitally preprocesses LANDSAT data to remove external effects, stratifies the data into field-like units and into spectrally similar groups, statistically samples the data for Analyst labeling, and combines the labeled samples into a final estimate.
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimator (the least power estimation, LPE) were introduced into upscaling methods for multi-point measurements. Six methods, namely three normal-based methods (arithmetic average, least squares estimation, and block kriging) and three p-normal-based methods (LPE, geostatistical LPE, and inverse distance weighted LPE), are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability, and robustness, and a real-world experiment to produce upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information, and the parameter p.
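A minimal sketch of the least power estimation (LPE) idea: choose the location that minimizes the sum of p-th powers of absolute deviations, which reduces to the mean at p = 2 and the median at p = 1. The data are invented; the paper's geostatistical and inverse-distance-weighted variants are not shown.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe(x, p):
    """Least power estimation: the location mu minimizing sum |x_i - mu|^p,
    the maximum likelihood location estimate under a p-normal error model."""
    res = minimize_scalar(lambda mu: np.sum(np.abs(x - mu) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

# Soil-moisture-like values with one outlier: LPE with p < 2 is pulled
# toward the bulk of the data less than the arithmetic mean is.
rng = np.random.default_rng(3)
obs = np.concatenate([rng.normal(0.25, 0.03, 20), [0.60]])
print("mean:", obs.mean(), " LPE (p=1.2):", lpe(obs, 1.2))
```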
Decker, Sandra L
2015-01-01
Objective: To estimate the relationship between physicians' acceptance of new Medicaid patients and access to health care. Data Sources: The National Ambulatory Medical Care Survey (NAMCS) Electronic Health Records Survey and the National Health Interview Survey (NHIS) 2011/2012. Study Design: Linear probability models estimated the relationship between measures of experiences with physician availability among children on Medicaid or the Children's Health Insurance Program (CHIP) from the NHIS and state-level estimates of the percent of primary care physicians accepting new Medicaid patients from the NAMCS, controlling for other factors. Principal Findings: Nearly 16 percent of children with a significant health condition or developmental delay had a doctor's office or clinic indicate that the child's health insurance was not accepted in states with less than 60 percent of physicians accepting new Medicaid patients, compared to less than 4 percent in states with at least 75 percent of physicians accepting new Medicaid patients. Adjusted estimates and estimates for other measures of access to care were similar. Conclusions: Measures of experiences with physician availability for children on Medicaid/CHIP were generally good, though better in states where more primary care physicians accepted new Medicaid patients. PMID:25683869
NASA Astrophysics Data System (ADS)
Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.
2017-12-01
Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single-domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks, however, contain larger, multi-domain (MD) or interacting single-domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and to determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole have previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than when using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy: the diverging estimates appeared to be either the result of using the perpendicular method instead of OT or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and the microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable Hlab values. MW estimates are also generally lower than thermal estimates using the same protocol.
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging the properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationship for tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we develop and test a method, which we call Resimulation of Noise (RoN), for quantifying the reliability of non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experimental realization. We test RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We also compare results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
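A sketch of the RoN recipe as described in the abstract, applied to a toy single-time-constant model: fit once, estimate the noise level from the residuals, then repeatedly resimulate and refit to obtain the spread of the parameter estimate. The model and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    """Toy creep-like response with a single time constant tau."""
    return a * (1.0 - np.exp(-t / tau))

def resimulation_of_noise(t, y, n_resim=200, rng=None):
    """Resimulation of Noise (RoN), sketched: fit once, estimate the noise
    level from the residuals, then repeatedly add synthetic noise of that
    level to the fitted curve and refit. The spread of the refitted
    parameter is the reliability measure."""
    rng = rng if rng is not None else np.random.default_rng()
    popt, _ = curve_fit(model, t, y, p0=(1.0, 1.0))
    resid_sd = np.std(y - model(t, *popt))
    taus = []
    for _ in range(n_resim):
        y_sim = model(t, *popt) + rng.normal(0.0, resid_sd, t.size)
        p_sim, _ = curve_fit(model, t, y_sim, p0=popt)
        taus.append(p_sim[1])
    return popt[1], np.std(taus)   # tau estimate and its RoN spread

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 50)
y = model(t, 1.0, 1.2) + rng.normal(0.0, 0.02, t.size)
print(resimulation_of_noise(t, y, rng=rng))
```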
M-X Environmental Technical Report. Economic Model.
1980-12-22
[Table of contents excerpt] C-4: Estimated off-base school facility costs. C-5: Estimated development costs to other public facilities. C-6: Estimated utility development costs. C-7: Estimated non-residential building development. E-1: Adjustments to baseline population projections to account for... A surviving fragment of the text notes that metropolitan areas on the fringes of the rural deployment areas potentially would experience significant indirect employment growth.
A 'Generalized Distance' Estimation Procedure for Intra-Urban Interaction
It is found that available estimation techniques necessarily result in non-integer solutions. A mathematical device is therefore... The estimation of urban and regional travel patterns has been a necessary part of current efforts to establish land use guidelines for the Texas... The paper details computational experience with travel estimation within Corpus Christi, Texas, using a new convex programming approach of Charnes, Raike and Bettinger.
Itsy bitsy spider?: Valence and self-relevance predict size estimation.
Leibovich, Tali; Cohen, Noga; Henik, Avishai
2016-12-01
The current study explored the role of valence and self-relevance in size estimation of neutral and aversive animals. In Experiment 1, participants who were highly fearful of spiders and participants with low fear of spiders rated the size and unpleasantness of spiders and of neutral animals (birds and butterflies). We found that although individuals with both high and low fear of spiders rated spiders as highly unpleasant, only the highly fearful participants rated spiders as larger than butterflies. Experiment 2 included additional pictures of wasps (not self-relevant, but unpleasant) and beetles. The results of this experiment replicated those of Experiment 1 and showed a similar bias in size estimation for beetles, but not for wasps. Mediation analysis revealed that in the high-fear group both relevance and valence influenced perceived size, whereas in the low-fear group only valence affected perceived size. These findings suggest that the effect of highly relevant stimuli on size perception is both direct and mediated by valence. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
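A minimal sketch of the GLUE procedure for a first-order (log-linear) inactivation model: sample candidate rate constants from a broad prior, score each with an informal likelihood, and summarize the "behavioral" sets. The likelihood choice and thresholds here are common conventions, not necessarily those used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "observed" inactivation data: log10 reduction over time (days),
# generated from first-order kinetics log10(C/C0) = -k * t / ln(10)
t_obs = np.array([0, 10, 20, 30, 40, 50, 60], float)
k_true = 0.05                                           # 1/day
c_obs = -k_true * t_obs / np.log(10) + rng.normal(0, 0.05, t_obs.size)

# GLUE: Monte-Carlo sample the rate constant from a broad uniform prior
k_samples = rng.uniform(0.0, 0.2, 50_000)
resid = c_obs[None, :] + np.outer(k_samples, t_obs) / np.log(10)
sse = (resid ** 2).sum(axis=1)

# Informal likelihood and behavioral threshold (one common GLUE convention)
likelihood = np.exp(-sse / sse.min())
behavioral = k_samples[likelihood > 0.1 * likelihood.max()]

lo, hi = np.percentile(behavioral, [2.5, 97.5])
print(f"k ~ {behavioral.mean():.4f} 1/day, GLUE 95% band [{lo:.4f}, {hi:.4f}]")
```

The spread of the behavioral set plays the role the least-squares confidence interval plays in the traditional fits, which is why the abstract can compare the two approaches directly.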
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimates from an internal model. The processing required for optimal state estimation and control was conducted in state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal.
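The encoding model described above can be sketched compactly: firing rates are linear in the encoded signal, and spike counts are Poisson draws around those rates (the "doubly stochastic" character arises because the encoded signal is itself a stochastic process). Parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def spike_counts(signal, gains, baselines, dt=0.05):
    """Simulated recording: each neuron's firing rate is a linear (rectified)
    function of the encoded signal, and the spike count in a bin of width dt
    is a Poisson realization of that rate."""
    rates = np.clip(baselines + gains * signal, 0.0, None)   # Hz
    return rng.poisson(rates * dt)

# 20 neurons encoding a 1-D estimated-state signal at one time step
n_neurons = 20
gains = rng.normal(10.0, 3.0, n_neurons)       # tuning slopes (Hz per unit)
baselines = rng.uniform(5.0, 15.0, n_neurons)  # baseline rates (Hz)
state_estimate = 0.5
print(spike_counts(state_estimate, gains, baselines))
```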
Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda
2018-01-01
Migration studies of chemicals from contact materials are widely conducted because of their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. Migration experiments are therefore theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess the simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was used to show the importance of properly designing a migration experiment. The additional parameters also provide better insight into the migration of the antioxidants: for example, the partition coefficients could be better estimated using data from the early part of the experiment instead of at the end, so experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19 × 10⁻¹⁴ m²/s at ~40°C. The parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
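Scaled sensitivity coefficients, as used above, can be computed by finite differences; the toy short-time Fickian release model below also illustrates why some parameter pairs cannot be estimated simultaneously. The model and values are hypothetical, not the paper's full migration model.

```python
import numpy as np

def scaled_sensitivity(model, params, t, h=1e-4):
    """Scaled sensitivity coefficients X_j(t) = p_j * d y(t) / d p_j,
    computed by forward finite differences. Parameters whose scaled
    sensitivities are large and linearly independent over the sampled
    times can be estimated simultaneously; near-proportional curves
    signal correlated, poorly identifiable parameters."""
    y0 = model(t, params)
    X = np.empty((t.size, len(params)))
    for j, p in enumerate(params):
        pp = list(params)
        pp[j] = p * (1 + h)
        X[:, j] = p * (model(t, pp) - y0) / (p * h)
    return X

# Toy migration model: short-time Fickian release, M(t)/M_inf ~ 2*sqrt(D*t/pi)/L
def migration(t, params):
    D, L = params   # diffusion coefficient (m^2/s), film thickness (m)
    return 2.0 * np.sqrt(D * t / np.pi) / L

t = np.linspace(1.0, 3600.0, 50)
X = scaled_sensitivity(migration, [1e-14, 50e-6], t)
# For this model the two columns are proportional (X_D = -X_L / 2), showing
# that only the combination D / L^2 is identifiable from short-time data.
print(X[:3])
```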
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control, and the effects were re-estimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, though those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
Tackling the African “poverty trap”: The Ijebu-Ode experiment
Mabogunje, Akin L.
2007-01-01
An experiment in poverty reduction began in 1998 in the city of Ijebu-Ode, Nigeria (estimated 1999 population 163,000), where, without the remittances from relatives abroad, an estimated 90% of the population lived below the poverty line of $1.00 (U.S.) per person per day. Central to the experiment was the question of whether poverty could be dramatically reduced through a city consultation process that seeks to mobilize the entire community along with its diaspora. With 7 years of experience, the Ijebu-Ode experiment has been successful in many ways. There is increasing evidence that poverty in the city has been reduced significantly through the microfinancing of existing and new productive activities and the estimated >8,000 jobs these activities have created. Training based on both sustainability science and technology and indigenous practitioner knowledge has been a critical factor in the establishment of cooperatives and the development of new enterprises in specialty crops, small animal, and fish production. Much of this success has been possible as a result of harnessing social capital, especially through the dynamic leadership of the traditional authorities of the city and by the provision of ample loanable funds through the National Poverty Eradication Program of the federal government. The city consultation process itself engendered a participatory focus to the experiment from the beginning and has encouraged sustainability. Yet long-term sustainability is still in question as the initial leadership needs replacement, and credit, the heart of the experiment, lacks sufficient collateral. PMID:17942687
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
A Virtual Reality Full Body Illusion Improves Body Image Disturbance in Anorexia Nervosa.
Keizer, Anouk; van Elburg, Annemarie; Helms, Rossa; Dijkerman, H Chris
2016-01-01
Patients with anorexia nervosa (AN) have a persistent distorted experience of the size of their body. Previously we found that the Rubber Hand Illusion (RHI) improves hand size estimation in this group. Here we investigated whether a Full Body Illusion (FBI) affects body size estimation of body parts more emotionally salient than the hand. In the FBI, analogous to the RHI, participants experience ownership over an entire virtual body in VR after synchronous visuo-tactile stimulation of the actual and virtual body. We asked participants to estimate their body size (shoulders, abdomen, hips) before the FBI was induced, directly after induction, and at follow-up ~2 hours 45 minutes later. The results showed that AN patients (N = 30) decreased the overestimation of their shoulders, abdomen, and hips directly after the FBI was induced. This effect was strongest for estimates of circumference, and it was also observed in the asynchronous control condition of the illusion. Moreover, at follow-up, the improvements in body size estimation could still be observed in the AN group. Notably, the HC group (N = 29) also showed changes in body size estimation after the FBI, but the effect showed a different pattern from that of the AN group. The results lead us to conclude that the disturbed experience of body size in AN is flexible and can be changed, even for highly emotional body parts. As such, this study offers novel starting points from which new interventions for body image disturbance in AN can be developed.
Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.
ERIC Educational Resources Information Center
Thompson, Bruce; Fan, Xitao
This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
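For reference, the basic bootstrap bias estimate has the form: mean of the statistic over resamples minus the statistic of the original sample. A generic sketch follows, unrelated to the specific SEM models studied.

```python
import numpy as np

def bootstrap_bias(sample, statistic, n_boot=2000, rng=None):
    """Bootstrap estimate of the bias of a statistic:
    bias ~ mean(statistic(resample)) - statistic(sample)."""
    rng = rng if rng is not None else np.random.default_rng()
    stat_hat = statistic(sample)
    boot = np.array([statistic(rng.choice(sample, sample.size, replace=True))
                     for _ in range(n_boot)])
    return boot.mean() - stat_hat

rng = np.random.default_rng(7)
x = rng.normal(0.0, 2.0, 30)
# np.std with ddof=0 is a biased estimator of sigma; the bootstrap should
# report a negative bias of roughly the right size.
print(bootstrap_bias(x, lambda s: s.std(ddof=0), rng=rng))
```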
The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases
ERIC Educational Resources Information Center
Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.
2010-01-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…
Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
1993-01-01
A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)
Are Structural Estimates of Auction Models Reasonable? Evidence from Experimental Data
ERIC Educational Resources Information Center
Bajari, Patrick; Hortacsu, Ali
2005-01-01
Recently, economists have developed methods for structural estimation of auction models. Many researchers object to these methods because they find the strict rationality assumptions to be implausible. Using bid data from first-price auction experiments, we estimate four alternative structural models: (1) risk-neutral Bayes-Nash, (2) risk-averse…
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and from a corresponding Observing System Simulation Experiment framework.
Marine mammal tracks from two-hydrophone acoustic recordings made with a glider
NASA Astrophysics Data System (ADS)
Küsel, Elizabeth T.; Munoz, Tessa; Siderius, Martin; Mellinger, David K.; Heimlich, Sara
2017-04-01
A multinational oceanographic and acoustic sea experiment was carried out in the summer of 2014 off the western coast of the island of Sardinia, Mediterranean Sea. During this experiment, an underwater glider fitted with two hydrophones was evaluated as a potential tool for marine mammal population density estimation studies. An acoustic recording system was also tested, comprising an inexpensive, off-the-shelf digital recorder installed inside the glider. Detection and classification of sounds produced by whales and dolphins, and sometimes tracking and localization, are inherent components of population density estimation from passive acoustic recordings. In this work we discuss the equipment used as well as the analysis of the data obtained, including detection and estimation of bearing angles. A human analyst identified the presence of sperm whale (Physeter macrocephalus) regular clicks as well as dolphin clicks and whistles. Cross-correlating clicks recorded on both data channels allowed for the estimation of the direction (bearing) of clicks and the construction of animal tracks. Insights from this bearing tracking analysis can aid in population density estimation studies by providing further information (bearings), which can improve estimates.
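The two-channel bearing estimation described above rests on a time-difference-of-arrival calculation; a minimal sketch with a synthetic click follows. The sampling rate, hydrophone spacing, and sound speed are assumed values, not those of the glider system.

```python
import numpy as np

def bearing_from_tdoa(sig1, sig2, fs, spacing, c=1500.0):
    """Estimate the conical bearing angle of a broadband click from the
    time-difference of arrival (TDOA) between two hydrophones. The TDOA is
    the lag maximizing the cross-correlation; the bearing relative to the
    array axis follows from theta = arccos(c * tau / d)."""
    xcorr = np.correlate(sig1, sig2, mode="full")
    lag = np.argmax(xcorr) - (len(sig2) - 1)   # samples; > 0 if sig1 lags sig2
    tau = lag / fs
    cos_theta = np.clip(c * tau / spacing, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Toy click arriving ~0.2 ms later on hydrophone 1 (fs = 96 kHz, d = 0.5 m)
fs, d = 96_000, 0.5
click = np.exp(-np.arange(64) / 8.0) * np.sin(np.arange(64))
sig2 = np.concatenate([np.zeros(100), click, np.zeros(100)])
sig1 = np.roll(sig2, int(0.0002 * fs))
print(bearing_from_tdoa(sig1, sig2, fs, d))   # ~ arccos(1500*2e-4/0.5) = 53 deg
```

A single hydrophone pair yields only a cone of possible directions; the glider's motion over successive clicks is what lets bearings be combined into tracks.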
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-24
...visits, and a student-survey instrument to measure project outcomes. Estimate of Burden: Respondents... NATIONAL SCIENCE FOUNDATION. Comment Request: Innovative Technology Experiences for Students and... Friday. SUPPLEMENTARY INFORMATION: Title of Collection: Innovative Technology Experiences for Students...
US corn and soybeans exploratory experiment
NASA Technical Reports Server (NTRS)
Carnes, J. G. (Principal Investigator)
1981-01-01
The results from the U.S. corn/soybeans exploratory experiment which was completed during FY 1980 are summarized. The experiment consisted of two parts: the classification procedures verification test and the simulated aggregation test. Evaluations of labeling, proportion estimation, and aggregation procedures are presented.
Epopa, Patric Stephane; Millogo, Abdoul Azize; Collins, Catherine Matilda; North, Ace; Tripet, Frederic; Benedict, Mark Quentin; Diabate, Abdoulaye
2017-08-07
Vector control is a major component of the malaria control strategy. The increasing spread of insecticide resistance has encouraged the development of new tools, such as genetic control, which use releases of modified male mosquitoes. The use of male mosquitoes as part of a control strategy requires an improved understanding of male mosquito biology, including the factors influencing their survival and dispersal, as well as the ability to accurately estimate the size of a target mosquito population. This study was designed to determine the seasonal variation in population size via repeated mark-release-recapture experiments and to estimate the survival and dispersal of male mosquitoes of the Anopheles gambiae complex in a small west African village. Mark-release-recapture experiments were carried out in Bana Village over two consecutive years, during the wet and the dry seasons. For each experiment, around 5000 (3407-5273) adult male Anopheles coluzzii mosquitoes were marked using three different colour dye powders (red, blue and green) and released in three different locations in the village (centre, edge and outside). Mosquitoes were recaptured at sites spread over the village for seven consecutive days following the releases. Three different capture methods were used: clay pots, pyrethroid spray catches and swarm sampling. Swarm sampling was the most productive method for recapturing male mosquitoes in the field. Population size and survival were estimated by Bayesian analyses of the Fisher-Ford model, revealing an approximately 10-fold increase in population size estimates from the end of the dry season (10,000-50,000) to the wet season (100,000-500,000). There were no detectable seasonal effects on mosquito survival, suggesting that factors other than weather may play an important role. Mosquito dispersal ranged from 40 to 549 m over the seven days of each study and was not influenced by the season, but mainly by the release location, which explained more than 44% of the variance in net dispersal distance. This study clearly shows that male-based MRR experiments can be used to estimate parameters of wild male populations such as population size, survival, and dispersal, and to estimate the spatial patterns of movement in a given locality.
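The study's Bayesian Fisher-Ford analysis has a much simpler single-session counterpart, Chapman's bias-corrected Lincoln-Petersen estimator, sketched below with invented numbers of the same order as the wet-season results.

```python
def chapman_estimate(n_marked, n_caught, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population
    size from a single mark-release-recapture session:
    N ~ (M + 1) * (C + 1) / (R + 1) - 1.
    The Fisher-Ford analysis in the study additionally models daily
    survival across the seven recapture days; this is the simplest
    single-recapture counterpart."""
    return (n_marked + 1) * (n_caught + 1) / (n_recaptured + 1) - 1

# Hypothetical wet-season numbers: 5,000 marked males released,
# 2,000 males sampled, 45 of them carrying dye marks.
print(f"N ~ {chapman_estimate(5000, 2000, 45):,.0f}")
```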
Lockheed L-1011 avionic flight control redundant systems
NASA Technical Reports Server (NTRS)
Throndsen, E. O.
1976-01-01
The Lockheed L-1011 automatic flight control systems - yaw stability augmentation and automatic landing - are described in terms of their redundancies. The reliability objectives for these systems are discussed and related to in-service experience. In general, the availability of the stability augmentation system is higher than the original design requirement, but is commensurate with early estimates. The in-service experience with automatic landing is not yet sufficient to verify the estimated availability of the Category 3 automatic landing system.
Proceedings of Technical Sessions, Volumes 1 and 2: the LACIE Symposium
NASA Technical Reports Server (NTRS)
1979-01-01
The technical design of the Large Area Crop Inventory Experiment is examined and data acquired over 3 global crop years is analyzed with respect to (1) sampling and aggregation; (2) growth size estimation; (3) classification and mensuration; (4) yield estimation; and (5) accuracy assessment. Seventy-nine papers delivered at conference sessions cover system implementation and operation; data processing systems; experiment results and accuracy; supporting research and technology; and the USDA application test system.
Planned Axial Reorientation Investigation on Sloshsat
NASA Technical Reports Server (NTRS)
Chato, David J.
2000-01-01
This paper details the design and logic of an experimental investigation to study axial reorientation in low gravity. The Sloshsat free-flyer is described. The planned axial reorientation experiments and test matrices are presented. Existing analytical tools are discussed. Estimates for settling range from 64 to 1127 seconds. The planned experiments are modelled using computational fluid dynamics. These models show promise in reducing settling estimates and demonstrate the ability of pulsed high-thrust settling to emulate lower-thrust continuous firing.
Ryan, Mandy; Kinghorn, Philip; Entwistle, Vikki A; Francis, Jill J
2014-04-01
Healthcare policy leaders internationally recognise that people's experiences of healthcare delivery are important, and invest significant resources to monitor and improve them. However, the value of particular aspects of experiences of healthcare delivery - relative to each other and to other healthcare outcomes - is unclear. This paper considers how economic techniques have been and might be used to generate quantitative estimates of the value of particular experiences of healthcare delivery. A recently published conceptual map of patients' experiences served to guide the scope and focus of the enquiry. The map represented both what health services and staff are like and do and what individual patients can feel like, be and do (while they are using services and subsequently). We conducted a systematic search for applications of economic techniques to healthcare delivery. We found that these techniques have been quite widely used to estimate the value of features of healthcare systems and processes (e.g. of care delivery by a nurse rather than a doctor, or of a consultation of 10 minutes rather than 15 minutes), but much less to estimate the value of the implications of these features for patients personally. To inform future research relating to the valuation of experiences of healthcare delivery, we organised a workshop for key stakeholders. Participants undertook and discussed 'exercises' that explored the use of different economic techniques to value descriptions of healthcare delivery that linked processes to what patients felt like and were able to be and do. The workshop identified a number of methodological issues that need careful attention, and highlighted some important concerns about the ways in which quantitative estimates of the value of experiences of healthcare delivery might be used. However the workshop confirmed enthusiasm for efforts to attend directly to the implications of healthcare delivery from patients' perspectives, including in terms of their capabilities. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3-6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model in which the number of samples contained in a ball around an inspection point scales as a power of the ball's radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we show experimentally that the proposed method outperforms a conventional local dimension estimation method.
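A standard estimator in the same spirit, Levina and Bickel's maximum-likelihood local dimension estimator, makes the ball-growth idea concrete; the sketch below shows that related textbook method, not necessarily the authors' exact formulation.

```python
import numpy as np

def local_intrinsic_dimension(data, x, k=10):
    """Maximum-likelihood local intrinsic dimension around inspection point x
    (Levina-Bickel estimator), based on how the number of neighbors grows
    with distance:
        m(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1},
    where T_j is the distance from x to its j-th nearest neighbor."""
    d = np.sort(np.linalg.norm(data - x, axis=1))
    d = d[d > 0][:k]                 # k nearest neighbors, excluding x itself
    return 1.0 / np.mean(np.log(d[-1] / d[:-1]))

# Points on a 2-D plane embedded in 5-D: the estimate should be near 2
rng = np.random.default_rng(8)
basis = rng.normal(size=(2, 5))
cloud = rng.normal(size=(2000, 2)) @ basis
print(local_intrinsic_dimension(cloud, cloud[0]))
```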
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
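A minimal sketch of a limited-translation empirical Bayes estimator under the classic normal-normal setup: shrink each survey estimate toward the grand mean, but cap the movement at a fixed number of standard errors. The data and cap are illustrative assumptions, not the waterfowl survey values.

```python
import numpy as np

def limited_translation_eb(estimates, se, max_shift=1.0):
    """Limited-translation empirical Bayes: shrink each survey estimate
    toward the overall mean by the usual EB factor, but never move any
    single estimate by more than max_shift standard errors. The shrinkage
    weights follow the normal-normal EB setup; sketch only."""
    grand_mean = estimates.mean()
    between_var = max(estimates.var(ddof=1) - np.mean(se**2), 0.0)
    shrink = between_var / (between_var + se**2)           # per-survey weight
    eb = grand_mean + shrink * (estimates - grand_mean)
    cap = max_shift * se
    return estimates + np.clip(eb - estimates, -cap, cap)  # limit the translation

counts = np.array([120., 95., 210., 60., 150.])   # hypothetical survey estimates
se = np.array([30., 25., 40., 20., 35.])
print(limited_translation_eb(counts, se))
```

The cap is what distinguishes the "simple limited-translation version" mentioned above from plain shrinkage: it protects genuinely unusual surveys from being pulled too far toward the mean.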
Disequilibrium and human capital in pharmacy labor markets: evidence from four states.
Cline, Richard R
2003-01-01
Objective: To estimate the association between pharmacists' stocks of human capital (work experience and education), practice setting, demographics, and wage rates in the overall labor market, and to estimate the association between these same variables and wage rates within six distinct pharmacy employment sectors. Wage estimation is used as a proxy measure of demand for pharmacists' services. Design: Descriptive survey analysis of data collected with cross-sectional mail surveys conducted in four states. Setting: Illinois, Minnesota, Ohio, and Wisconsin. Participants: Licensed pharmacists working 30 or more hours per week. Main Outcome Measures: Hourly wage rates for all pharmacists working 30 or more hours per week, and hourly wage rates for pharmacists employed in large chain, independent, mass-merchandiser, hospital, health maintenance organization (HMO), and other settings. Results: A total of 2,235 responses were received, for an adjusted response rate of 53.1%. Application of exclusion criteria left 1,450 responses from full-time pharmacists to analyze. Wage estimations in the pooled sample and for pharmacists in the hospital setting suggest that advanced training and years of experience are positively associated with higher hourly wages. Years of experience were also positively associated with higher wages in independent and other settings, while neither advanced education nor experience was related to wages in large chain, mass-merchandiser, or HMO settings. Conclusion: Overall, the market for full-time pharmacists' labor is competitive, and employers pay wage premiums to those with larger stocks of human capital, especially advanced education and more years of pharmacy practice experience. The evidence supports the hypothesis that demand is exceeding supply in select employment sectors.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments.
Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia
2016-09-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
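The core estimation step lends itself to a compact illustration: under exponentially growing frequency differences, the log mutant-to-reference count ratio is linear in time and its slope is the selection coefficient. A hedged sketch (not the authors' exact estimator, which also accounts for sequencing noise; assumes nonzero counts):

```python
import numpy as np

def selection_coefficient(counts_mut, counts_ref, times):
    """Least-squares selection coefficient from time-sampled bulk counts.

    Fits log(mutant/reference count ratio) against sampling time;
    the slope is the selection coefficient per unit time.
    """
    logratio = np.log(np.asarray(counts_mut, float) /
                      np.asarray(counts_ref, float))
    slope, _intercept = np.polyfit(np.asarray(times, float), logratio, 1)
    return slope
```

Sampling more time points adds rows to this regression, which is why it tightens the slope estimate more effectively than deeper sequencing of the same few points.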
Pain, Oliver; Dudbridge, Frank; Cardno, Alastair G; Freeman, Daniel; Lu, Yi; Lundstrom, Sebastian; Lichtenstein, Paul; Ronald, Angelica
2018-03-31
This study aimed to test for overlap in genetic influences between psychotic-like experience traits shown by adolescents in the community, and clinically-recognized psychiatric disorders in adulthood, specifically schizophrenia, bipolar disorder, and major depression. The full spectra of psychotic-like experience domains, both in terms of their severity and type (positive, cognitive, and negative), were assessed using self- and parent-ratings in three European community samples aged 15-19 years (Final N incl. siblings = 6,297-10,098). A mega-genome-wide association study (mega-GWAS) for each psychotic-like experience domain was performed. Single nucleotide polymorphism (SNP)-heritability of each psychotic-like experience domain was estimated using genomic-relatedness-based restricted maximum-likelihood (GREML) and linkage disequilibrium- (LD-) score regression. Genetic overlap between specific psychotic-like experience domains and schizophrenia, bipolar disorder, and major depression was assessed using polygenic risk score (PRS) and LD-score regression. GREML returned SNP-heritability estimates of 3-9% for psychotic-like experience trait domains, with higher estimates for less skewed traits (Anhedonia, Cognitive Disorganization) than for more skewed traits (Paranoia and Hallucinations, Parent-rated Negative Symptoms). Mega-GWAS analysis identified one genome-wide significant association for Anhedonia within IDO2, which did not replicate in an independent sample. PRS analysis revealed that the schizophrenia PRS significantly predicted all adolescent psychotic-like experience trait domains (Paranoia and Hallucinations only in non-zero scorers). The major depression PRS significantly predicted Anhedonia and Parent-rated Negative Symptoms in adolescence. Psychotic-like experiences during adolescence in the community show additive genetic effects and partly share genetic influences with clinically-recognized psychiatric disorders, specifically schizophrenia and major depression. © 2018 The Authors. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics Published by Wiley Periodicals, Inc.
Linguistic and Perceptual Mapping in Spatial Representations: An Attentional Account.
Valdés-Conroy, Berenice; Hinojosa, José A; Román, Francisco J; Romero-Ferreiro, Verónica
2018-03-01
Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in Near-space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms. Copyright © 2017 Cognitive Science Society, Inc.
Water vapour tomography using GPS phase observations: Results from the ESCOMPTE experiment
NASA Astrophysics Data System (ADS)
Nilsson, T.; Gradinarsky, L.; Elgered, G.
2007-10-01
Global Positioning System (GPS) tomography is a technique for estimating the 3-D structure of atmospheric water vapour using data from a dense local network of GPS receivers. Several current methods utilize estimates of slant wet delays between the GPS satellites and the receivers on the ground, which are difficult to obtain with millimetre accuracy from the GPS observations. We present results of applying a new tomographic method to GPS data from the Expérience sur site pour contraindre les modèles de pollution atmosphérique et de transport d'émissions (ESCOMPTE) experiment in southern France. This method does not rely on any slant wet delay estimates; instead, it uses the GPS phase observations directly. We show that the wet refractivity profiles estimated by this method are at the same accuracy level as, or better than, those from other tomographic methods. The results are in agreement with earlier simulations; for example, the profile information is limited above 4 km.
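For contrast with the slant-delay-based methods the abstract mentions, the conventional voxel inversion step can be sketched in a few lines (a generic Tikhonov-regularized least-squares solve under stated assumptions, not the paper's phase-based method; the design matrix A is assumed precomputed from ray geometry):

```python
import numpy as np

def tomographic_inversion(A, y, reg=1e-2):
    """Solve a discretized tomography system for voxel wet refractivities.

    A   : (n_obs, n_voxels) matrix of ray-path lengths through each voxel
    y   : observed slant wet delays (or comparable integrated quantities)
    reg : Tikhonov weight stabilizing voxels crossed by few or no rays
    """
    n = A.shape[1]
    lhs = A.T @ A + reg * np.eye(n)
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)
```

The regularization term is what keeps the solution bounded in poorly sampled voxels, e.g. those above the tallest usable ray crossings, which is consistent with the limited profile information above 4 km noted in the abstract.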
Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc
2016-01-01
Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
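As a first-order check of the kind such a model refines, the classical Helmholtz-resonator formula gives a rough natural frequency for a tube-plus-transducer-volume system. This is a back-of-envelope sketch, not the paper's model; the 1.7-radius end correction is a common approximation:

```python
import numpy as np

def helmholtz_frequency(c, tube_radius, tube_length, transducer_volume):
    """Rough natural frequency (Hz) of a pressure-probe line system.

    c                 : speed of sound in the working gas (m/s)
    tube_radius/length: connecting-line geometry (m)
    transducer_volume : internal cavity volume at the transducer (m^3)
    """
    area = np.pi * tube_radius**2
    l_eff = tube_length + 1.7 * tube_radius   # approximate end correction
    return (c / (2 * np.pi)) * np.sqrt(area / (transducer_volume * l_eff))
```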
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger than anticipated design work on software to be ported.
Counteracting estimation bias and social influence to improve the wisdom of crowds.
Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D
2018-04-01
Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
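The aggregation comparison can be illustrated compactly. The geometric mean shown here is a common compromise between the over-estimating mean and under-estimating median for right-skewed numerosity estimates; the paper's corrected measures instead use empirically measured bias functions, which are not reproduced in this sketch:

```python
import numpy as np

def crowd_aggregates(estimates):
    """Three simple aggregators for positive-valued crowd estimates."""
    est = np.asarray(estimates, float)
    return {
        "mean":    est.mean(),                  # tends to overestimate
        "median":  np.median(est),              # tends to underestimate
        "geomean": np.exp(np.log(est).mean()),  # compromise for skewed data
    }
```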
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied for frontal facial images. However, such facial images are difficult to obtain consistently in application systems for security, surveillance, and marketing research. Building such systems requires a method to estimate gender from images of various facial poses. In this paper, three different classifiers using four directional features (FDF) are compared for appearance-based gender estimation: linear discriminant analysis (LDA), Support Vector Machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with directions varying +/-45 degrees horizontally and +/-30 degrees vertically at 15 degree intervals. Although LDA showed the best performance for frontal facial images, an SVM with a Gaussian kernel performed best (86.0%) over the facial images from all 35 viewpoints. These results suggest that an SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the per-viewpoint estimation rates were quite close to the average rate over the 35 viewpoints, suggesting that gender can reasonably be estimated within the experimented range of viewpoints by training a single classifier on face images from multiple directions.
NASA Astrophysics Data System (ADS)
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimation, through change in the moisture holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of an MP estimation approach that utilizes a physically-based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two severest maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, the maximum 72-h basin-average precipitation over the ARW increased monotonically with air temperature rise for both storm events. The second numerical experiment used specific amounts of air temperature rise assumed to occur under future climate change conditions; air temperature was increased by those amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% in the middle of the 21st century and by 27.3% at the end of the 21st century compared to the historical period.
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Multifrequency passive microwave observations of soil moisture in an arid rangeland environment
NASA Technical Reports Server (NTRS)
Jackson, T. J.; Schmugge, T. J.; Parry, R.; Kustas, W. P.; Ritchie, J. C.; Shutko, A. M.; Khaldin, A.; Reutov, E.; Novichikhin, E.; Liberman, B.
1992-01-01
A cooperative experiment was conducted by teams from the U.S. and U.S.S.R. to evaluate passive microwave instruments and algorithms used to estimate surface soil moisture. Experiments were conducted as part of an interdisciplinary experiment in an arid rangeland watershed located in the southwest United States. Soviet microwave radiometers operating at wavelengths of 2.25, 21 and 27 cm were flown on a U.S. aircraft. Radio frequency interference limited usable data to the 2.25 and 21 cm systems. Data have been calibrated and compared to ground observations of soil moisture. These analyses showed that the 21 cm system could produce reliable and useful soil moisture information and that the 2.25 cm system was of no value for soil moisture estimation in this experiment.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
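A minimal sketch of the rectified-and-smoothed approximation described above, assuming zero-mean Gaussian noise (for which E|x| = σ√(2/π)) and matching inverse-gamma moments (mean = b/(a-1), variance = b²/((a-1)²(a-2))); window length and names are illustrative, and the paper's marginal-likelihood procedure is not reproduced:

```python
import numpy as np

def fit_variance_distribution(emg, fs, win_ms=100):
    """Moment-matched inverse-gamma fit to local EMG variance.

    emg : raw EMG samples; fs : sampling rate (Hz); win_ms : smoothing window.
    Returns (shape a, scale b) of the fitted inverse-gamma distribution.
    """
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    smoothed = np.convolve(np.abs(emg), kernel, mode="valid")
    var_est = (np.pi / 2) * smoothed**2      # local variance samples
    m, v = var_est.mean(), var_est.var()
    a = m**2 / v + 2                         # from var/mean^2 = 1/(a-2)
    b = m * (a - 1)                          # from mean = b/(a-1)
    return a, b
```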
NASA Technical Reports Server (NTRS)
Zoller, L. K.
1982-01-01
Suggested program of material processing experiments in space described in 81 page report. For each experiment, report discusses influence of such gravitational effects as convection, buoyancy, sedimentation, and hydrostatic pressure. Report contains estimates of power and mission duration required for each experiment. Lists necessary equipment and appropriate spacecraft.
An Investigation of Milk Sugar.
ERIC Educational Resources Information Center
Smith, Christopher A.; Dawson, Maureen M.
1987-01-01
Describes an experiment to identify lactose and estimate the concentration of lactose in a sample of milk. Gives a background of the investigation. Details the experimental method, results and calculations. Discusses the implications of the experiment to students. Suggests further experiments using the same technique used in…
Status of Animal Experiments on International Space Station, and Animal Care Activities in Japan
NASA Astrophysics Data System (ADS)
Izumi, Ryutaro; Ishioka, Noriaki; Yumoto, Akane; Ito, Isao; Shirakawa, Masaki
We introduce the status of Japanese animal experiments on the International Space Station (ISS). The Aquatic Habitat (AQH), which can house small fish (medaka or zebrafish) for up to three months, was launched in July 2012 by the H-II Transfer Vehicle (HTV, 'Kounotori') from Tanegashima island in Japan. The first experiment using AQH was carried out for two months beginning October 26, 2012, and the second experiment was scheduled to start in February 2014. Mouse housing hardware is now under development. Regarding animal care activities, a current topic in Japan is self-assessment of animal experiment status by each institute, with the results made public. JAXA conducted its first such self-assessment for fiscal year 2011 (April 2011 through March 2012) and will continue every fiscal year. JAXA already has its own animal care regulations, under Japanese animal care law and policy, and also refers to the COSPAR animal care guidelines. This year, JAXA produced a handbook for animal experiments in space (Japanese only).
A mass-density model can account for the size-weight illusion
Bergmann Tiest, Wouter M.; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object’s mass, and the other from the object’s density, with estimates’ weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects’ density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object’s density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception. PMID:29447183
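The weighting scheme described is the standard reliability-weighted (MLE) cue combination; a minimal sketch assuming independent noise (the paper's model additionally allows correlated noise between the two estimates, which is omitted here):

```python
def combined_heaviness(mass_est, dens_est, var_mass, var_dens):
    """MLE combination of two heaviness cues with independent noise.

    Each cue is weighted by its relative reliability (inverse variance),
    so the less noisy estimate dominates the combined judgment.
    """
    w_mass = (1 / var_mass) / (1 / var_mass + 1 / var_dens)
    return w_mass * mass_est + (1 - w_mass) * dens_est
```

This structure reproduces the paper's qualitative prediction: improving the quality of volume information lowers var_dens, shifts weight toward the density estimate, and biases heaviness judgments toward denser (smaller) objects.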
Shear wave velocity imaging using transient electrode perturbation: phantom and ex vivo validation.
DeWall, Ryan J; Varghese, Tomy; Madsen, Ernest L
2011-03-01
This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures.
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
Estimating the circuit delay of FPGA with a transfer learning method
NASA Astrophysics Data System (ADS)
Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan
2017-10-01
As FPGA (Field Programmable Gate Array) functionality has increased, the FPGA has become an on-chip system platform, and its growing complexity makes estimating circuit delay very challenging. To address this, we propose a transfer learning estimation delay (TLED) method that simplifies delay estimation across FPGAs of different speed grades. FPGAs of the same style but different speed grades come from the same process and layout, so their delays are correlated. Therefore, one speed grade is chosen as the source of basic training samples in this paper; training samples for other speed grades are obtained from the basic samples through transfer learning. A few samples from the target FPGA are also selected as training samples, and a general predictive model is trained on all of them. Thus, a single estimation model is used to estimate circuit delay for FPGAs of different speed grades. The TLED framework includes three phases: 1) building a basic circuit delay library that includes multipliers, adders, shifters, and so on, whose circuits are used to train and build the predictive model; 2) selecting the random forest algorithm to train the predictive model, based on contrast experiments among different algorithms; 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 were selected for the experiments, each including the -1, -2, -2l, and -3 speed grades. The experiments show a delay estimation accuracy score of more than 92% with the TLED method. This result shows that TLED is a feasible, efficient, and effective delay assessment method, especially in the high-level synthesis stage of FPGA tools.
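A hedged sketch of the pooling idea (data names and shapes are hypothetical; the paper's exact feature set and transfer scheme are not reproduced): train one random-forest regressor on many base-speed-grade samples plus a few target-grade samples, with the speed grade encoded as a feature so a single model serves all grades.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_tled_style_model(X_base, y_base, X_tgt, y_tgt):
    """One delay model over pooled base + target speed-grade samples.

    X_base, y_base : many (features, delay) samples from the base grade
    X_tgt, y_tgt   : a few samples from the target grade; each feature
                     vector is assumed to include a speed-grade indicator
    """
    X = np.vstack([X_base, X_tgt])
    y = np.concatenate([y_base, y_tgt])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model
```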
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Human-experienced temperature changes exceed global average climate changes for all income groups
NASA Astrophysics Data System (ADS)
Hsiang, S. M.; Parshall, L.
2009-12-01
Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC's 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population. [Figure: top, the distribution of temperature changes experienced by the world population between 2011-2030 and 2080-2099; lower three panels, temperatures experienced in 2011-2030 (dashed, circle = mean) and 2080-2099 (solid, cross = mean) by income tercile. The poor do not experience larger changes than the wealthy, but begin the 21st century at higher temperatures.]
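The core computation, population-weighted statistics of gridded temperature change, can be sketched in a few lines (array names are illustrative; delta_t and population are flattened grid-cell values):

```python
import numpy as np

def population_weighted_quantiles(delta_t, population, qs=(0.5, 0.75, 0.9)):
    """Population-weighted quantiles of per-cell temperature change."""
    order = np.argsort(delta_t)
    dt, w = delta_t[order], population[order]
    cdf = np.cumsum(w) / w.sum()
    return {q: dt[np.searchsorted(cdf, q)] for q in qs}

# The paper's headline contrast is between two weightings of the same field:
#   np.average(delta_t, weights=cell_area)    # area-weighted global mean
#   np.average(delta_t, weights=population)   # human-experienced mean
```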
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
JASMINE project Instrument design and centroiding experiment
NASA Astrophysics Data System (ADS)
Yano, Taihei; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki
JASMINE will study the fundamental structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μarcsec at z = 14 mag. In this paper the instrument design (optics, detectors, etc.) of JASMINE is presented. We also show a CCD centroiding experiment for estimating positions of star images. The experimental results show that the estimated distances between star images have a scatter of less than 0.01 pixel.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.
1994-01-01
Modified coupled-pair functional (MCPF) calculations and coupled cluster singles and doubles calculations, which include a perturbational estimate of the connected triples [CCSD(T)], yield a bent structure for CuCO, thus supporting the prediction of a nonlinear structure based on density functional (DF) calculations. Our best estimate for the binding energy is 4.9 +/- 1.4 kcal/mol; this is in better agreement with experiment (6.0 +/- 1.2 kcal/mol) than the DF approach, which yields a value (19.6 kcal/mol) significantly larger than experiment.
Subjective Word Frequency Estimates in L1 and L2.
ERIC Educational Resources Information Center
Arnaud, Pierre J. L.
A study investigated the usefulness of non-native speakers' subjective, relative word frequency estimates as a measure of second language proficiency. In the experiment, two subjective frequency estimate (SFE) tasks, one on French and one on English, were presented to French learners of English (n=126) and American learners of French (n=87).…
ERIC Educational Resources Information Center
Unlu, Fatih; Layzer, Carolyn; Clements, Douglas; Sarama, Julie; Cook, David
2013-01-01
Many educational Randomized Controlled Trials (RCTs) collect baseline versions of outcome measures (pretests) to be used in the estimation of impacts at posttest. Although pretest measures are not necessary for unbiased impact estimates in well executed experimental studies, using them increases the precision of impact estimates and reduces sample…
Time estimation by patients with frontal lesions and by Korsakoff amnesics.
Mimura, M; Kinsbourne, M; O'Connor, M
2000-07-01
We studied time estimation in patients with frontal damage (F) and alcoholic Korsakoff (K) patients in order to differentiate between the contributions of working memory and episodic memory to temporal cognition. In Experiment 1, F and K patients estimated time intervals between 10 and 120 s less accurately than matched normal and alcoholic control subjects. F patients were less accurate than K patients at short (< 1 min) time intervals whereas K patients increasingly underestimated durations as intervals grew longer. F patients overestimated short intervals in inverse proportion to their performance on the Wisconsin Card Sorting Test. As intervals grew longer, overestimation yielded to underestimation for F patients. Experiment 2 involved time estimation while counting at a subjective 1/s rate. F patients' subjective tempo, though relatively rapid, did not fully explain their overestimation of short intervals. In Experiment 3, participants produced predetermined time intervals by depressing a mouse key. K patients underproduced longer intervals. F patients produced comparably to normal participants, but were extremely variable. Findings suggest that both working memory and episodic memory play an individual role in temporal cognition. Turnover within a short-term working memory buffer provides a metric for temporal decisions. The depleted working memory that typically attends frontal dysfunction may result in quicker turnover, and this may inflate subjective duration. On the other hand, temporal estimation beyond 30 s requires episodic remembering, and this puts K patients at a disadvantage.
Detection of Catalysis by Taste.
ERIC Educational Resources Information Center
Richman, Robert M.; Villaescusa, Warren
1998-01-01
Outlines the development of a kinetic study using the enzyme Lactaid to hydrolyze the synthetic substrate of lactose into an undergraduate laboratory experiment. Provides students with experience doing order-of-magnitude estimates. (DDR)
Thomas P Holmes; Wiktor L Adamawicz; Fredrik Carlsson
2017-01-01
There has been an explosion of interest during the past two decades in a class of nonmarket stated-preference valuation methods known as choice experiments. The overall objective of a choice experiment is to estimate economic values for characteristics (or attributes) of an environmental good that is the subject of policy analysis, where...
Methods for estimating expected blood alcohol concentration.
DOT National Transportation Integrated Search
1980-12-01
Estimates of blood alcohol concentration (BAC) typically are based on the amount of alcohol consumed per pound body weight. This method fails to consider food intake and body composition, which significantly affect BAC. A laboratory experiment was co...
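For reference, the classic Widmark calculation that such weight-based estimates rest on is sketched below (a textbook formula, not necessarily the report's exact method; the r and β values are typical assumptions, and the report's refinements for food intake and body composition are not modeled):

```python
def widmark_bac(alcohol_grams, body_mass_kg, hours, female=False):
    """Classic Widmark estimate of blood alcohol concentration (per mille).

    r    : Widmark distribution factor (~0.68 for men, ~0.55 for women)
    beta : alcohol elimination rate (~0.15 per mille per hour)
    """
    r = 0.55 if female else 0.68
    beta = 0.15
    return max(alcohol_grams / (body_mass_kg * r) - beta * hours, 0.0)
```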
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1976-01-01
Major activities included coding and verifying equations of motion for the earth-moon system. Some attention was also given to numerical integration methods and parameter estimation methods. Existing analytical theories such as Brown's lunar theory, Eckhardt's theory for lunar rotation, and Newcomb's theory for the rotation of the earth were coded and verified. These theories serve as checks for the numerical integration. Laser ranging data for the period January 1969 - December 1975 was collected and stored on tape. The main goal of this research is the development of software to enable physical parameters of the earth-moon system to be estimated making use of data available from the Lunar Laser Ranging Experiment and the Very Long Base Interferometry experiment of project Apollo. A more specific goal is to develop software for the estimation of certain physical parameters of the moon such as inertia ratios, and the third and fourth harmonic gravity coefficients.
Improving the accuracy of burn-surface estimation.
Nichter, L S; Williams, J; Bryant, C A; Edlich, R F
1985-09-01
A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size overestimation (p less than 0.01) and large interrater variability of potential consequence were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.
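For context, the conventional baseline such a computer-assisted system improves on is the rule-of-nines chart; a minimal sketch of that baseline calculation (adult body-region fractions, illustrative only, not the paper's method):

```python
# Adult rule-of-nines: percent of total body surface area per region.
RULE_OF_NINES = {
    "head": 9, "left_arm": 9, "right_arm": 9,
    "left_leg": 18, "right_leg": 18,
    "anterior_trunk": 18, "posterior_trunk": 18, "perineum": 1,
}

def tbsa_burned(burned_fraction_by_region):
    """Percent TBSA burned from per-region burned fractions (0..1)."""
    return sum(RULE_OF_NINES[region] * fraction
               for region, fraction in burned_fraction_by_region.items())

# Example: half the head plus the full anterior trunk.
# tbsa_burned({"head": 0.5, "anterior_trunk": 1.0})  -> 22.5
```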
A Discordance Weighting Approach Estimating Occupational and Income Returns to Education.
Andersson, Matthew A
2018-04-23
Schooling differences between identical twins are often utilized as a natural experiment to estimate returns to education. Despite longstanding doubts about the truly random nature of within-twin-pair schooling discordance, such discordance has not yet been understood comprehensively, in terms of diverse between- and within-family peer, academic, familial, social, and health exposures. Here, a predictive analysis using national U.S. midlife twin data shows that within-pair schooling differences are endogenous to a variety of childhood exposures. Using discordance propensities, returns to education under a true natural experiment are simulated. Results for midlife occupation and income reveal differences in estimated returns to education that are statistically insignificant, suggesting that twin-based estimates of causal effects are robust. Moreover, identical and fraternal twins show similar levels of discordance endogeneity and similar responses to propensity weighting, suggesting that the identical twins may not provide demonstrably better leverage in the causal identification of educational returns.
A feature-based inference model of numerical estimation: the split-seed effect.
Murray, Kyle B; Brown, Norman R
2009-07-01
Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.
Constitutive Modeling of Porcine Liver in Indentation Using 3D Ultrasound Imaging
Jordan, P.; Socrate, S.; Zickler, T.E.; Howe, R.D.
2009-01-01
In this work we present an inverse finite-element modeling framework for constitutive modeling and parameter estimation of soft tissues using full-field volumetric deformation data obtained from 3D ultrasound. The finite-element model is coupled to full-field visual measurements by regularization springs attached at nodal locations. The free ends of the springs are displaced according to the locally estimated tissue motion and the normalized potential energy stored in all springs serves as a measure of model-experiment agreement for material parameter optimization. We demonstrate good accuracy of estimated parameters and consistent convergence properties on synthetically generated data. We present constitutive model selection and parameter estimation for perfused porcine liver in indentation and demonstrate that a quasilinear viscoelastic model with shear modulus relaxation offers good model-experiment agreement in terms of indenter displacement (0.19 mm RMS error) and tissue displacement field (0.97 mm RMS error). PMID:19627823
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korbin, G.; Wollenberg, H.; Wilson, C.
Plans for an underground research facility are presented, incorporating techniques to assess the hydrological and thermomechanical response of a rock mass to the introduction and long-term isolation of radioactive waste, and to assess the effects of excavation on the hydrologic integrity of a repository and its subsequent backfill, plugging, and sealing. The project is designed to utilize existing mine or civil works for access to experimental areas and is estimated to last 8 years at a total cost for construction and operation of $39.0 million (1981 dollars). Performing the same experiments in an existing underground research facility would reduce the duration to 7-1/2 years and cost $27.7 million as a lower-bound estimate. These preliminary plans and estimates should be revised after specific sites are identified which would accommodate the facility.
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang
2017-07-01
Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of the targets is always accompanied with vibrating interference, and it will significantly affect the parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, the time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced and experiments on rotating corner reflectors are carried out in this paper. The results of the simulation and experiments verify the validity of the methods, which lay a foundation for the practical processing of the terahertz radar.
NASA Astrophysics Data System (ADS)
Beyer, W. K. G.
The estimation accuracy of the group delay measured in a single video frequency band was analyzed as a function of the system bandwidth and the signal-to-noise ratio. Very long baseline interferometry (VLBI) measurements from geodetic experiments were used to check the geodetic applicability of the Mark 2 evaluation system. The geodetic observation quantities and the correlation geometry are introduced. The data flow in the VLBI experiment, the correlation analysis, the analysis and evaluation in the MK2 system, and the delay estimation procedure following the least squares method are presented. It is shown that the MK2 system is no longer up to date for geodetic applications. The superiority of the developed estimation method with respect to the interpolation algorithm is demonstrated. The numerical investigations show the deleterious influence of distorting bit shift effects.
Simple Experiment for Studying the Properties of a Ferromagnetic Material.
ERIC Educational Resources Information Center
Sood, B. R.; And Others
1980-01-01
Describes an undergraduate physics experiment for studying Curie temperature and Curie constant of a ferromagnetic material. The exchange field (Weiss field) has been estimated by using these parameters. (HM)
Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A
2018-01-01
Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time and cost efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in the estimation of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured by comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
VLBI geodesy - 2 parts-per-billion precision in length determinations for transcontinental baselines
NASA Technical Reports Server (NTRS)
Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1988-01-01
VLBI was used to make twenty-two independent measurements, between September 1984 and December 1986, of the length of the 3900-km baseline between the Mojave site in California and the Haystack/Westford site in Massachusetts. These experiments differ from typical geodetic VLBI experiments in that a large fraction of the observations is obtained at elevation angles between 4 and 10 deg. Data from these low elevation angles allow the vertical coordinate of site position, and hence the baseline length, to be estimated with greater precision. For the sixteen experiments processed thus far, the weighted root-mean-square scatter of the estimates of the baseline length is 8 mm.
Mroz, T A
1999-10-01
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
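The core of the simulation/nonlinear-regression idea, a transport solution embedded in weighted least squares with reliability read off the linearized covariance, can be sketched as follows. This minimal example substitutes the first term of the Ogata-Banks analytic step-input solution for the authors' finite-difference simulator; the observation point, parameter values, and error level are all hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

x0 = 0.30   # observation point along the soil column (m); hypothetical

def ade_breakthrough(t, v, D):
    # first term of the Ogata-Banks solution for a continuous step input:
    # C/C0 = 0.5 * erfc((x - v t) / (2 sqrt(D t)))
    return 0.5 * erfc((x0 - v * t) / (2.0 * np.sqrt(D * t)))

# synthetic "observed" breakthrough curve with random measurement error
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 25)                                   # days
c_obs = ade_breakthrough(t, 0.4, 0.02) + rng.normal(0.0, 0.05, t.size)

# weighted nonlinear least squares; pcov is the linearized covariance
popt, pcov = curve_fit(ade_breakthrough, t, c_obs, p0=[0.5, 0.01],
                       sigma=np.full_like(t, 0.05), absolute_sigma=True)
v_hat, D_hat = popt
se_v, se_D = np.sqrt(np.diag(pcov))     # reliability (standard errors)
```

The same machinery extends to temporal versus spatial sampling designs simply by changing which coordinate is varied in the data.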
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. LACIE acreage estimates were in close agreement with SRS estimates, and an operational system with a 14 day LANDSAT data turnaround could have produced an accurate acreage estimate (one which satisfied the 90/90 criterion) 1 1/2 to 2 months before harvest. Low yield estimates, resulting from agromet conditions not taken into account in the yield models, caused production estimates to be correspondingly low. However, both yield and production estimates satisfied the LACIE 90/90 criterion for winter wheat in the yardstick region.
Horton, Keith D; Wilson, Daryl E; Vonk, Jennifer; Kirby, Sarah L; Nielsen, Tina
2005-07-01
Using the stem completion task, we compared estimates of automatic retrieval from an implicit memory task, the process dissociation procedure, and the speeded response procedure. Two standard manipulations were employed. In Experiment 1, a depth of processing effect was found on automatic retrieval using the speeded response procedure although this effect was substantially reduced in Experiment 2 when lexical processing was required of all words. In Experiment 3, the speeded response procedure showed an advantage of full versus divided attention at study on automatic retrieval. An implicit condition showed parallel effects in each study, suggesting that implicit stem completion may normally provide a good estimate of automatic retrieval. Also, we replicated earlier findings from the process dissociation procedure, but estimates of automatic retrieval from this procedure were consistently lower than those from the speeded response procedure, except when conscious retrieval was relatively low. We discuss several factors that may contribute to the conflicting outcomes, including the evidence for theoretical assumptions and criterial task differences between implicit and explicit tests.
The 1980 US/Canada wheat and barley exploratory experiment, volume 1
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Prior, H. L.; Payne, R. W.; Disler, J. M.
1983-01-01
The results from the U.S./Canada Wheat and Barley Exploratory Experiment, which was completed during FY 1980, are presented. The results indicate that the new crop identification procedures performed well for spring small grains and that they are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology. However, the crop calendars will require additional development and refinement prior to integration into automated area estimation technology. The evaluation showed the integrated technology to be capable of producing accurate and consistent spring small grains proportion estimates. However, barley proportion estimation technology was not satisfactorily evaluated, and the low-density segments examined were judged not to give indicative or unequivocal results. It is concluded that, generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analyses across the variety of agricultural and meteorological conditions representative of the global environment. It is further concluded that a strong potential exists for establishing a highly efficient technology for spring small grains.
NASA Technical Reports Server (NTRS)
Sheppard, Albert P.; Wood, Joan M.
1976-01-01
Candidate experiments designed for the space shuttle transportation system and the long duration exposure facility are summarized. The data format covers: experiment title, experimenter, technical abstract, benefits/justification, technical discussion of the experiment approach and objectives, related work and experience, experiment facts (space properties used), environmental constraints, shielding requirements (if any), physical description, and a sketch of major elements. Information was also included on experiment hardware, research required to develop the experiment, special requirements, cost estimate, safety considerations, and interactions with the spacecraft and other experiments.
Method for six-legged robot stepping on obstacles by indirect force estimation
NASA Astrophysics Data System (ADS)
Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun
2016-07-01
Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established, and the force Jacobian matrix is derived from the kinematics model. Thus, the indirect force estimation model is established, and the relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step onto obstacles using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment was carried out to validate the indirect force estimation model, and the adaptive gait was tested in another experiment. Experiment results show that the robot can successfully step onto a 0.2 m-high obstacle. This approach allows the six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment at the robot's foot tips.
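The statics behind indirect force estimation can be made concrete in a few lines: in equilibrium the actuator torques tau and the external foot-tip force F are linked through the force Jacobian, tau = J^T F, so F follows from measured torques once J is known from the kinematic model. The sketch below is illustrative only; the Jacobian and torque values are made up, and the paper's UP-2UPS legs would supply their own J.

```python
import numpy as np

def estimate_foot_force(J, motor_torques):
    """Indirect force estimation from joint torques via leg statics:
    tau = J^T F  =>  F = (J^T)^+ tau. The pseudo-inverse guards against
    near-singular leg poses."""
    return np.linalg.pinv(J.T) @ motor_torques

# hypothetical Jacobian at the current leg pose (maps joint rates to
# foot-tip velocity) and motor torques inferred from motor currents
J = np.array([[0.12, 0.00, 0.05],
              [0.00, 0.11, 0.04],
              [0.03, 0.02, 0.09]])      # m
tau = np.array([2.4, 1.8, 3.1])         # N*m
F_est = estimate_foot_force(J, tau)     # N, external force on the foot tip
```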
A Virtual Reality Full Body Illusion Improves Body Image Disturbance in Anorexia Nervosa
Keizer, Anouk; van Elburg, Annemarie; Helms, Rossa; Dijkerman, H. Chris
2016-01-01
Background: Patients with anorexia nervosa (AN) have a persistently distorted experience of the size of their body. Previously we found that the Rubber Hand Illusion (RHI) improves hand size estimation in this group. Here we investigated whether a Full Body Illusion (FBI) affects body size estimation of body parts more emotionally salient than the hand. In the FBI, analogous to the RHI, participants experience ownership over an entire virtual body in VR after synchronous visuo-tactile stimulation of the actual and virtual body. Methods and Results: We asked participants to estimate their body size (shoulders, abdomen, hips) before the FBI was induced, directly after induction, and at ~2 hours 45 minutes follow-up. The results showed that AN patients (N = 30) decreased the overestimation of their shoulders, abdomen and hips directly after the FBI was induced. This effect was strongest for estimates of circumference, and was also observed in the asynchronous control condition of the illusion. Moreover, at follow-up, the improvements in body size estimation could still be observed in the AN group. Notably, the healthy control (HC) group (N = 29) also showed changes in body size estimation after the FBI, but the effect showed a different pattern than that of the AN group. Conclusion: The results lead us to conclude that the disturbed experience of body size in AN is flexible and can be changed, even for highly emotional body parts. As such, this study offers novel starting points from which new interventions for body image disturbance in AN can be developed. PMID:27711234
Dealing with gene expression missing data.
Brás, L P; Menezes, J C
2006-05-01
A comparative evaluation of different methods for estimating missing values in microarray data is presented: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation: ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation: MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS: APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure, therefore influencing the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
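For concreteness, a bare-bones row-wise KNN imputation in the spirit of KNNimpute (not the authors' exact implementation; k and the inverse-distance weighting are common defaults, not values from the paper) might look like this:

```python
import numpy as np

def knn_impute(X, k=10):
    """Fill missing entries of a genes-by-arrays matrix: each missing value
    is the inverse-distance-weighted mean of the same column in the k
    nearest rows, with distances computed over co-observed columns."""
    X = np.asarray(X, dtype=float)
    out, miss = X.copy(), np.isnan(X)
    for i in np.where(miss.any(axis=1))[0]:
        obs_i = ~miss[i]
        d = np.full(X.shape[0], np.inf)
        for j in range(X.shape[0]):
            if j == i:
                continue
            shared = obs_i & ~miss[j]
            if shared.sum() >= 2:
                d[j] = np.sqrt(np.mean((X[i, shared] - X[j, shared]) ** 2))
        for col in np.where(miss[i])[0]:
            cand = np.where(~miss[:, col] & np.isfinite(d))[0]
            nearest = cand[np.argsort(d[cand])][:k]
            if nearest.size == 0:
                out[i, col] = np.nanmean(X[:, col])   # fallback: column mean
                continue
            w = 1.0 / (d[nearest] + 1e-12)
            out[i, col] = np.sum(w * X[nearest, col]) / np.sum(w)
    return out

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.0],
              [0.9, 1.9, 2.8],
              [5.0, 6.0, 7.0]])
print(knn_impute(X, k=2))
```

The iterative variants (ILLSimpute, IPLSimpute) wrap such a step in a loop, re-using the freshly imputed matrix to refine neighbour selection.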
A Bayesian hierarchical model for discrete choice data in health care.
Antonio, Anna Liza M; Weiss, Robert E; Saigal, Christopher S; Dahan, Ely; Crespi, Catherine M
2017-01-01
In discrete choice experiments, patients are presented with sets of health states described by various attributes and asked to make choices from among them. Discrete choice experiments allow health care researchers to study the preferences of individual patients by eliciting trade-offs between different aspects of health-related quality of life. However, many discrete choice experiments yield data with incomplete ranking information and sparsity due to the limited number of choice sets presented to each patient, making it challenging to estimate patient preferences. Moreover, methods to identify outliers in discrete choice data are lacking. We develop a Bayesian hierarchical random effects rank-ordered multinomial logit model for discrete choice data. Missing ranks are accounted for by marginalizing over all possible permutations of unranked alternatives to estimate individual patient preferences, which are modeled as a function of patient covariates. We provide a Bayesian version of relative attribute importance, and adapt the use of the conditional predictive ordinate to identify outlying choice sets and outlying individuals with unusual preferences compared to the population. The model is applied to data from a study using a discrete choice experiment to estimate individual patient preferences for health states related to prostate cancer treatment.
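Stripped of the hierarchical and Bayesian machinery, the likelihood at the core of such a model is the rank-ordered ("exploded") multinomial logit, in which a ranking is decomposed into successive best-choices from the shrinking set of remaining alternatives; stopping after the observed top ranks corresponds to marginalizing over the orderings of the unranked alternatives. A minimal sketch with illustrative utilities:

```python
import numpy as np

def rank_ordered_logit_loglik(utilities, ranking):
    """Log-likelihood of one respondent's (possibly partial) ranking under
    a rank-ordered logit. `utilities` holds model utilities of all
    alternatives in the choice set; `ranking` lists alternative indices,
    best first, and may stop early for incomplete rankings."""
    v = np.asarray(utilities, dtype=float)
    remaining = list(range(v.size))
    loglik = 0.0
    for choice in ranking:
        loglik += v[choice] - np.logaddexp.reduce(v[remaining])
        remaining.remove(choice)
    return loglik

# three hypothetical health states; only the top two ranks were elicited
print(rank_ordered_logit_loglik([1.2, 0.3, -0.5], ranking=[0, 2]))
```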
Internal Clock Processes and the Filled-Duration Illusion
ERIC Educational Resources Information Center
Wearden, John H.; Norton, Roger; Martin, Simon; Montford-Bebb, Oliver
2007-01-01
In 3 experiments, the authors compared duration judgments of filled stimuli (tones) with unfilled ones (intervals defined by clicks or gaps in tones). Temporal generalization procedures (Experiment 1) and verbal estimation procedures (Experiments 2 and 3) all showed that subjective durations of the tones were considerably longer than those of…
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2017-08-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
FEDS - An experiment with a microprocessor-based orbit determination system using TDRS data
NASA Technical Reports Server (NTRS)
Shank, D.; Pajerski, R.
1986-01-01
An experiment in microprocessor-based onboard orbit determination has been conducted at NASA's Goddard Space Flight Center. The experiment collected forward-link observation data in real time from a prototype transponder and performed orbit estimation on a typical low-earth scientific satellite. This paper discusses the hardware and organizational configurations of the experiment, the structure of the onboard software, the mathematical models, and the experiment results.
Connor, L T; Balota, D A; Neely, J H
1992-05-01
Experiment 1 replicated Yaniv and Meyer's (1987) finding that lexical decision and episodic recognition performance was better for words previously yielding high accessibility levels (a combination of feeling-of-knowing and tip-of-the-tongue ratings) than for those yielding low accessibility levels in a rare word definition task. Experiment 2 yielded the same pattern even though lexical decisions preceded accessibility estimates by a full week. Experiment 3 dismissed the possibility that the Experiment 2 results were due to a long-term influence from the lexical decision task on the rare word judgment task. These results support a model in which subjects (a) retrieve topic familiarity information in making accessibility estimates in the rare word definition task and (b) use this information to modulate lexical decision performance.
A new estimate of average dipole field strength for the last five million years
NASA Astrophysics Data System (ADS)
Cromwell, G.; Tauxe, L.; Halldorsson, S. A.
2013-12-01
The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) in which the average field intensity is twice as strong at the poles as at the equator. The present day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based strictly on paleointensities from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier GUI paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows in which the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and temporal distributions, and objectively constrains site-level estimates by applying uniform selection requirements on measurement-level data. (1) Lawrence, K.P., L. Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. McIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude, Geochemistry Geophysics Geosystems, 10, 2009. (2) Selkin, P.A., L. Tauxe, Long-term variations in palaeointensity, Phil. Trans. R. Soc. Lond., 358, 1065-1088, 2000. (3) Shaar, R., L. Tauxe, Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments, Geochemistry Geophysics Geosystems, 14, 2013.
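The VADM computation underlying such comparisons simply inverts the geocentric-axial-dipole surface-field relation at the site's latitude; a small sketch follows (the constants are standard, while the example values are illustrative rather than results from the abstract).

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability (T*m/A)
R_EARTH = 6.371e6       # Earth radius (m)

def vadm(B_tesla, site_lat_deg):
    """Virtual axial dipole moment: invert the GAD relation
    B = mu0 * m * sqrt(1 + 3 sin^2(lat)) / (4 pi r^3)
    at the site's geographic latitude. Returns A*m^2
    (1 ZAm^2 = 1e21 A*m^2)."""
    lat = np.radians(site_lat_deg)
    return (4.0 * np.pi * R_EARTH**3 * B_tesla
            / (MU0 * np.sqrt(1.0 + 3.0 * np.sin(lat) ** 2)))

# e.g., a 30 microtesla paleointensity from a lava flow at 64 deg N
print(vadm(30e-6, 64.0) / 1e21, "ZAm2")
```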
Duncan, Greg J; Morris, Pamela A; Rodrigues, Chris
2011-09-01
Social scientists do not agree on the size and nature of the causal impacts of parental income on children's achievement. We revisit this issue using a set of welfare and antipoverty experiments conducted in the 1990s. We utilize an instrumental variables strategy to leverage the variation in income and achievement that arises from random assignment to the treatment group to estimate the causal effect of income on child achievement. Our estimates suggest that a $1,000 increase in annual income increases young children's achievement by 5%-6% of a standard deviation. As such, our results suggest that family income has a policy-relevant, positive impact on the eventual school achievement of preschool children.
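In its simplest just-identified form, this instrumental-variables strategy is a Wald ratio: the reduced-form effect of random assignment on achievement divided by its first-stage effect on income. The simulation below (all coefficients invented, loosely echoing the 5%-6% per $1,000 finding) shows why OLS would be biased by an unobserved confounder while the IV ratio recovers the assumed effect.

```python
import numpy as np

def wald_iv_estimate(z, d, y):
    """IV (Wald) estimate with random assignment z as the instrument for
    endogenous income d and outcome y: beta_IV = cov(z, y) / cov(z, d)."""
    return np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]

rng = np.random.default_rng(1)
n = 5_000
z = rng.integers(0, 2, n)                   # random program assignment
ability = rng.normal(size=n)                # unobserved confounder
d = 1.5 * z + ability + rng.normal(size=n)              # income ($1,000s)
y = 0.055 * d + 0.5 * ability + rng.normal(size=n)      # achievement (SD)
print(wald_iv_estimate(z, d, y))   # near 0.055; OLS of y on d is biased up
```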
NASA Astrophysics Data System (ADS)
Rasch, Philip J.; Wood, Robert; Ackerman, Thomas P.
2017-04-01
Anthropogenic aerosol impacts on clouds constitute the largest source of uncertainty in radiative forcing of climate, confounding estimates of climate sensitivity to increases in greenhouse gases. Projections of future warming are also thus strongly dependent on estimates of aerosol effects on clouds. I will discuss the opportunities for improving estimates of aerosol effects on clouds from controlled field experiments where aerosol with well understood size, composition, amount, and injection altitude could be introduced to deliberately change cloud properties. This would allow scientific investigation to be performed in a manner much closer to a lab environment, and facilitate the use of models to predict cloud responses ahead of time, testing our understanding of aerosol cloud interactions.
Caçola, Priscila; Gabbard, Carl
2012-04-01
This study examined age-related characteristics associated with tool use in the perception and modulation of peripersonal and extrapersonal space. Seventy-six (76) children in age groups of 7, 9, and 11 years and 36 adults were presented with two experiments using an estimation-of-reach paradigm involving arm and tool conditions and a switch-block of the opposite condition. Experiment 1 tested arm and tool (20 cm length) estimation and found a significant effect for Age and Space, and an Age × Space interaction (ps < 0.05). Both children and adults were less accurate in extrapersonal space, indicating an overestimation bias. Interestingly, the adjustment period during the switch-block condition was immediate and similar across ages. Experiment 2 was similar to Experiment 1 with the exception of using a 40-cm-length tool. Results also revealed an age effect and a difference in Space (ps < 0.05); however, participants underestimated. Speculatively, participants were less confident when presented with a longer tool, even though the adjustment period with both tool lengths was similar. Considered together, these results hint that: (1) children as young as 6 years of age are capable of being as accurate when estimating reach with a tool as they are with their arm, (2) the adjustment period associated with extending and retracting spaces is immediate rather than gradual, and (3) tool length influences estimations of reach.
Aerobic fitness, maturation, and training experience in youth basketball.
Carvalho, Humberto M; Coelho-e-Silva, Manuel J; Eisenmann, Joey C; Malina, Robert M
2013-07-01
Relationships among chronological age (CA), maturation, training experience, and body dimensions with peak oxygen uptake (VO2max) were considered in male basketball players 14-16 y of age. Data for all players included maturity status estimated as a percentage of predicted adult height attained at the time of the study (Khamis-Roche protocol), years of training, body dimensions, and VO2max (incremental maximal test on a treadmill). Proportional allometric models derived from stepwise regressions were used to incorporate either CA or maturity status and to incorporate years of formal training in basketball. Estimates for size exponents (95% CI) from the separate allometric models for VO2max were height 2.16 (1.23-3.09), body mass 0.65 (0.37-0.93), and fat-free mass 0.73 (0.46-1.02). Body dimensions explained 39% to 44% of the variance. The independent variables in the proportional allometric models explained 47% to 60% of the variance in VO2max. Estimated maturity status (11-16% of explained variance) and training experience (7-11% of explained variance) were significant predictors with either body mass or estimated fat-free mass (P ≤ .01) but not with height. Biological maturity status and training experience in basketball contributed significantly to VO2max via body mass and fat-free mass and also had an independent positive relation with aerobic performance. The results highlight the importance of considering variation associated with biological maturation in the aerobic performance of late-adolescent boys.
Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J
2008-01-01
Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered better than death) or negative (considered worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for the group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358
Saatkamp, Arne; Affre, Laurence; Dutoit, Thierry; Poschlod, Peter
2009-09-01
Seed survival in the soil contributes to population persistence and community diversity, creating a need for reliable measures of soil seed bank persistence. Several methods estimate soil seed bank persistence, most of which count seedlings emerging from soil samples. Seasonality, depth distribution and presence (or absence) in vegetation are then used to classify a species' soil seed bank into persistent or transient, often synthesized into a longevity index. This study aims to determine if counts of seedlings from soil samples yield reliable seed bank persistence estimates and if this is correlated to seed production. Seeds of 38 annual weeds taken from arable fields were buried in the field and their viability tested by germination and tetrazolium tests at 6 month intervals for 2.5 years. This direct measure of soil seed survival was compared with indirect estimates from the literature, which use seedling emergence from soil samples to determine seed bank persistence. Published databases were used to explore the generality of the influence of reproductive capacity on seed bank persistence estimates from seedling emergence data. There was no relationship between a species' soil seed survival in the burial experiment and its seed bank persistence estimate from published data using seedling emergence from soil samples. The analysis of complementary data from published databases revealed that while seed bank persistence estimates based on seedling emergence from soil samples are generally correlated with seed production, estimates of seed banks from burial experiments are not. The results can be explained in terms of the seed size-seed number trade-off, which suggests that the higher number of smaller seeds is compensated after germination. Soil seed bank persistence estimates correlated to seed production are therefore not useful for studies on population persistence or community diversity. Confusion of soil seed survival and seed production can be avoided by separate use of soil seed abundance and experimental soil seed survival.
Madenjian, Charles P.; Rediske, Richard R.; O'Keefe, James P.; David, Solomon R.
2014-01-01
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources, including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces, are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
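A generic version of such a Monte Carlo uncertainty map can be sketched as follows. In the real contour method each noisy, re-smoothed displacement surface is propagated through a finite-element stress solve; here a smoothing operator stands in for that solve, so the code conveys only the shape of the procedure, with an invented grid size and noise level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contour_uncertainty_map(displacement, noise_sd, n_mc=200,
                            smooth_sigma=2.0):
    """Monte Carlo uncertainty map: repeatedly add displacement-measurement
    noise, re-smooth, and take the pointwise standard deviation across
    replicates (the FE stress solve is replaced by smoothing here)."""
    rng = np.random.default_rng(0)
    reps = np.array([
        gaussian_filter(displacement
                        + rng.normal(0.0, noise_sd, displacement.shape),
                        smooth_sigma)
        for _ in range(n_mc)
    ])
    return reps.std(axis=0)   # one uncertainty value per measurement point

disp = np.zeros((51, 76))                 # placeholder displacement map (mm)
u_map = contour_uncertainty_map(disp, noise_sd=0.002)
```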
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves the reliability of those methods for setting TMS doses unresolved. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimates appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds equally reliably with MOCS or REPT, depending on their specific investigation goals. We suggest several important factors to consider when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
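A MOCS-style threshold estimate reduces to fitting a psychometric function to the proportion of "phosphene seen" reports at fixed stimulator intensities and reading off the 50% point; REPT and MOBS are adaptive and would be staircase-driven instead. The sketch below uses invented trial counts and a logistic function, one common choice.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(intensity, threshold, spread):
    # probability of reporting a phosphene at a given stimulator intensity
    return 1.0 / (1.0 + np.exp(-(intensity - threshold) / spread))

# hypothetical MOCS data: intensities (% max stimulator output),
# 20 trials per level, and the number of 'seen' reports
levels = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
n_trials = 20
n_seen = np.array([1, 3, 7, 12, 17, 19, 20])

(pt, spread), _ = curve_fit(psychometric, levels, n_seen / n_trials,
                            p0=[55.0, 5.0])
print(f"phosphene threshold (50% point): {pt:.1f}% MSO")
```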
Comparison of batch sorption tests, pilot studies, and modeling for estimating GAC bed life.
Scharf, Roger G; Johnston, Robert W; Semmens, Michael J; Hozalski, Raymond M
2010-02-01
Saint Paul Regional Water Services (SPRWS) in Saint Paul, MN experiences annual taste and odor episodes during the warm summer months. These episodes are attributed primarily to geosmin that is produced by cyanobacteria growing in the chain of lakes used to convey and store the source water pumped from the Mississippi River. Batch experiments, pilot-scale experiments, and model simulations were performed to determine the geosmin removal performance and bed life of a granular activated carbon (GAC) filter-sorber. Using batch adsorption isotherm parameters, the estimated bed life for the GAC filter-sorber ranged from 920 to 1241 days when challenged with a constant concentration of 100 ng/L of geosmin. The estimated bed life obtained using the AdDesignS model and the actual pilot-plant loading history was 594 days. Based on the pilot-scale GAC column data, the actual bed life (>714 days) was much longer than the simulated values because bed life was extended by biological degradation of geosmin. The continuous feeding of high concentrations of geosmin (100-400 ng/L) in the pilot-scale experiments enriched for a robust geosmin-degrading culture that was sustained when the geosmin feed was turned off for 40 days. It is unclear, however, whether a geosmin-degrading culture can be established in a full-scale filter that experiences taste and odor episodes for only 1 or 2 months per year. The results of this research indicate that care must be exercised in the design and interpretation of pilot-scale experiments and model simulations for predicting taste and odor removal in full-scale GAC filter-sorbers. Adsorption and the potential for biological degradation must be considered to estimate GAC bed life for the conditions of intermittent geosmin loading typically experienced by full-scale systems. (c) 2009 Elsevier Ltd. All rights reserved.
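At its simplest, the isotherm-based bed-life estimate referred to above divides an equilibrium adsorption capacity by a mass loading rate; by construction it ignores the biodegradation that extended the pilot columns' life. The sketch below shows that screening calculation with entirely hypothetical quantities.

```python
def gac_bed_life_days(q_e_ng_per_mg, gac_mass_kg, flow_m3_per_day,
                      c0_ng_per_L):
    """Equilibrium-capacity screening estimate of GAC bed life:
    t = q_e * M / (Q * C0), i.e. days until the isotherm capacity at the
    influent concentration is exhausted by the geosmin loading rate."""
    capacity_ng = q_e_ng_per_mg * gac_mass_kg * 1e6           # mg GAC per kg
    loading_ng_per_day = flow_m3_per_day * 1e3 * c0_ng_per_L  # L per m^3
    return capacity_ng / loading_ng_per_day

# hypothetical: 200 ng/mg capacity, 10 t of GAC, 20,000 m^3/day, 100 ng/L
print(gac_bed_life_days(200.0, 10_000.0, 20_000.0, 100.0))    # 1000 days
```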
Respondent-Driven Sampling in a Multi-Site Study of Black and Latino Men Who Have Sex with Men.
Murrill, Christopher S; Bingham, Trista; Lauby, Jennifer; Liu, Kai-Lih; Wheeler, Darrell; Carballo-Diéguez, Alex; Marks, Gary; Millett, Gregorio A
2016-02-01
Respondent-driven sampling (RDS) was used to recruit four samples of Black and Latino men who have sex with men (MSM) in three metropolitan areas to measure HIV prevalence and sexual and drug use behaviors. We compared demographic and behavioral risk characteristics of participants across sites, assessed the extent to which the RDS statistical adjustment procedure provides estimates that differ from the crude results, and summarized our experiences using RDS. From June 2005 to March 2006 a total of 2,235 MSM were recruited and interviewed: 614 Black MSM and 516 Latino MSM in New York City, 540 Black MSM in Philadelphia, and 565 Latino MSM in Los Angeles County. Crude point estimates for demographic characteristics, behavioral risk factors and HIV prevalence were calculated for each of the four samples. RDS Analysis Tool was used to obtain population-based estimates of each sampled population's characteristics. RDS adjusted estimates were similar to the crude estimates for each study sample on demographic characteristics such as age, income, education and employment status. Adjusted estimates of the prevalence of risk behaviors were lower than the crude estimates, and for three of the study samples, the adjusted HIV prevalence estimates were lower than the crude estimates. However, even the adjusted HIV prevalence estimates were higher than what has been previously estimated for these groups of MSM in these cities. Each site faced unique circumstances in implementing RDS. Our experience in using RDS among Black and Latino MSM resulted in diverse recruitment patterns and uncertainties in the estimated HIV prevalence and risk behaviors by study site. Copyright © 2016. Published by Elsevier Inc.
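The study's adjustments were produced with the RDS Analysis Tool; the flavor of RDS weighting can nevertheless be seen in the simpler RDS-II (Volz-Heckathorn) estimator, sketched below with invented data, in which respondents are down-weighted in proportion to their reported network degree because high-degree people are more likely to be recruited.

```python
import numpy as np

def rds2_estimate(trait, degree):
    """RDS-II adjusted prevalence: inverse-degree-weighted mean of a
    binary trait (e.g., a risk behavior or HIV status)."""
    trait = np.asarray(trait, dtype=float)
    w = 1.0 / np.asarray(degree, dtype=float)
    return np.sum(w * trait) / np.sum(w)

# invented sample: cases report larger networks, so the adjusted estimate
# falls below the crude proportion, as seen for several outcomes above
trait = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
degree = np.array([30, 25, 5, 8, 6, 10, 4, 7, 40, 5])
print(rds2_estimate(trait, degree), trait.mean())
```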
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and with a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured with a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method but only a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.
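A minimal sketch of the SDC idea, using a generic pendulum-like limb model rather than the paper's identified knee musculoskeletal dynamics: the nonlinearity is factored into a state-dependent matrix, x' = A(x)x + Bu, and the observer gain is recomputed from a Riccati equation at the current estimate, with no Jacobian linearization. All parameter and noise values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# pendulum-like segment: theta_ddot = -(g/l) sin(theta) - b theta_dot + c u
g_l, b, c = 24.5, 1.5, 10.0              # hypothetical identified parameters
B = np.array([[0.0], [c]])
C = np.array([[1.0, 0.0]])               # measured output: the joint angle
Q = np.diag([1e-3, 1e-1])                # process-noise weight
R = np.array([[1e-2]])                   # measurement-noise weight

def A_of(x):
    th = x[0]
    sinc = np.sin(th) / th if abs(th) > 1e-6 else 1.0   # finite at th = 0
    return np.array([[0.0, 1.0], [-g_l * sinc, -b]])    # SDC factorization

def sdc_observer_step(x_hat, u, y, dt):
    """One Euler step of an SDC observer: re-solve the (dual) filter
    Riccati equation at the current estimate and inject the output error."""
    A = A_of(x_hat)
    P = solve_continuous_are(A.T, C.T, Q, R)
    L = P @ C.T @ np.linalg.inv(R)                      # state-dependent gain
    x_dot = A @ x_hat + (B * u).ravel() + (L @ (y - C @ x_hat)).ravel()
    return x_hat + dt * x_dot

x_hat = np.array([0.1, 0.0])
x_hat = sdc_observer_step(x_hat, u=0.2, y=np.array([0.15]), dt=0.001)
```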
Estimation and correction of visibility bias in aerial surveys of wintering ducks
Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.
2008-01-01
Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1–100 decoys), and the interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36–42%, and associated standard errors increased 38–55%, depending on the species or group estimated. We deemed our method successful for integrating correction of visibility bias into an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
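Operationally, the correction amounts to Horvitz-Thompson-style weighting: each observed group count is divided by its modeled detection probability and by the mean counting bias. The sketch below shows that arithmetic with placeholder logistic coefficients, not the fitted values from the decoy experiment.

```python
import numpy as np

def adjusted_abundance(counts, cover_forested, beta, count_bias=0.78):
    """Visibility-bias-corrected abundance: divide each group count by
    (i) its detection probability from a logistic model in cover type and
    group size and (ii) the mean counting bias (observers recorded about
    78% of the ducks present)."""
    counts = np.asarray(counts, dtype=float)
    X = np.column_stack([np.ones_like(counts),
                         np.asarray(cover_forested, dtype=float),
                         np.log(counts)])
    p_detect = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(counts / (p_detect * count_bias))

# placeholder coefficients: intercept, forested-cover effect, log group size
beta = np.array([1.0, -1.2, 0.8])
print(adjusted_abundance(counts=[12, 40, 3], cover_forested=[1, 0, 1],
                         beta=beta))
```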
Jing, Nan; Li, Chuang; Chong, Yaqin
2017-01-20
An estimation method for indirectly observable parameters of a typical low dynamic vehicle (LDV) is presented. The estimation method utilizes apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high altitude balloon (HAB). In order to validate the accuracy of the estimated parameters obtained from an unscented Kalman filter, two sets of experiments were carried out to obtain nonresolved photometric and astrometric data. In the experiments, a HAB launch was planned; models of the HAB dynamics and kinematics and observation models were built to use as the time-update and measurement-update functions, respectively. When the HAB was launched, a ground-based optoelectronic detector was used to capture the object images, which were processed using aperture photometry to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to clearly indicate the parameter differences, and two sets of errors between the actual and estimated parameters are given to show how the estimated position and velocity differ with respect to the observation time. The similar distribution curves obtained in the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) of a typical LDV. This technique can be applied to small and dim space objects in the future.
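A stripped-down version of such an unscented-Kalman-filter setup might look like the following, assuming the third-party filterpy package; for brevity the measurement model uses only azimuth and elevation, omitting the apparent-magnitude channel the paper also exploits, and every number is a placeholder.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0   # seconds between angle measurements

def fx(x, dt):
    # time update: near-constant-velocity balloon, state [x y z vx vy vz]
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt
    return F @ x

def hx(x):
    # measurement update: azimuth/elevation of the line of sight from a
    # ground station at the origin
    az = np.arctan2(x[1], x[0])
    el = np.arctan2(x[2], np.hypot(x[0], x[1]))
    return np.array([az, el])

points = MerweScaledSigmaPoints(n=6, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=2, dt=dt, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([5e3, 5e3, 1e4, 2.0, 1.0, 3.0])   # rough initial state (m)
ukf.P *= 1e6
ukf.R = np.diag([1e-6, 1e-6])                      # angle noise (rad^2)
ukf.Q = np.eye(6) * 1e-3

for z in [np.array([0.786, 0.956]), np.array([0.787, 0.955])]:  # fake data
    ukf.predict()
    ukf.update(z)
```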
Two Experiments for Estimating Free Convection and Radiation Heat Transfer Coefficients
ERIC Educational Resources Information Center
Economides, Michael J.; Maloney, J. O.
1978-01-01
This article describes two simple undergraduate heat transfer experiments which may reinforce a student's understanding of free convection and radiation. Apparatus, experimental procedure, typical results, and discussion are included. (Author/BB)
NASA Astrophysics Data System (ADS)
Souza, D. M.; Costa, I. A.; Nóbrega, R. A.
2017-10-01
This document presents a detailed study of the performance of a set of digital filters whose implementations are based on best linear unbiased estimator theory, interpreted as a constrained optimization problem that can be relaxed depending on the input signal characteristics. This approach has been employed by a number of recent particle physics experiments for measuring the energy of particle events interacting with their detectors. The considered filters have been designed to measure the peak amplitude of signals produced by the detectors based on the digitized version of such signals. This study provides a clear understanding of the characteristics of those filters in the context of particle physics and, additionally, proposes a phase-related constraint based on the second derivative of the Taylor expansion in order to make the estimator less sensitive to phase variation (the phase between the analog signal shaping and its sampled version), which is stronger in asynchronous experiments. The asynchronous detector developed by the ν-Angra Collaboration is used as the basis for this work. Nevertheless, the proposed analysis goes beyond it, considering a wide range of conditions related to signal parameters such as pedestal, phase, sampling rate, amplitude resolution, noise and pile-up, and is therefore useful for other asynchronous and even synchronous experiments.
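The unconstrained core of such filters is the classic BLUE (matched-filter) amplitude estimator sketched below; the paper's phase-insensitivity constraint, built from the second derivative of the Taylor expansion of the pulse shape, would enter the same Lagrangian as an additional linear constraint, which is only noted in a comment here. The pulse samples and noise model are illustrative.

```python
import numpy as np

def blue_amplitude_weights(s, C):
    """BLUE of a pulse amplitude: for normalized shape samples s and noise
    covariance C, w = C^{-1} s / (s^T C^{-1} s) minimizes the variance
    subject to unbiasedness (w^T s = 1). Extra constraints, e.g. pedestal
    rejection (w^T 1 = 0) or phase insensitivity via shape derivatives,
    add rows to the same constrained optimization."""
    Ci_s = np.linalg.solve(C, s)
    return Ci_s / (s @ Ci_s)

s = np.array([0.0, 0.2, 0.7, 1.0, 0.7, 0.3, 0.1])   # unit-amplitude shape
w = blue_amplitude_weights(s, np.eye(7))            # white-noise case
y = 52.0 * s + np.random.default_rng(2).normal(0, 1, 7)   # digitized pulse
print(w @ y)    # estimate of the true amplitude (52.0)
```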
Accident rates for novice glider pilots vs. pilots with experience.
Jarvis, Steve; Harris, Don
2007-12-01
It is a popular notion in gliding that newly soloed pilots have a low accident rate. The intention of this study was to review the support for such a hypothesis from literature and to explore it using UK accident totals and measures of flying exposure. Log sheets from UK gliding clubs were used to estimate flying exposure for inexperienced glider pilots. This was used along with accident data and annual flight statistics for the period 2004-2006 in order to estimate accident rates that could be compared between the pilot groups. The UK accident rate for glider pilots from 2004-2006 was 1 accident in every 3534 launches and 1590 flying hours. The lowest estimated rate for pilots with up to 1 h of experience was 1 accident every 976 launches and 149 h flown. For pilots with up to 10 h of experience the figures were 1 accident in 1274 launches and 503 h. From 2004-2006 UK glider pilots with 10 h or less experience in command had twice the number of accidents per launch and three times as many accidents per hour flown than average for UK glider pilots. Pilots with only 1 h of experience or less were involved in at least 10 times the number of accidents per hour flown than the UK average and had more than 3.5 times the number of accidents per launch.
A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media
Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...
ERIC Educational Resources Information Center
Larsen, Erik; Eriksen, J.
1975-01-01
Describes an experiment wherein the student can demonstrate the existence of all the thiocyanato chromium(III) complexes, estimate the stepwise formation constants, demonstrate the robustness of chromium(III) complexes, and show the principles of paper electrophoresis. (GS)
Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc
2016-09-08
Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Perceptual constancy in auditory perception of distance to railway tracks.
De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L
2013-07-01
Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative because prior information/experience of the sound source-its source power, its spectrum and the typical speed at which it moves-is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners-viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
A recursive Bayesian updating model of haptic stiffness perception.
Wu, Bing; Klatzky, Roberta L
2018-06-01
Stiffness of many materials follows Hooke's Law, but the mechanism underlying the haptic perception of stiffness is not as simple as it seems in the physical definition. The present experiments support a model by which stiffness perception is adaptively updated during dynamic interaction. Participants actively explored virtual springs and estimated their stiffness relative to a reference. The stimuli were simulations of linear springs or nonlinear springs created by modulating a linear counterpart with low-amplitude, half-cycle (Experiment 1) or full-cycle (Experiment 2) sinusoidal force. Experiment 1 showed that subjective stiffness increased (decreased) as a linear spring was positively (negatively) modulated by a half-sinewave force. In Experiment 2, an opposite pattern was observed for full-sinewave modulations. Modeling showed that the results were best described by an adaptive process that sequentially and recursively updated an estimate of stiffness using the force and displacement information sampled over trajectory and time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
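The recursive updating model has the flavor of a scalar Kalman filter on the stiffness coefficient of Hooke's law; the sketch below, with invented noise levels and a simulated linear spring, shows one plausible form of such an update rather than the authors' fitted model.

```python
import numpy as np

def update_stiffness(mu, var, x, f, noise_var=0.04):
    """One recursive Bayesian update of a stiffness belief N(mu, var),
    given a probe displacement x and a noisy force sample f = k x + e."""
    gain = var * x / (x * x * var + noise_var)
    mu_new = mu + gain * (f - mu * x)
    var_new = (1.0 - gain * x) * var
    return mu_new, var_new

rng = np.random.default_rng(3)
mu, var = 1.0, 1.0                      # vague initial belief about k
for _ in range(50):                     # one exploratory interaction
    x = rng.uniform(1.0, 5.0)           # displacement sample (mm)
    f = 0.8 * x + rng.normal(0.0, 0.2)  # force sample (N), true k = 0.8
    mu, var = update_stiffness(mu, var, x, f)
print(mu)   # posterior mean converges toward 0.8 N/mm
```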
Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C
2018-06-01
Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty, and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. To apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used the multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about what is the "true" value for prevalence of using illegal psychotropic drugs in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce 'under-estimates' or at best, highly conservative, lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
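To illustrate how an RRT design converts an observed "yes" rate into a prevalence estimate, the sketch below inverts the mixture equation for an unrelated-question variant, one plausible RRT design; the survey's exact pairing scheme may differ, and the numbers are invented rather than taken from the Georgian data.

```python
import numpy as np

def rrt_unrelated_question(n_yes, n, p_sensitive=0.7, pi_innocuous=0.5):
    """Each respondent privately answers the sensitive item with probability
    p and an innocuous item of known prevalence otherwise, so
        P(yes) = p * pi + (1 - p) * pi_innocuous,
    which is inverted for pi; the standard error follows from the binomial
    'yes' rate."""
    lam = n_yes / n
    pi_hat = (lam - (1.0 - p_sensitive) * pi_innocuous) / p_sensitive
    se = np.sqrt(lam * (1.0 - lam) / n) / p_sensitive
    return pi_hat, (pi_hat - 1.96 * se, pi_hat + 1.96 * se)

# invented counts: 1,550 'yes' responses among 4,000 respondents
print(rrt_unrelated_question(n_yes=1550, n=4000))
```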
Evaluation of spatial, radiometric and spectral Thematic Mapper performance for coastal studies
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator)
1983-01-01
A series of experiments was initiated to determine the feasibility of using Thematic Mapper spectral data to estimate wetlands biomass. The experiments were conducted using hand-held radiometers simulating Thematic Mapper wavebands 3, 4 and 5. Spectral radiance data were collected from the ground and from a low altitude aircraft in an attempt to gain some insight into the potential utility of actual Thematic Mapper data for biomass estimation in wetland plant communities. In addition, radiative transfer models describing the volume reflectance of water columns containing submerged aquatic vegetation were refined.
Classroom Experiments: Teaching Specific Topics or Promoting the Economic Way of Thinking?
ERIC Educational Resources Information Center
Emerson, Tisha L. N.; English, Linda K.
2016-01-01
The authors' data contain inter- and intra-class variations in experiments to which students in a principles of microeconomics course were exposed. These variations allowed the estimation of the effect on student achievement from the experimental treatment generally, as well as effects associated with participation in specific experiments. The…
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.
2013-01-01
Background: Cluster-randomized experiments that assign intact groups such as schools or school districts to treatment conditions are increasingly common in educational research. Such experiments are inherently multilevel designs whose sensitivity (statistical power and precision of estimates) depends on the variance decomposition across levels.…
NASA Technical Reports Server (NTRS)
Shih, Hsin-Yi; Tien, James S.; Ferkul, Paul (Technical Monitor)
2001-01-01
The recently developed numerical model of concurrent-flow flame spread over thin solids has been used as a simulation tool to support the design of a space experiment. The steady, two- and three-dimensional compressible Navier-Stokes equations with chemical reactions are solved, coupled with a multi-dimensional radiative heat transfer solver. The model is capable of answering a number of questions regarding the experiment concept and the hardware design. In this paper, the capabilities of the numerical model are demonstrated by providing guidance on several experimental design issues. The test matrix and operating conditions of the experiment are estimated from the modeling results. Three-dimensional calculations are made to simulate the flame-spreading experiment with a realistic hardware configuration. The computed detailed flame structures provide insight for data collection. In addition, the heating load and the requirements for product exhaust cleanup in the flow tunnel are estimated with the model. We anticipate that this simulation tool will enable a more efficient and successful space experiment.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
Identification of the contribution of the ankle and hip joints to multi-segmental balance control
2013-01-01
Background: Human stance involves multiple segments, including the legs and trunk, and requires coordinated actions of both. A novel method was developed that reliably estimates the contribution of the left and right leg (i.e., the ankle and hip joints) to the balance control of individual subjects. Methods: The method was evaluated using simulations of a double-inverted-pendulum model, and its applicability was demonstrated in an experiment with seven healthy participants and one participant with Parkinson's disease. Model simulations indicated that two perturbations are required to reliably estimate the dynamics of a double-inverted-pendulum balance control system. In the experiment, two multisine perturbation signals were applied simultaneously. The dynamic behaviour of the participants' balance control system was estimated by Frequency Response Functions (FRFs), which relate ankle and hip joint angles to joint torques, using a multivariate closed-loop system identification technique. Results: In the model simulations, the FRFs were reliably estimated, also in the presence of realistic levels of noise. In the experiment, the participants responded consistently to the perturbations, indicated by low noise-to-signal ratios of the ankle angle (0.24), hip angle (0.28), ankle torque (0.07), and hip torque (0.33). The developed method could detect that the Parkinson patient controlled his balance asymmetrically; that is, the right ankle and hip joints produced more corrective torque. Conclusion: The method allows for a reliable estimate of the multisegmental feedback mechanism that stabilizes stance, for individual participants and for separate legs. PMID:23433148
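As a minimal illustration of the FRF idea above (the paper uses a multivariate closed-loop identification with two multisine perturbations; this sketch shows only the basic indirect, single-input building block, with toy signals and assumed parameter values):

```python
import numpy as np
from scipy import signal

fs = 100.0                                   # assumed sampling rate [Hz]
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(0)

# Known external multisine perturbation, plus toy closed-loop signals.
d = sum(np.sin(2 * np.pi * f0 * t) for f0 in (0.3, 0.7, 1.3, 2.1))
torque = d + 0.2 * rng.standard_normal(t.size)        # stand-in joint torque
angle = signal.lfilter([0.05], [1, -0.95], torque)    # stand-in joint angle

# Indirect closed-loop estimate: cross-spectra are taken against the external
# perturbation, so the feedback loop does not bias the ratio.
f, S_da = signal.csd(d, angle, fs=fs, nperseg=4096)
_, S_dt = signal.csd(d, torque, fs=fs, nperseg=4096)
frf_torque_to_angle = S_da / S_dt                     # FRF from torque to angle
```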
Error analysis of finite element method for Poisson–Nernst–Planck equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
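For readability, the rates described above can be stated under the standard convention for elements of polynomial degree k (a sketch; the constants and regularity hypotheses are those of the paper and are not spelled out in the abstract):

```latex
% Optimal-order bounds in the H^1-based norms for degree-k elements:
\|u-u_h\|_{L^\infty(0,T;H^1)} + \|u-u_h\|_{L^2(0,T;H^1)} \le C\,h^{k},
% and in the L^2 norm, optimal order for quadratic and higher elements:
\|u-u_h\|_{L^\infty(0,T;L^2)} \le C\,h^{k+1}, \qquad k \ge 2,
% with a reduced (suboptimal) power of h in L^\infty(0,T;L^2) when k = 1.
```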
NASA Astrophysics Data System (ADS)
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, reducing the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate a spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated both with a diagonal R matrix computed using TC and with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and the unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
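For illustration, the classical triple-collocation error-variance estimator that underlies such an approach, as a sketch: zero-mean, mutually uncorrelated errors and prior rescaling to a common reference are assumed, and building the full spatially correlated R from these quantities follows the paper's TC_Cov procedure, which is not reproduced here.

```python
import numpy as np

def triple_collocation_error_var(x, y, z):
    """Error variances of three collocated estimates of the same signal,
    assuming zero-mean, mutually uncorrelated errors (classical TC)."""
    x, y, z = (np.asarray(a) - np.mean(a) for a in (x, y, z))
    var_x = np.mean((x - y) * (x - z))   # E[(x-y)(x-z)] isolates x's error variance
    var_y = np.mean((y - x) * (y - z))
    var_z = np.mean((z - x) * (z - y))
    return var_x, var_y, var_z
```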
Shear Wave Velocity Imaging Using Transient Electrode Perturbation: Phantom and ex vivo Validation
Varghese, Tomy; Madsen, Ernest L.
2011-01-01
This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures. PMID:21075719
Display gamma is an important factor in Web image viewing
NASA Astrophysics Data System (ADS)
Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon
2001-06-01
We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether those gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used were found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with four different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting, and disliked images rendered with a gamma value not matching their display's. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
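A sketch of the classic visual-matching trick that gamma-estimator applets typically use; this is an assumption about the applet's internals, which the abstract does not describe:

```python
import math

# A half-on/half-off dithered patch has physical luminance 0.5 of maximum.
# The user adjusts a solid gray level v until it visually matches the patch;
# since the display maps v to luminance (v/255)**gamma, the match implies
#   (v/255)**gamma = 0.5  =>  gamma = log(0.5) / log(v/255).
def gamma_from_match(v_match: float) -> float:
    return math.log(0.5) / math.log(v_match / 255.0)

print(gamma_from_match(186))  # ~2.2, a typical CRT-era display gamma
```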
Comparison of estimates of snow input to a small alpine watershed
R. A. Sommerfeld; R. C. Musselman; G. L. Wooldridge
1990-01-01
We have used five methods to estimate the snow water equivalent input to the Glacier Lakes Ecosystem Experiments Site (GLEES) in south-central Wyoming during the winter 1987-1988 and to obtain an estimate of the errors. The methods are: (1) the Martinec and Rango degree-day method; (2) Wooldridge et al. method of determining the average yearly snowfall from tree...
ERIC Educational Resources Information Center
May, Henry
2014-01-01
Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…
An online calculator for marine phytoplankton iron culturing experiments.
Rivers, Adam R; Rose, Andrew L; Webb, Eric A
2013-10-01
Laboratory experiments with iron offer important insight into the physiology of marine phytoplankton and the biogeochemical cycles they influence. These experiments often rely on chelators to buffer the concentration of available iron, but the buffer can fail when cell density increases, causing the concentration of that iron to drop rapidly. To more easily determine the point when the iron concentration falls, we developed an online calculator to estimate the maximum phytoplankton density that a growth medium can support. The results of the calculator were compared to the numerical simulations of a Fe-limited culture of the diatom Thalassiosira weissflogii (Grunow) Fryxell and Hasle. Modeling reveals that the assumptions behind thermodynamic estimates of unchelated Fe concentration can fail before easily perceptible changes in growth rate, potentially causing physiological changes that could alter the conclusions of culture experiments. The calculator is available at http://www.marsci.uga.edu/fidoplankter. © 2013 Phycological Society of America.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
Overcoming default categorical bias in spatial memory.
Sampaio, Cristina; Wang, Ranxiao Frances
2010-12-01
In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.
Abstract knowledge versus direct experience in processing of binomial expressions
Morgan, Emily; Levy, Roger
2016-01-01
We ask whether word order preferences for binomial expressions of the form A and B (e.g., bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in question. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. PMID:27776281
Metamodels for Ozone: Comparison of Three Estimation Techniques
A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techn...
Evaluation of Fuel Oxygenate Degradation in the Vadose Zone
2005-03-01
Groundwater contamination by petroleum products poses a potential human health...this experiment. The column porosity was estimated from work conducted by a contractor, Jason Lach. An estimate of the column soil porosity
A balancing act: physical balance, through arousal, influences size perception.
Geuss, Michael N; Stefanucci, Jeanine K; de Benedictis-Kessner, Justin; Stevens, Nicholas R
2010-10-01
Previous research has demonstrated that manipulating vision influences balance. Here, we ask whether manipulating balance can influence vision, and how: specifically, the perception of width. In Experiment 1, participants estimated the width of beams while balanced and unbalanced. When unbalanced, participants judged the widths to be smaller. One possible explanation is that unbalanced participants did not view the stimulus as long as when balanced because they were focused on remaining balanced. In Experiment 2, we tested this notion by limiting viewing time. Experiment 2 replicated the findings of Experiment 1, but viewing time had no effect on width judgments. In Experiment 3, participants' level of arousal was manipulated, because the balancing task likely produced arousal. While jogging, participants judged the beams to be smaller. In Experiment 4, participants completed another arousing task (counting backward by sevens) that did not involve movement. Again, participants judged the beams to be smaller when aroused. Experiment 5A raised participants' level of arousal before they estimated the board widths (to control for potential dual-task effects) and showed that heightened arousal still influenced perceived width. Collectively, heightened levels of arousal, caused by multiple manipulations (including balance), influenced perceived width.
Gonçalves, Marcio A D; Tokach, Mike D; Dritz, Steve S; Bello, Nora M; Touchette, Kevin J; Goodband, Robert D; DeRouchey, Joel M; Woodworth, Jason C
2018-03-06
Two experiments were conducted to estimate the standardized ileal digestible valine:lysine (SID Val:Lys) dose-response effects in 25- to 45-kg pigs under commercial conditions. In experiment 1, a total of 1,134 gilts (PIC 337 × 1050), initially 31.2 kg ± 2.0 kg body weight (BW; mean ± SD), were used in a 19-d growth trial with 27 pigs per pen and seven pens per treatment. In experiment 2, a total of 2,100 gilts (PIC 327 × 1050), initially 25.4 ± 1.9 kg BW, were used in a 22-d growth trial with 25 pigs per pen and 12 pens per treatment. Treatments were blocked by initial BW in a randomized complete block design. In experiment 1, there were a total of six dietary treatments with SID Val at 59.0, 62.5, 65.9, 69.6, 73.0, and 75.5% of Lys, and in experiment 2 there were a total of seven dietary treatments with SID Val at 57.0, 60.6, 63.9, 67.5, 71.1, 74.4, and 78.0% of Lys. Experimental diets were formulated to ensure that Lys was the second limiting amino acid throughout the experiments. Initially, linear mixed models were fitted to data from each experiment. Then, data from the two experiments were combined to estimate dose responses using a broken-line linear ascending (BLL) model, broken-line quadratic ascending (BLQ) model, or quadratic polynomial (QP). Model fit was compared using the Bayesian information criterion (BIC). In experiment 1, ADG increased linearly (P = 0.009) with increasing SID Val:Lys, with no apparent significant impact on G:F. In experiment 2, ADG and ADFI increased in a quadratic manner (P < 0.002) with increasing SID Val:Lys, whereas G:F increased linearly (P < 0.001). Overall, the best-fitting model for ADG was a QP, whereby the maximum mean ADG was estimated at a 73.0% (95% CI: [69.5, >78.0%]) SID Val:Lys. For G:F, the overall best-fitting model was a QP with maximum estimated mean G:F at 69.0% (95% CI: [64.0, >78.0]) SID Val:Lys ratio. However, 99% of the maximum mean performance for ADG and G:F was achieved at 68% and 63% SID Val:Lys ratio, respectively. Therefore, the SID Val:Lys requirement ranged from 73.0% for maximum ADG to 63.2% SID Val:Lys to achieve 99% of maximum G:F in 25- to 45-kg BW pigs.
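A sketch of the quadratic-polynomial (QP) fit described above, with invented response values (the study's actual analysis used linear mixed models and also broken-line fits): fit the response against the SID Val:Lys ratio, take the vertex as the optimum, and solve for the ratio giving 99% of the maximum.

```python
import numpy as np

ratio = np.array([57.0, 60.6, 63.9, 67.5, 71.1, 74.4, 78.0])  # SID Val:Lys, %
adg = np.array([0.74, 0.77, 0.79, 0.81, 0.82, 0.82, 0.81])    # kg/d, illustrative

c2, c1, c0 = np.polyfit(ratio, adg, deg=2)   # adg ~ c2*x**2 + c1*x + c0
x_opt = -c1 / (2 * c2)                       # vertex of the parabola
adg_max = np.polyval([c2, c1, c0], x_opt)

# Lowest ratio achieving 99% of the maximum response:
x_99 = np.min(np.roots([c2, c1, c0 - 0.99 * adg_max]))
print(f"optimum {x_opt:.1f}%, 99% of max at {x_99:.1f}%")
```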
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection-dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
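As flavor for the forward/inverse workflow that CXTFIT implements as spreadsheet functions, a Python sketch of the one-dimensional equilibrium convection-dispersion equation with a step input (Ogata-Banks form), fitted by least squares; parameter values are invented, and retardation, degradation, and the nonequilibrium variants are omitted:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def cde_step(t, v, D, x=30.0):
    """Relative concentration at position x for a continuous step input."""
    t = np.maximum(t, 1e-9)                  # guard against t = 0
    a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * (a + b)

# Inverse step: estimate v and D from noisy breakthrough data.
t_obs = np.linspace(1, 120, 40)
c_obs = cde_step(t_obs, v=0.5, D=1.2) \
        + 0.01 * np.random.default_rng(1).standard_normal(t_obs.size)
(v_hat, D_hat), cov = curve_fit(cde_step, t_obs, c_obs, p0=[0.3, 1.0])
print(v_hat, D_hat, np.sqrt(np.diag(cov)))   # estimates and standard errors
```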
Estimating At-Sea Mortality of Marine Turtles from Stranding Frequencies and Drifter Experiments
Koch, Volker; Peckham, Hoyt; Mancini, Agnese; Eguchi, Tomoharu
2013-01-01
Strandings of marine megafauna can provide valuable information on cause of death at sea. However, as stranding probabilities are usually very low and highly variable in space and time, interpreting the results can be challenging. We evaluated the magnitude and distribution of at-sea mortality of marine turtles along the Pacific coast of Baja California Sur, México during 2010–11, using a combination of counts of stranded animals and drifter experiments. A total of 594 carcasses were found during the study period, with loggerhead (62%) and green turtles (31%) being the most common species. 87% of the strandings occurred in the southern Gulf of Ulloa, a known hotspot of loggerhead distribution in the Eastern Pacific. While only 1.8% of the deaths could be definitively attributed to bycatch (net marks, hooks), seasonal variation in stranding frequencies closely corresponded to the main fishing seasons. Estimated stranding probabilities from drifter experiments varied among sites and trials (0.05–0.8), implying that only a fraction of dead sea turtles can be observed at beaches. Total mortality estimates for 15-day periods around the drifter trials were highest for PSL, a beach in the southern Gulf of Ulloa, ranging from 11 sea turtles in October 2011 to 107 in August 2010. Loggerhead turtles were the most numerous, followed by green and olive ridley turtles. Our study showed that drifter trials combined with beach monitoring can provide estimates of death at sea to measure the impact of small-scale fisheries that are notoriously difficult to monitor for bycatch. We also provide recommendations to improve the precision of the mortality estimates for future studies and highlight the importance of estimating the impacts of small-scale fisheries on marine megafauna. PMID:23483880
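The scaling estimator implied above, in a minimal sketch with invented counts: beach counts are divided by the stranding probability measured in drifter releases, and a binomial confidence interval on that probability propagates to the mortality estimate.

```python
from scipy import stats

drifters_released, drifters_stranded = 20, 8
strandings_observed = 43                        # carcasses in the time window

p_hat = drifters_stranded / drifters_released   # stranding probability (0.4)
n_hat = strandings_observed / p_hat             # estimated deaths at sea (~108)

ci = stats.binomtest(drifters_stranded, drifters_released).proportion_ci()
print(n_hat, strandings_observed / ci.high, strandings_observed / ci.low)
```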
Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N
2011-01-01
To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects. Two Web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry (consistent with ambiguity aversion) and diminishing the influence of comparative risk information on risk perceptions. These responses are influenced by representational format and personality type, and the influence of format appears to be modifiable and content dependent.
Duncan, Greg J.; Morris, Pamela A.; Rodrigues, Chris
2011-01-01
Social scientists do not agree on the size and nature of the causal impacts of parental income on children's achievement. We revisit this issue using a set of welfare and antipoverty experiments conducted in the 1990s. We utilize an instrumental variables strategy to leverage the variation in income and achievement that arises from random assignment to the treatment group to estimate the causal effect of income on child achievement. Our estimates suggest that a $1,000 increase in annual income increases young children's achievement by 5%–6% of a standard deviation. As such, our results suggest that family income has a policy-relevant, positive impact on the eventual school achievement of preschool children. PMID:21688900
Seeing mountains in mole hills: geographical-slant perception
NASA Technical Reports Server (NTRS)
Proffitt, D. R.; Creem, S. H.; Zosh, W. D.; Kaiser, M. K. (Principal Investigator)
2001-01-01
When observers face directly toward the incline of a hill, their awareness of the slant of the hill is greatly overestimated, but motoric estimates are much more accurate. The present study examined whether similar results would be found when observers were allowed to view the side of a hill. Observers viewed the cross-sections of hills in real (Experiment 1) and virtual (Experiment 2) environments and estimated the inclines with verbal estimates, by adjusting the cross-section of a disk, and by adjusting a board with their unseen hand to match the inclines. We found that the results for cross-section viewing replicated those found when observers directly face the incline. Even though the angles of hills are directly evident when viewed from the side, slant perceptions are still grossly overestimated.
NASA Astrophysics Data System (ADS)
Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad
2015-10-01
In this paper, vehicle longitudinal velocity during braking is estimated by measuring the wheel speeds. A new algorithm based on an unknown-input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. The effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
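For illustration, a minimal Kalman-filter sketch that fuses wheel-speed readings into a velocity estimate; the paper's unknown-input formulation removes the braking torque from the model, whereas this sketch simply absorbs it into the process noise, and all parameter values are assumptions:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [velocity, acceleration]
H = np.array([[1.0, 0.0]])              # wheel speed observes velocity (no-slip assumption)
Q = np.diag([1e-4, 5e-2])               # process noise covers the unknown braking torque
R = np.array([[0.05]])                  # wheel-speed measurement noise

x = np.array([[20.0], [0.0]])           # initial state: 20 m/s, no acceleration
P = np.eye(2)

def kf_step(x, P, z):
    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in (19.9, 19.6, 19.2, 18.9):       # decelerating wheel-speed readings
    x, P = kf_step(x, P, np.array([[z]]))
print(float(x[0, 0]))                    # current longitudinal-velocity estimate
```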
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB and processing times of 0.139 and 0.124 sec/frame, respectively.
Wu, Zhihong; Lu, Ke; Zhu, Yuan
2015-01-01
The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends highly on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.
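A minimal sketch of flux-linkage-based torque estimation with a first-order low-pass filter standing in for the pure integrator, in the spirit of the modified-LPF idea above; machine parameters are illustrative, and the paper's compensation of inverter non-idealities (dead time, device voltage drops) is not shown:

```python
import numpy as np

def estimate_torque(v_ab, i_ab, Rs=0.05, pole_pairs=4, wc=5.0, dt=1e-4):
    """Torque from stationary-frame voltages/currents, arrays of shape (N, 2).

    The stator flux linkage is obtained by low-pass filtering the back-EMF
    (d(psi)/dt = emf - wc*psi) instead of open integration, which would drift.
    """
    psi = np.zeros(2)
    torque = np.empty(len(v_ab))
    for n, (v, i) in enumerate(zip(v_ab, i_ab)):
        emf = v - Rs * i
        psi = psi + dt * (emf - wc * psi)
        # Electromagnetic torque: 1.5 * pole pairs * (flux x current)
        torque[n] = 1.5 * pole_pairs * (psi[0] * i[1] - psi[1] * i[0])
    return torque
```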
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
Ming, Y; Peiwen, Q
2001-03-01
Understanding ultrasonic motor performance as a function of input parameters, such as voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of a piezoelectric rotary traveling-wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling-wave amplitude on the stator surface. With a contact model of a distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling-wave ultrasonic motor is constructed, from which the steady rotation speed and stall torque are deduced. Using MATLAB and an iterative algorithm, we estimate rotation speed and stall torque as functions of the input parameters. The corresponding experiments are completed with an optoelectronic tachometer and standard weights. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.
The OSIRIS-REx Radio Science Experiment at Bennu
NASA Astrophysics Data System (ADS)
McMahon, J. W.; Scheeres, D. J.; Hesar, S. G.; Farnocchia, D.; Chesley, S.; Lauretta, D.
2018-02-01
The OSIRIS-REx mission will conduct a Radio Science investigation of the asteroid Bennu with a primary goal of estimating the mass and gravity field of the asteroid. The spacecraft will conduct proximity operations around Bennu for over 1 year, during which time radiometric tracking data, optical landmark tracking images, and altimetry data will be obtained that can be used to make these estimates. Most significantly, the main Radio Science experiment will be a 9-day arc of quiescent operations in a 1-km nominally circular terminator orbit. The pristine data from this arc will allow the Radio Science team to determine the significant components of the gravity field up to the fourth spherical harmonic degree. The Radio Science team will also be responsible for estimating the surface accelerations, surface slopes, constraints on the internal density distribution of Bennu, the rotational state of Bennu to confirm YORP estimates, and the ephemeris of Bennu that incorporates a detailed model of the Yarkovsky effect.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to simultaneously estimate the calibration parameters of the vision system and the full 6-DOF motion of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring requirements. Results of the performance evaluation, obtained by numerical simulation and in real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced-scale structure model to impose controlled motions. In both cases, the results obtained with a minimal setup comprising only two cameras and four non-coplanar tracking points showed high accuracy for online camera calibration and full structure motion estimation.
Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software.
Lancsar, Emily; Fiebig, Denzil G; Hole, Arne Risa
2017-07-01
We provide a user guide on the analysis of data (including best-worst and best-best data) generated from discrete-choice experiments (DCEs), comprising a theoretical review of the main choice models followed by practical advice on estimation and post-estimation. We also provide a review of standard software. In providing this guide, we endeavour to not only provide guidance on choice modelling but to do so in a way that provides a 'way in' for researchers to the practicalities of data analysis. We argue that choice of modelling approach depends on the research questions, study design and constraints in terms of quality/quantity of data and that decisions made in relation to analysis of choice data are often interdependent rather than sequential. Given the core theory and estimation of choice models is common across settings, we expect the theoretical and practical content of this paper to be useful to researchers not only within but also beyond health economics.
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small-sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π, and we show that these results can be used to obtain unbiased estimators, large-sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
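Under normality with a common variance, π = Φ(δ), so a simple plug-in estimate uses the standardized mean difference d; a minimal sketch with illustrative numbers (the paper's exact distribution, small-sample bias analysis, and unbiased estimators are not reproduced here):

```python
from scipy.stats import norm

d = 0.6                       # standardized mean difference (illustrative)
pi_hat = norm.cdf(d)          # ~0.73 of treatment scores exceed the control mean

se_d = 0.15                   # illustrative standard error of d
se_pi = norm.pdf(d) * se_d    # delta-method standard error of pi_hat
print(pi_hat, (pi_hat - 1.96 * se_pi, pi_hat + 1.96 * se_pi))
```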
Large capacity temporary visual memory.
Endress, Ansgar D; Potter, Mary C
2014-04-01
Visual working memory (WM) capacity is thought to be limited to 3 or 4 items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor--proactive interference--is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation of 5-21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited, as in WM experiments, or has the much larger capacity found in the present experiments.
NASA Astrophysics Data System (ADS)
Kitahara, Yu; Yamamoto, Yuhji; Ohno, Masao; Kuwahara, Yoshihiro; Kameda, Shuichi; Hatakeyama, Tadahiro
2018-05-01
Paleomagnetic information reconstructed from archeological materials can be utilized to estimate the archeological age of excavated relics, in addition to revealing geomagnetic secular variation and core dynamics. The direction and intensity of the Earth's magnetic field (archeodirection and archeointensity) can be ascertained using different methods, many of which have been proposed over the past decade. Among the new experimental techniques for archeointensity estimates is the Tsunakawa-Shaw method. This study demonstrates the validity of the Tsunakawa-Shaw method for reconstructing archeointensity from samples of baked clay from archeological relics. The validity of the approach was tested by comparison with the IZZI-Thellier method. The intensity values obtained coincided at the standard deviation (1σ) level. A total of 8 specimens for the Tsunakawa-Shaw method and 16 specimens for the IZZI-Thellier method, from 8 baked clay blocks collected from the surface of the kiln, were used in these experiments. Among them, 8 specimens (for the Tsunakawa-Shaw method) and 3 specimens (for the IZZI-Thellier method) passed a set of strict selection criteria used in the final evaluation of validity. Additionally, we performed rock magnetic experiments, mineral analysis, and paleodirection measurement to evaluate the suitability of the baked clay samples for paleointensity experiments, and confirmed that the sample properties were ideal for performing paleointensity experiments. It is notable that the newly estimated archeomagnetic intensity values are lower than those in previous studies that used other paleointensity methods for the tenth century in Japan.
PIV study of flow through porous structure using refractive index matching
NASA Astrophysics Data System (ADS)
Häfeli, Richard; Altheimer, Marco; Butscher, Denis; Rudolf von Rohr, Philipp
2014-05-01
An aqueous solution of sodium iodide and zinc iodide is proposed as a fluid that matches the refractive index of a solid manufactured by rapid prototyping. This enabled optical measurements in single-phase flow through porous structures. Experiments were also done with an organic index-matching fluid (anisole) in porous structures of different dimensions. To compare experiments with different viscosities and dimensions, we employed Reynolds similarity to deduce the scaling laws. One of the target quantities of our investigation was the dissipation rate of turbulent kinetic energy. Different models for estimating the dissipation rate were evaluated by comparing isotropy ratios. As in many other studies, our experiments were not capable of resolving the velocity field down to the Kolmogorov length scale, and therefore the dissipation rate has to be considered underestimated. This is visible in experiments of different relative resolutions. However, being near the Kolmogorov scale allows estimating a reproducible, yet underestimated, spatial distribution of the dissipation rate inside the porous structure. Based on these results, the model was used to estimate the turbulent diffusivity. Comparing it to the dispersion coefficient obtained in the same porous structure, we conclude that turbulent diffusivity makes up only a small part of mass transfer in the axial direction; the main part is therefore attributed to Taylor dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott Stewart, D., E-mail: dss@illinois.edu; Hernández, Alberto; Lee, Kibaek
The estimation of pressure and temperature histories, which are required to understand chemical pathways in condensed-phase explosives during detonation, is discussed. We argue that estimates made from continuum models, calibrated by macroscopic experiments, are essential to inform modern, atomistic-based reactive chemistry simulations at detonation pressures and temperatures. We present easy-to-implement methods, for general equations of state and arbitrarily complex chemical reaction schemes, that can be used to compute reactive flow histories for the constant-volume process, the constant-energy process, and the expansion process on the Rayleigh line of a steady Chapman-Jouguet detonation. A brief review of the state of the art of two-component reactive flow models is given that highlights the Ignition and Growth model of Lee and Tarver [Phys. Fluids 23, 2362 (1980)] and the Wide-Ranging Equation of State model of Wescott, Stewart, and Davis [J. Appl. Phys. 98, 053514 (2005)]. We discuss evidence from experiments and reactive molecular dynamics simulations that motivates models with several components, instead of the two that have traditionally been used to describe the results of macroscopic detonation experiments. We present simplified examples of a formulation for a hypothetical explosive that uses simple (ideal) equation of state forms, with detailed comparisons. We then estimate pathways computed from two-component models of real explosive materials that have been calibrated with macroscopic experiments.
Terwilliger, Thomas C.; Bunkóczi, Gábor; Hung, Li-Wei; ...
2016-03-01
A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, we describe algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346-358] is used to develop methods for planning a SAD experiment, scaling SAD data sets, and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing.
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
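For illustration, a minimal simulation of the correlation approach for m = 3 response categories; the simulated observer, its weights, and the category rule are assumptions of this sketch, not the papers' designs. Weights for each category are estimated by correlating each component's trial-by-trial perturbation with a category indicator:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_components, m = 5000, 6, 3

X = rng.standard_normal((n_trials, n_components))    # component perturbations
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.0])    # observer's internal weights
decision = X @ true_w + 0.5 * rng.standard_normal(n_trials)
resp = np.digitize(decision, np.quantile(decision, [1 / m, 2 / m]))  # m categories

for cat in range(m):
    ind = (resp == cat).astype(float)
    w_hat = [np.corrcoef(X[:, j], ind)[0, 1] for j in range(n_components)]
    print(cat, np.round(w_hat, 2))    # recovered weight profile per category
```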
Scherrer, Stephen R; Rideout, Brendan P; Giorli, Giacomo; Nosal, Eva-Marie; Weng, Kevin C
2018-01-01
Passive acoustic telemetry using coded transmitter tags and stationary receivers is a popular method for tracking movements of aquatic animals. Understanding the performance of these systems is important in array design and in analysis. Close proximity detection interference (CPDI) is a condition in which receivers fail to reliably detect tag transmissions, generally occurring when the tag and receiver are near one another in acoustically reverberant settings. Here we confirm that transmission multipaths reflected off the environment and arriving at a receiver with sufficient delay relative to the direct signal cause CPDI. We propose a ray-propagation-based model that estimates the arrival of energy via multipaths to predict CPDI occurrence, and we show how deeper deployments are particularly susceptible. A series of experiments was designed to develop and validate our model. Deep (300 m) and shallow (25 m) ranging experiments were conducted using Vemco V13 acoustic tags and VR2-W receivers. Probabilistic modeling of hourly detections was used to estimate the average distance at which a tag could be detected. A mechanistic model predicting the arrival time of multipaths was developed using parameters from these experiments to calculate the direct and multipath path lengths. This model was retroactively applied to the previous ranging experiments to validate CPDI observations. Two additional experiments were designed to validate CPDI predictions with respect to combinations of deployment depth and distance. Playback of recorded tags in a tank environment was used to confirm that multipaths arriving after the receiver's blanking interval cause CPDI effects. Analysis of empirical data estimated the average maximum detection radius (AMDR), the farthest distance at which 95% of tag transmissions went undetected by receivers, at between 840 and 846 m for the deep ranging experiment across all factor permutations. From these results, CPDI was estimated within a 276.5 m radius of the receiver. These empirical estimates were consistent with mechanistic model predictions: CPDI affected detection at distances closer than 259-326 m from receivers. AMDR determined from the shallow ranging experiment was between 278 and 290 m, with CPDI neither predicted nor observed. Results of the validation experiments were consistent with mechanistic model predictions. Finally, we were able to predict detection/nondetection with 95.7% accuracy using the mechanistic model's criterion when simulating transmissions with and without multipaths. Close proximity detection interference results from combinations of depth and distance that produce reflected signals arriving after a receiver's blanking interval has ended. Deployment scenarios resulting in CPDI can be predicted with the proposed mechanistic model. For deeper deployments, sea-surface reflections can produce CPDI conditions, resulting in transmission rejection, regardless of the reflective properties of the seafloor.
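A minimal sketch of the image-source geometry behind such a mechanistic model: a sea-surface reflection behaves like a source mirrored above the surface, and CPDI is predicted when the extra path delay exceeds the receiver's blanking interval. The sound speed and blanking interval below are assumed values, and the paper's model also treats other reflectors and transmission timing:

```python
import math

C = 1500.0         # sound speed in seawater [m/s] (assumed)
BLANKING = 0.260   # receiver blanking interval [s] (assumed)

def cpdi_predicted(depth_tag, depth_rx, horiz_dist):
    direct = math.hypot(horiz_dist, depth_rx - depth_tag)
    # Image source mirrored across the sea surface:
    surface_path = math.hypot(horiz_dist, depth_rx + depth_tag)
    delay = (surface_path - direct) / C
    return delay > BLANKING   # late-arriving multipath -> CPDI predicted

print(cpdi_predicted(depth_tag=290, depth_rx=300, horiz_dist=100))  # deep: True
print(cpdi_predicted(depth_tag=20, depth_rx=25, horiz_dist=100))    # shallow: False
```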
Body size estimation of self and others in females varying in BMI.
Thaler, Anne; Geuss, Michael N; Mölbert, Simone C; Giel, Katrin E; Streuber, Stephan; Romero, Javier; Black, Michael J; Mohler, Betty J
2018-01-01
Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggests that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
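The averaging-then-bootstrap idea can be sketched as follows (Python; this is a schematic of the approach, not the authors' implementation: the fitted models, their weights, and the benchmark response level are placeholders):

```python
from scipy.optimize import brentq

def averaged_response(d, models, weights):
    """Model-averaged response P(d); `models` are fitted curves d -> P(d)
    and `weights` are, e.g., AIC-based model weights summing to one."""
    return sum(w * m(d) for m, w in zip(models, weights))

def bmd(models, weights, bmr=0.10, dmax=100.0):
    """Dose where extra risk (P(d) - P(0)) / (1 - P(0)) equals the BMR;
    assumes the BMR is reached somewhere below dmax."""
    p0 = averaged_response(0.0, models, weights)
    extra = lambda d: (averaged_response(d, models, weights) - p0) / (1.0 - p0) - bmr
    return brentq(extra, 1e-9, dmax)

# BMDL sketch: resample the dose-group data (bootstrap), refit the models
# and weights each time, recompute the BMD, and report the 5th percentile.
```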
A prospective earthquake forecast experiment in the western Pacific
NASA Astrophysics Data System (ADS)
Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan
2012-09-01
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
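The likelihood-based consistency checks used in CSEP-style evaluations are compact to state; a generic sketch (Python; gridded Poisson L- and N-tests with toy numbers, not the experiment's data or exact code):

```python
import numpy as np
from scipy.stats import poisson

def l_test_statistic(forecast_rates, observed_counts):
    """Joint log-likelihood of gridded counts under independent
    Poisson rates (the usual CSEP L-test statistic)."""
    return poisson.logpmf(observed_counts, forecast_rates).sum()

def n_test(forecast_rates, observed_counts):
    """Number test: quantiles of the observed total under Poisson(N_fore);
    returns (P(N <= n_obs), P(N >= n_obs))."""
    n_fore, n_obs = forecast_rates.sum(), observed_counts.sum()
    return poisson.cdf(n_obs, n_fore), poisson.sf(n_obs - 1, n_fore)

rates = np.full(10, 0.3)                        # toy 10-cell rate forecast
obs = np.array([0, 1, 0, 0, 2, 0, 0, 0, 1, 0])  # observed counts per cell
print(l_test_statistic(rates, obs), n_test(rates, obs))
```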
An Educational Software for Simulating the Sample Size of Molecular Marker Experiments
ERIC Educational Resources Information Center
Helms, T. C.; Doetkott, C.
2007-01-01
We developed educational software to show graduate students how to plan molecular marker experiments. These computer simulations give the students feedback on the precision of their experiments. The objective of the software was to show students using a hands-on approach how: (1) environmental variation influences the range of the estimates of the…
ERIC Educational Resources Information Center
Tipton, Elizabeth
2013-01-01
As a result of the use of random assignment to treatment, randomized experiments typically have high internal validity. However, units are very rarely randomly selected from a well-defined population of interest into an experiment; this results in low external validity. Under nonrandom sampling, this means that the estimate of the sample average…
The 1980 US/Canada wheat and barley exploratory experiment. Volume 2: Addenda
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Prior, H. L.; Payne, R. W.; Disler, J. M.
1983-01-01
Three study areas supporting the U.S./Canada Wheat and Barley Exploratory Experiment are discussed, including an evaluation of the experiment shakedown test analyst labeling results, an evaluation of the crop proportion estimate procedure 1A component, and an evaluation of spring wheat and barley crop calendar models for the 1979 crop year.
ERIC Educational Resources Information Center
Pavel, Ioana E.; Alnajjar, Khadijeh S.; Monahan, Jennifer L.; Stahler, Adam; Hunter, Nora E.; Weaver, Kent M.; Baker, Joshua D.; Meyerhoefer, Allie J.; Dolson, David A.
2012-01-01
A novel laboratory experiment was successfully implemented for undergraduate and graduate students in physical chemistry and nanotechnology. The main goal of the experiment was to rigorously determine the surface-enhanced Raman scattering (SERS)-based sensing capabilities of colloidal silver nanoparticles (AgNPs). These were quantified by…
Horizontal mixing in the Southern Ocean from Argo float trajectories
NASA Astrophysics Data System (ADS)
Roach, Christopher J.; Balwada, Dhruv; Speer, Kevin
2016-08-01
We provide the first observational estimate of the circumpolar distribution of cross-stream eddy diffusivity at 1000 m in the Southern Ocean using Argo float trajectories. We show that Argo float trajectories, from the float surfacing positions, can be used to estimate lateral eddy diffusivities in the ocean, and that these estimates are comparable to those obtained from RAFOS floats where they overlap. When the Southern Ocean State Estimate (SOSE) velocity fields were used to advect synthetic particles with imposed "Argo-like" and "RAFOS-like" behavior, diffusivity estimates from both sets of synthetic particles agreed closely at three dynamically very different test sites (the Kerguelen Island region, the Southeast Pacific Ocean, and the Scotia Sea), supporting our approach. Observed cross-stream diffusivities at 1000 m, calculated from Argo float trajectories, ranged between 300 and 2500 m² s⁻¹, with peaks corresponding to topographic features associated with the Scotia Sea, the Kerguelen Plateau, the Campbell Plateau, and the Southeast Pacific Ridge. These observational estimates agree with previous regional estimates from the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) near the Drake Passage, and with other estimates from natural tracers (helium), inverse modeling studies, and current meter measurements. These estimates are also compared to the suppressed eddy diffusivity expected in the presence of mean flows. The comparison suggests that, away from regions of strong topographic steering, suppression explains both the structure and magnitude of the eddy diffusivity, but that eddy diffusivities in regions of topographic steering are greater than theoretically expected, so the ACC experiences localized enhanced cross-stream mixing in these regions.
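The underlying single-particle estimate is Taylor's relation between diffusivity and dispersion growth. A minimal sketch (Python; array shapes are illustrative, and real Argo-based estimates must additionally handle the floats' surfacing-only sampling, mean-flow removal, and rotation into cross-stream coordinates):

```python
import numpy as np

def lateral_diffusivity(dx, dt):
    """Taylor (1921) single-particle diffusivity, K = 0.5 * d<x'^2>/dt.

    dx: (n_floats, n_lags) cross-stream displacement anomalies in m,
        one row per trajectory, with the mean flow already removed.
    dt: lag spacing in seconds.
    Returns K (m^2/s) at each lag; K plateaus once velocities decorrelate.
    """
    dispersion = np.nanmean(dx**2, axis=0)   # ensemble-mean dispersion
    return 0.5 * np.gradient(dispersion, dt)
```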
Kamoi, Shun; Pretty, Christopher; Docherty, Paul; Squire, Dougie; Revie, James; Chiew, Yeong Shiong; Desaive, Thomas; Shaw, Geoffrey M; Chase, J Geoffrey
2014-01-01
Accurate, continuous, left ventricular stroke volume (SV) measurements can convey large amounts of information about patient hemodynamic status and response to therapy. However, direct measurements are highly invasive in clinical practice, and current procedures for estimating SV require specialized devices and significant approximation. This study investigates the accuracy of a three-element Windkessel model combined with an aortic pressure waveform to estimate SV. Aortic pressure is separated into two components capturing: 1) resistance and compliance, and 2) characteristic impedance. This separation provides model-element relationships enabling SV to be estimated while requiring only one of the three element values to be known or estimated. Beat-to-beat SV estimation was performed using population-representative optimal values for each model element. This method was validated using measured SV data from porcine experiments (N = 3 female Pietrain pigs, 29-37 kg) in which both ventricular volume and aortic pressure waveforms were measured simultaneously. The median difference between measured SV from left ventricle (LV) output and estimated SV was 0.6 ml, with a 90% range (5th-95th percentile) of -12.4 ml to 14.3 ml. During periods when changes in SV were induced, cross correlations between estimated and measured SV were above R = 0.65 for all cases. The method presented demonstrates that the magnitude and trends of SV can be accurately estimated from pressure waveforms alone, without the need for identification of complex physiological metrics whose strength of correlation may vary significantly from patient to patient.
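A generic three-element Windkessel inversion along these lines is sketched below (Python; the element values Zc, R, and C stand in for the paper's population-representative values, and the forward-Euler integration is for illustration only):

```python
import numpy as np

def stroke_volume(p_ao, dt, zc, r, c):
    """Estimate flow and stroke volume from an aortic pressure waveform
    using a three-element Windkessel with known (or assumed) elements.

    Model: P = Zc*Q + Pwk and C*dPwk/dt = Q - Pwk/R, which gives
           dPwk/dt = ((P - Pwk)/Zc - Pwk/R) / C.
    With pressures in mmHg and impedances in mmHg*s/ml, Q is in ml/s.
    """
    pwk = np.empty_like(p_ao)
    pwk[0] = p_ao[0]                  # initialize at the measured pressure
    for i in range(len(p_ao) - 1):    # forward-Euler integration
        q_i = (p_ao[i] - pwk[i]) / zc
        pwk[i + 1] = pwk[i] + dt * (q_i - pwk[i] / r) / c
    q = (p_ao - pwk) / zc
    return np.clip(q, 0.0, None).sum() * dt  # SV = ejection-flow integral
```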
Influence of defects on the thermal conductivity of compressed LiF
Jones, R. E.; Ward, D. K.
2018-02-08
We report that defect formation in LiF, which is used as an observation window in ramp and shock experiments, has significant effects on its transmission properties. Given the extreme conditions of the experiments, it is hard to measure the change in transmission directly. Using molecular dynamics, we estimate the change in conductivity as a function of the concentration of likely point and extended defects using a Green-Kubo technique with careful treatment of size effects. With this data, we form a model of the mean behavior and its estimated error; then, we use this model to predict the conductivity of a large sample of defective LiF resulting from a direct simulation of ramp compression as a demonstration of the accuracy of its predictions. Given estimates of defect densities in a LiF window used in an experiment, the model can be used to correct the observations of thermal energy through the window. Also, the methodology we develop is extensible to modeling, with quantified uncertainty, the effects of a variety of defects on the thermal conductivity of solid materials.
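The Green-Kubo estimate itself is an integral of the heat-flux autocorrelation; a minimal sketch (Python; the flux series would come from an equilibrium MD run, and the paper's careful size-effect treatment is not reproduced here):

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, T, kB=1.380649e-23, n_lags=None):
    """Thermal conductivity from kappa = V/(kB*T^2) * int <J(0)J(t)> dt,
    for one Cartesian component J of the heat flux (SI units); average
    the three components for an isotropic estimate."""
    n = len(J)
    m = n // 2 if n_lags is None else n_lags
    # time-averaged autocorrelation <J(0)J(t)> at lags 0..m-1
    acf = np.array([np.dot(J[:n - k], J[k:]) / (n - k) for k in range(m)])
    return volume / (kB * T**2) * acf.sum() * dt  # rectangle-rule integral
```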
Neuman, Tzahi; Neuman, Einat
2009-10-01
This article aims to estimate the preference structure of attributes of maternity wards, as expressed by Israeli women who recently gave birth. The estimation is based on data generated by an experiment that simulates actual choices of hospital (a discrete choice experiment). The sample includes 323 women who gave birth in the year 2003 in three large hospitals in the central part of Israel (the Sheba Medical Center, Tel Hashomer; the Rabin Medical Center, Beilinson Campus in Petah Tikva; and the Meir Medical Center in Kfar Saba). It was found that the ranking of attributes (in descending order of importance) is: professionalism of the staff, attention of staff towards the patient, transfer of information, and travel time from residence to the hospital. Most women who gave birth did not care about the number of beds in the hospitalization room (which represents the physical facilities). The technique used in this study is unique and can assist policymakers in allocation of funds, analysis of new policies even before they are implemented, and inclusion of new services.
Influence of defects on the thermal conductivity of compressed LiF
NASA Astrophysics Data System (ADS)
Jones, R. E.; Ward, D. K.
2018-02-01
Defect formation in LiF, which is used as an observation window in ramp and shock experiments, has significant effects on its transmission properties. Given the extreme conditions of the experiments it is hard to measure the change in transmission directly. Using molecular dynamics, we estimate the change in conductivity as a function of the concentration of likely point and extended defects using a Green-Kubo technique with careful treatment of size effects. With this data, we form a model of the mean behavior and its estimated error; then, we use this model to predict the conductivity of a large sample of defective LiF resulting from a direct simulation of ramp compression as a demonstration of the accuracy of its predictions. Given estimates of defect densities in a LiF window used in an experiment, the model can be used to correct the observations of thermal energy through the window. In addition, the methodology we develop is extensible to modeling, with quantified uncertainty, the effects of a variety of defects on the thermal conductivity of solid materials.
The Role of Noncriterial Recollection in Estimating Recollection and Familiarity
Parks, Colleen M.
2007-01-01
Noncriterial recollection (ncR) is recollection of details that are irrelevant to task demands. It has been shown to elevate familiarity estimates and to be functionally equivalent to familiarity in the process dissociation procedure (Yonelinas & Jacoby, 1996). However, Toth and Parks (2006) found no ncR in older adults, and hypothesized that this absence was related to older adults’ criterial recollection deficit. To test this hypothesis, as well as whether ncR is functionally equivalent to familiarity and increases the subjective experience of familiarity, remember-know and confidence-rating methods were used to estimate recollection and familiarity with young adults, young adults in a divided-attention condition (Experiment 1), and older adults. Supporting Toth and Parks’ hypothesis, ncR was found in all groups, but was consistently larger for groups with higher criterial recollection. Response distributions and receiver-operating characteristics revealed further similarities to criterial recollection and suggested that neither the experience nor usefulness of familiarity was enhanced by ncR. Overall, the results suggest that ncR does not differ fundamentally from criterial recollection. PMID:18591986
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Iliff, Kenneth
2002-01-01
A maximum-likelihood output-error parameter estimation technique is used to obtain stability and control derivatives for the NASA Dryden Flight Research Center SR-71A airplane and for configurations that include experiments externally mounted to the top of the fuselage. This research is being done as part of the envelope clearance for the new experiment configurations. Flight data are obtained at speeds ranging from Mach 0.4 to Mach 3.0, with an extensive amount of test points at approximately Mach 1.0. Pilot-input pitch and yaw-roll doublets are used to obtain the data. This report defines the parameter estimation technique used, presents stability and control derivative results, and compares the derivatives for the three configurations tested. The experimental configurations studied generally show acceptable stability, control, trim, and handling qualities throughout the Mach regimes tested. The reduction of directional stability for the experimental configurations is the most significant aerodynamic effect measured and identified as a design constraint for future experimental configurations. This report also shows the significant effects of aircraft flexibility on the stability and control derivatives.
The performance of a simple degree-day estimate of snow accumulation to an alpine watershed
R. A. Sommerfeld; R. C. Musselman; G. L. Wooldridge; M. A. Conrad
1991-01-01
We estimated the yearly snow accumulation to the Glacier Lakes Ecosystems Experiments Site (GLEES) for the winters of 1987-88, 1988-89, and 1989-90, using a simple degree-day model developed by J. Martinec and A. Rango. Comparisons with other data indicate that the estimates are accurate. In particular, a calibration with an intensive snow core-probe survey in 1989-90...
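The degree-day relation at the core of such models is a one-line computation; a sketch (Python; the degree-day factor and base temperature are illustrative and must be calibrated for the site):

```python
def degree_day_melt(daily_mean_temps_c, ddf=4.0, t_base=0.0):
    """Degree-day snowmelt: melt = DDF * max(T - T_base, 0) per day,
    with DDF in mm/(degC*day); returns total melt depth in mm."""
    return sum(ddf * max(t - t_base, 0.0) for t in daily_mean_temps_c)

# one week of daily mean temperatures (degC)
print(degree_day_melt([-3.0, -1.0, 0.5, 2.0, 4.5, 1.0, -2.0]))  # 32.0
```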
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.
Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford
2015-08-07
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
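For context, the baseline target-decoy competition estimate that these protocols refine fits in a few lines (Python; the score arrays and threshold are placeholders):

```python
import numpy as np

def tdc_fdr(target_scores, decoy_scores, threshold):
    """Target-decoy competition FDR at a score threshold, in the style
    of Elias and Gygi: the decoy-win count estimates the number of false
    target wins, assuming one best match per spectrum from a
    concatenated target+decoy search."""
    t = int(np.sum(np.asarray(target_scores) >= threshold))
    d = int(np.sum(np.asarray(decoy_scores) >= threshold))
    return d / max(t, 1)
```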
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo
This paper proposes a sleep stage estimation method that provides an accurate estimation for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in mixed health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
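The quadrature at issue can be made concrete for a random-intercept logistic model. The sketch below (Python) uses non-adaptive Gauss-Hermite nodes; the adaptive variant recenters and rescales the nodes per subject around the conditional mode, which is what stabilizes the two-observation case studied here (model and names are illustrative):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def marginal_loglik(beta, sigma, y, n_nodes=20):
    """Marginal log-likelihood of a random-intercept logistic model:
    P(y_ij = 1 | b_i) = expit(beta + b_i), with b_i ~ N(0, sigma^2),
    integrating over b_i by Gauss-Hermite quadrature.
    y: (n_subjects, n_obs) array of 0/1 responses."""
    x, w = hermgauss(n_nodes)         # nodes/weights for int e^{-x^2} f(x) dx
    b = np.sqrt(2.0) * sigma * x      # change of variables to N(0, sigma^2)
    p = expit(beta + b)               # success probability at each node
    k = y.sum(axis=1, keepdims=True)  # successes per subject
    lik = p**k * (1.0 - p)**(y.shape[1] - k)  # (n_subjects, n_nodes)
    return np.log(lik @ w / np.sqrt(np.pi)).sum()
```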
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-01-01
Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg is used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X1) and blending times (X2) for the theophylline 100-mg tablet. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) of the 100-mg tablet were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions of lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.
Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.
Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke
2014-09-01
The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
PROPOSED EPA SSO RULE AND THE NATIONAL SSO PROBLEM
It is estimated there are about 40,000 sanitary-sewer overflow (SSO) events nationwide yearly. In 1995, 65% of the 79 large municipalities responding to a national survey experienced SSOs. Another study estimated that approximately 75% of the nation's SS systems function at 50% o...
Spacelab cryogenic propellant management experiment
NASA Technical Reports Server (NTRS)
Cady, E. C.
1976-01-01
The conceptual design of a Spacelab cryogen management experiment was performed to demonstrate the desirability and feasibility of subcritical cryogenic fluid orbital storage and supply. A description of the experimental apparatus, definition of supporting requirements, procedures, data analysis, and a cost estimate are included.
An approach to software cost estimation
NASA Technical Reports Server (NTRS)
Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.
1984-01-01
A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of source estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
Mechanical Impedance of the Non-loaded Lower Leg with Relaxed Muscles in the Transverse Plane
Ficanha, Evandro Maicon; Ribeiro, Guilherme Aramizo; Rastgaar, Mohammad
2015-01-01
This paper describes the protocols and results of experiments for the estimation of the mechanical impedance of the human lower leg in the external-internal direction in the transverse plane under a non-load-bearing condition and with relaxed muscles. The objectives of estimating the lower leg's mechanical impedance are to facilitate the design of passive and active prostheses with mechanical characteristics similar to those of the human lower leg, and to define a reference that can be compared to values from patients suffering from spasticity. The experiments were performed with 10 unimpaired male subjects using a lower extremity rehabilitation robot (Anklebot, Interactive Motion Technologies, Inc.) capable of applying torque perturbations to the foot. The subjects were in a seated position, and the Anklebot recorded the applied torques and the resulting angular movement of the lower leg. In this configuration, the recorded dynamics are due mainly to the rotations of the ankle's talocrural and subtalar joints, along with any contribution of the tibiofibular joints and knee joint. The dynamic mechanical impedance of the lower leg was estimated in the frequency domain with an average coherence of 0.92 within the frequency range of 0-30 Hz, showing a linear correlation between the displacement and the torques within this frequency range under the conditions of the experiment. The mean magnitude of the stiffness of the lower leg (the impedance magnitude averaged over 0-1 Hz) was 4.9 ± 0.74 Nm/rad. Direct estimation of the quasi-static stiffness of the lower leg gives a mean value of 5.8 ± 0.81 Nm/rad. An analysis of variance shows that the stiffness estimates from the two experiments are not statistically different. PMID:26697424
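Frequency-domain impedance estimation of this kind is typically done with spectral ratios; a sketch (Python/SciPy; the window length and the H1-style estimator are assumptions, not the paper's stated choices):

```python
from scipy.signal import csd, coherence

def impedance_estimate(angle, torque, fs, nperseg=1024):
    """Dynamic impedance (torque per unit angle) as the spectral ratio
    Z(f) = S_at(f) / S_aa(f), an H1-style estimate with angle treated
    as the input; coherence serves as the linearity check."""
    f, s_aa = csd(angle, angle, fs=fs, nperseg=nperseg)   # input PSD
    _, s_at = csd(angle, torque, fs=fs, nperseg=nperseg)  # cross-spectrum
    _, coh = coherence(angle, torque, fs=fs, nperseg=nperseg)
    return f, s_at / s_aa, coh
```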
Ju, Xiangqun; Jamieson, Lisa M; Mejia, Gloria C
2016-12-01
To estimate the effect of mothers' education on Indigenous Australian children's dental caries experience while controlling for the mediating effect of children's sweet food intake. The Longitudinal Study of Indigenous Children is a study of two representative cohorts of Indigenous Australian children, aged from 6 months to 2 years (baby cohort) and from 3.5 to 5 years (child cohort) at baseline. The children's primary caregiver undertook a face-to-face interview in 2008, repeated annually for the next 4 years. Data included household demographics, child health (nutrition information and dental health), maternal conditions, and highest qualification levels. Mothers' educational level was classified into four categories: 0-9 years, 10 years, 11-12 years, and >12 years. Children's mean sweet food intake was categorized as <20%, 20-30%, and >30%. After multiple imputation of missing values, a marginal structural model with stabilized inverse probability weights was used to estimate the direct effect of mothers' education level on children's dental decay experience. From 2008 to 2012, complete data on 1720 mother-child dyads were available. Dental caries experience for children was 42.3% over the 5-year period. The controlled direct effect estimates of mothers' education on child dental caries were 1.21 (95% CI: 1.01-1.45), 1.03 (95% CI: 0.91-1.18), and 1.07 (95% CI: 0.93-1.22); after multiple imputation of missing values, the effects were 1.21 (95% CI: 1.05-1.39), 1.06 (95% CI: 0.94-1.19), and 1.06 (95% CI: 0.95-1.19), comparing '0-9', '10', and '11-12' years to >12 years of education. Mothers' education level had a direct effect on children's dental decay experience that was not mediated by sweet food intake and other risk factors when estimated using a marginal structural model. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
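The weighting scheme behind such a marginal structural model can be sketched as follows (Python/statsmodels; every column name is hypothetical, and the log-binomial outcome model in the comment is one common choice rather than the study's stated specification):

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def stabilized_weights(df, exposure, confounders):
    """Stabilized IPW: sw_i = P(A = a_i) / P(A = a_i | L_i), with a
    multinomial model for a categorical exposure. Assumes the exposure
    column is integer-coded 0..K-1 (e.g., education categories)."""
    num = smf.mnlogit(f"{exposure} ~ 1", df).fit(disp=0).predict(df)
    den = smf.mnlogit(f"{exposure} ~ {' + '.join(confounders)}", df).fit(disp=0).predict(df)
    a = df[exposure].astype(int).to_numpy()
    rows = np.arange(len(df))
    return np.asarray(num)[rows, a] / np.asarray(den)[rows, a]

# w = stabilized_weights(df, "edu", ["income", "region", "child_age"])
# msm = smf.glm("caries ~ C(edu)", df, freq_weights=w,
#               family=sm.families.Binomial(sm.families.links.Log())).fit()
```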
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staehle, R.W.; Agrawal, A.K.
1978-01-01
The straining electrode technique was used to evaluate the stress corrosion cracking (SCC) susceptibility of AISI 304 stainless steel in 20N NaOH solution, and of Inconel 600 Alloy and Incoloy 800 Alloy in boiling 17.5N NaOH solution. The crack propagation rate estimated from the straining experiments correlated well with the previous constant load experiments. It was found that the straining electrode technique is a useful method for estimating, through short term experiments, parameters like crack propagation rate, crack morphology, and repassivation rate, as a function of the electrode potential. The role of alloying elements on the crack propagation rate in the above alloys is also discussed.
Nanohertz gravitational wave searches with interferometric pulsar timing experiments.
Tinto, Massimo
2011-05-13
We estimate the sensitivity to nanohertz gravitational waves of pulsar timing experiments in which two highly stable millisecond pulsars are tracked simultaneously with two neighboring radio telescopes that are referenced to the same timekeeping subsystem (i.e., "the clock"). By taking the difference of the two time-of-arrival residual data streams, we can exactly cancel the clock noise in the combined data set, thereby enhancing the sensitivity to gravitational waves. We estimate that, in the band 10⁻⁹-10⁻⁸ Hz, this "interferometric" pulsar timing technique can potentially improve the sensitivity to gravitational radiation by almost 2 orders of magnitude over that of single telescopes. Interferometric pulsar timing experiments could be performed with neighboring pairs of antennas of NASA's Deep Space Network and the forthcoming large arraying projects.
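The clock cancellation is easy to demonstrate numerically; in the toy sketch below (Python; amplitudes and the common/differential split are illustrative, and real pulsar responses depend on source-pulsar geometry), differencing removes the shared clock term exactly while a differential signal survives:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1024)

clock = np.cumsum(rng.normal(0.0, 5e-8, t.size))  # common clock wander (s)
gw = 1e-7 * np.sin(2 * np.pi * t / 512)           # toy differential signal

r1 = +0.5 * gw + clock + rng.normal(0.0, 1e-8, t.size)  # TOA residuals,
r2 = -0.5 * gw + clock + rng.normal(0.0, 1e-8, t.size)  # pulsars 1 and 2

diff = r1 - r2  # clock term cancels exactly; the gw signature remains
```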
NASA Technical Reports Server (NTRS)
Banerjee, S. K.
1974-01-01
The direction and magnitude of natural remanent magnetization of five approximately 3-g subsamples of 72275 and 72255, and the high-field saturation magnetization, coercive force, and isothermal remanent magnetization of a 100-mg chip from each of these samples, were studied. Given an understanding of the magnetization processes, group 1 experiments provide information about the absolute direction of the ancient magnetizing field and a qualitative estimate of its size (paleointensity). The group 2 experiments yield a quantitative estimate of the iron content and a qualitative idea of the grain sizes.
NASA Technical Reports Server (NTRS)
Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.
2009-01-01
This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic to leverage operational experience data by aggregating ORU, associated function (vehicle sub-system), and vehicle effective k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements to the heuristic methods and potential benefits to ISS supportability engineering analysis.
Studying the Pc(4450) resonance in J/ψ photoproduction off protons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blin, A. N. Hiller; Fernandez-Ramirez, C.; Jackura, A.
2016-08-01
In this study, a resonance-like structure, the Pc(4450), has recently been observed in the J/ψ p spectrum by the LHCb collaboration. We discuss the feasibility of detecting this structure in J/ψ photoproduction in the CLAS12 experiment at JLab. We present a first estimate of the upper limit for the branching ratio of the Pc(4450) to J/ψ p. Our estimates, which take into account the experimental resolution effects, lead to a sizable cross section close to the J/ψ production threshold, which makes future experiments covering this region very promising.
Consumption and diffusion of dissolved oxygen in sedimentary rocks.
Manaka, M; Takeda, M
2016-10-01
Fe(II)-bearing minerals (e.g., biotite, chlorite, and pyrite) are a promising reducing agent for the consumption of atmospheric oxygen in repositories for the geological disposal of high-level radioactive waste. To estimate effective diffusion coefficients (D_e, in m² s⁻¹) for dissolved oxygen (DO) and the reaction rates for the oxidation of Fe(II)-bearing minerals in a repository environment, we conducted diffusion-chemical reaction experiments using intact rock samples of Mizunami sedimentary rock. In addition, we conducted batch experiments on the oxidation of crushed sedimentary rock by DO in a closed system. From the results of the diffusion-chemical reaction experiments, we estimated the values of D_e for DO to lie within the range 2.69×10⁻¹¹
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
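A bare-bones version of the soft-thresholding-plus-projection iteration might look like this sketch (Python; the plain l1 penalty, the eigenvalue floor, and thresholding of the diagonal are simplifications of the paper's generalized thresholding operator):

```python
import numpy as np

def soft_threshold(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_pd_covariance(S, lam, eps=1e-3, rho=1.0, n_iter=500):
    """ADMM for: minimize 0.5*||Sigma - S||_F^2 + lam*||Theta||_1
    subject to Sigma = Theta and Sigma >= eps*I (eigenvalue constraint)."""
    theta = S.copy()
    u = np.zeros_like(S)
    for _ in range(n_iter):
        # Sigma-update: quadratic minimizer, then eigenvalue projection
        w, v = np.linalg.eigh((S + rho * (theta - u)) / (1.0 + rho))
        sigma = (v * np.maximum(w, eps)) @ v.T
        # Theta-update: entrywise soft-thresholding enforces sparsity
        theta = soft_threshold(sigma + u, lam / rho)
        u += sigma - theta  # dual ascent
    return theta
```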
Estimating the size of an open population using sparse capture-recapture data.
Huggins, Richard; Stoklosa, Jakub; Roach, Cameron; Yip, Paul
2018-03-01
Sparse capture-recapture data from open populations are difficult to analyze using currently available frequentist statistical methods. However, in closed capture-recapture experiments, the Chao sparse estimator (Chao, 1989, Biometrics 45, 427-438) may be used to estimate population sizes when there are few recaptures. Here, we extend the Chao (1989) closed population size estimator to the open population setting by using linear regression and extrapolation techniques. We conduct a small simulation study and apply the models to several sparse capture-recapture data sets. © 2017, The International Biometric Society.
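The closed-population starting point is a short computation on the capture-frequency counts; the regression-and-extrapolation extension to open populations is not shown (sketch in Python):

```python
def chao_estimate(capture_counts):
    """Chao (1989) lower bound for a closed population:
    N = S + f1^2 / (2*f2), where S is the number of distinct animals
    seen and f1, f2 count animals caught exactly once and twice."""
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    s = sum(1 for c in capture_counts if c > 0)
    if f2 == 0:                        # bias-corrected form avoids /0
        return s + f1 * (f1 - 1) / 2.0
    return s + f1**2 / (2.0 * f2)
```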
Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site
NASA Astrophysics Data System (ADS)
Wang, Yu; Aladejare, Adeyemi Emman
2016-09-01
Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from quantitative GSI chart, as an alternative to the direct observational method which requires vast geological experience of rock. GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty which arises from its development. The presence of such model uncertainty affects the GSI estimated from GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty during GSI estimation from the GSI chart. A major challenge for quantifying the GSI chart model uncertainty is a lack of the original datasets that have been used to develop the GSI chart, since the GSI chart was developed from past experience without referring to specific datasets. This paper intends to tackle this problem by developing a Bayesian approach for quantifying the model uncertainty in GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using GSI chart for characterization of site-specific GSI in a transparent manner.
NASA Astrophysics Data System (ADS)
Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.
2017-03-01
We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.
On parameterization of the inverse problem for estimating aquifer properties using tracer data
NASA Astrophysics Data System (ADS)
Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.
2012-06-01
In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).
Jones, R. E.; Ward, D. K.
2016-07-18
Here, given the unique optical properties of LiF, it is often used as an observation window in high-temperature and -pressure experiments; hence, estimates of its transmission properties are necessary to interpret observations. Since direct measurements of the thermal conductivity of LiF at the appropriate conditions are difficult, we resort to molecular simulation methods. Using an empirical potential validated against ab initio phonon density of states, we estimate the thermal conductivity of LiF at high temperatures (1000–4000 K) and pressures (100–400 GPa) with the Green-Kubo method. We also compare these estimates to those derived directly from ab initio data. To ascertain the correct phase of LiF at these extreme conditions, we calculate the (relative) phase stability of the B1 and B2 structures using a quasiharmonic ab initio model of the free energy. We also estimate the thermal conductivity of LiF in a uniaxial loading state that emulates initial stages of compression in high-stress ramp loading experiments and show the degree of anisotropy induced in the conductivity due to deformation.
Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)
NASA Astrophysics Data System (ADS)
Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.
2017-12-01
The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing the uncertainty of these estimates, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux tower measurements of CO2 fluxes during production of major agronomic crops. Currently, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in the RMSE of NEE predictions of between 3.78% and 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.
On experimental damage localization by SP2E: Application of H∞ estimation and oblique projections
NASA Astrophysics Data System (ADS)
Lenzen, Armin; Vollmering, Max
2018-05-01
In this article, experimental damage localization based on H∞ estimation and state projection estimation error (SP2E) is studied. Based on an introduced difference process, a state space representation is derived for advantageous numerical solvability. Because real structural excitations are presumed to be unknown, a general input is applied, which allows synchronization and normalization. Furthermore, state projections are introduced to enhance damage identification. While first experiments to verify the SP2E method have already been conducted and published, further laboratory results are analyzed here. SP2E is used to experimentally localize stiffness degradations and mass alterations, and the influence of projection techniques is analyzed. In summary, the SP2E method is able to localize structural alterations, as observed in the results of the laboratory experiments.
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1981-01-01
The crop identification procedures used were for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinement prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data were not available for high-density barley, of primary importance in foreign regions, and the low-density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis to a variety of agricultural and meteorological conditions representative of the global environment.
Estimation of Total Tree Height from Renewable Resources Evaluation Data
Charles E. Thomas
1981-01-01
Many ecological, biological, and genetic studies use the measurement of total tree height. Until recently, the Southern Forest Experiment Station's inventory procedures through Renewable Resources Evaluation (RRE) have not included total height measurements. This note provides equations to estimate total height based on other RRE measurements.
Considerations in cross-validation type density smoothing with a look at some data
NASA Technical Reports Server (NTRS)
Schuster, E. F.
1982-01-01
Experience gained in applying nonparametric maximum likelihood techniques of density estimation to judge the comparative quality of various estimators is reported. Two univariate data sets of one hundred samples (one Cauchy, one natural normal) are considered, as well as studies in the multivariate case.
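One standard cross-validation-type criterion is the leave-one-out likelihood of a kernel density estimate; a sketch (Python; the Gaussian kernel and the grid search are illustrative choices, not necessarily those of the report):

```python
import numpy as np

def loo_loglik(x, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(k, 0.0)          # leave each point out of its own fit
    f = k.sum(axis=1) / (n - 1)       # LOO density at each sample point
    return np.log(np.clip(f, 1e-300, None)).sum()

def lcv_bandwidth(x, grid):
    """Likelihood cross-validation: pick the bandwidth maximizing it."""
    x = np.asarray(x, dtype=float)
    return max(grid, key=lambda h: loo_loglik(x, h))
```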
Sweep Width Estimation for Ground Search and Rescue
2004-12-30
Develop data compatible with search planning and POD estimation methods that are designed to use sweep width data. An experimental...important for Park Rangers and man-trackers. Search experience was expected to be a significant correction factor. However, the results indicate...
Overconfidence in Interval Estimates: What Does Expertise Buy You?
ERIC Educational Resources Information Center
McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan
2008-01-01
People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…
Consumer Education Resource Guide, K-12. A Multi-Disciplinary Approach.
ERIC Educational Resources Information Center
Calhoun, Calfrey C.; And Others
The guide suggests methods and resources for planning learning experiences in teaching consumer education to students at the K-12 levels. The major topics and related areas are: (1) financial planning (estimating income, estimating expenses, establishing goals, making decisions, and making the financial plan); (2) buying (importance of planned…
Metrics and Mappings: A Framework for Understanding Real-World Quantitative Estimation.
ERIC Educational Resources Information Center
Brown, Norman R.; Siegler, Robert S.
1993-01-01
A metrics and mapping framework is proposed to account for how heuristics, domain-specific reasoning, and intuitive statistical induction processes are integrated to generate estimates. Results of 4 experiments involving 188 undergraduates illustrate framework usefulness and suggest when people use heuristics and when they emphasize…
Earth-Moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1979-01-01
The following topics are discussed: (1) the Unified Model of Lunar Translation/Rotation (UMLTR); (2) the effect of figure-figure interactions on lunar physical librations; (3) the effect of translational-rotational coupling on the lunar orbit; and (4) an error analysis for estimating lunar inertias from LURE (Lunar Laser Ranging Experiment) data.
Statistical Estimation of Some Irrational Numbers Using an Extension of Buffon's Needle Experiment
ERIC Educational Resources Information Center
Velasco, S.; Roman, F. L.; Gonzalez, A.; White, J. A.
2006-01-01
In the nineteenth century many people tried to seek a value for the most famous irrational number, [pi], by means of an experiment known as Buffon's needle, consisting of throwing randomly a needle onto a surface ruled with straight parallel lines. Here we propose to extend this experiment in order to evaluate other irrational numbers, such as…
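As an illustration of the classical experiment this abstract extends, here is a minimal Monte Carlo sketch of Buffon's needle for π (needle no longer than the line spacing), inverting the crossing probability 2l/(πd). Function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def buffon_pi(n_throws=1_000_000, needle_len=1.0, line_gap=2.0, seed=None):
    """Monte Carlo Buffon's needle: requires needle_len <= line_gap."""
    rng = np.random.default_rng(seed)
    centre = rng.uniform(0.0, line_gap / 2.0, n_throws)   # distance to nearest line
    theta = rng.uniform(0.0, np.pi / 2.0, n_throws)       # needle angle
    crossings = np.count_nonzero(centre <= (needle_len / 2.0) * np.sin(theta))
    # P(cross) = 2 * l / (pi * d), so pi is estimated by inversion.
    return 2.0 * needle_len * n_throws / (line_gap * crossings)

print(buffon_pi(seed=42))   # ~3.14; accuracy grows slowly with n_throws
```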
How Generalizable Is Your Experiment? An Index for Comparing Experimental Samples and Populations
ERIC Educational Resources Information Center
Tipton, Elizabeth
2014-01-01
Although a large-scale experiment can provide an estimate of the average causal impact for a program, the sample of sites included in the experiment is often not drawn randomly from the inference population of interest. In this article, we provide a generalizability index that can be used to assess the degree of similarity between the sample of…
Using Propensity Score Matching Methods to Improve Generalization from Randomized Experiments
ERIC Educational Resources Information Center
Tipton, Elizabeth
2011-01-01
The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…
ERIC Educational Resources Information Center
Nitta, Yasunori; And Others
1984-01-01
Describes a set of experiments (for senior-level biochemistry students) which permit evaluation and estimation of rate and equilibrium constants involving an intermediate in the alpha-chymotrypsin mediated hydrolysis of ortho-hydroxy-alpha-toluenesulfonic acid (I). The only equipment required for the experiments is a well-thermostated double beam…
Japan - USSR joint emulsion chamber experiment at Pamir
NASA Technical Reports Server (NTRS)
1985-01-01
The results are presented for the systematic measurement of cosmic ray showers in the first carbon chamber of the Japan-USSR joint experiment at the Pamir Plateau. The intensity and the energy distribution of electromagnetic particles, of hadrons, and of families are in good agreement with the results of other mountain experiments if the relative error in energy estimation is taken into consideration.
Spatial Representations in Older Adults are Not Modified by Action: Evidence from Tool Use
Costello, Matthew C.; Bloesch, Emily K.; Davoli, Christopher C.; Panting, Nicholas D.; Abrams, Richard A.; Brockmole, James R.
2015-01-01
Theories of embodied perception hold that the visual system is calibrated by both the body schema and the action system, allowing for adaptive action-perception responses. One example of embodied perception involves the effects of tool-use on distance perception, in which wielding a tool with the intention to act upon a target appears to bring that object closer. This tool-based spatial compression (i.e., tool-use effect) has been studied exclusively with younger adults, but it is unknown whether the phenomenon exists with older adults. In this study, we examined the effects of tool use on distance perception in younger and older adults in two experiments. In Experiment 1, younger and older adults estimated the distances of targets just beyond peripersonal space while either wielding a tool or pointing with the hand. Younger adults, but not older adults, estimated targets to be closer after reaching with a tool. In Experiment 2, younger and older adults estimated the distance to remote targets while using either a baton or laser pointer. Younger adults displayed spatial compression with the laser pointer compared to the baton, although older adults did not. Taken together, these findings indicate a generalized absence of the tool-use effect in older adults during distance estimation suggesting that the visuomotor system of older adults does not remap from peripersonal to extrapersonal spatial representations during tool use. PMID:26052886
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...
2017-08-25
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Krnjacki, Lauren; Emerson, Eric; Llewellyn, Gwynnyth; Kavanagh, Anne M
2016-02-01
There are no population-based estimates of the prevalence of interpersonal violence among people with disabilities in Australia. The project aimed to: 1) estimate the prevalence of violence for men and women according to disability status; 2) compare the risk of violence among women and men with disabilities to their same-sex non-disabled counterparts and; 3) compare the risk of violence between women and men with disabilities. We analysed the 2012 Australian Bureau of Statistics Survey on Personal Safety of more than 17,000 adults and estimated the population-weighted prevalence of violence (physical, sexual and intimate partner violence and stalking/harassment) in the past 12 months and since the age of 15. Population-weighted, age-adjusted, logistic regression was used to estimate the odds of violence by disability status and gender. People with disabilities were significantly more likely to experience all types of violence, both in the past 12 months and since the age of 15. Women with disabilities were more likely to experience sexual and partner violence and men were more likely to experience physical violence. These results underscore the need to understand risk factors for violence, raise awareness about violence and to target policies and services to reduce violence against people with disabilities in Australia. © 2015 Public Health Association of Australia.
Tsukahara, Atsushi; Hasegawa, Yasuhisa; Eguchi, Kiyoshi; Sankai, Yoshiyuki
2015-03-01
This paper proposes a novel gait intention estimator for an exoskeleton wearer who needs gait support owing to walking impairment. The gait intention estimator not only detects the intention related to the start of the swing leg based on the behavior of the center of ground reaction force (CoGRF), but also infers the swing speed depending on the walking velocity. The preliminary experiments, categorized into two stages, were performed on a mannequin equipped with the exoskeleton robot [Hybrid Assistive Limb (HAL)] including the proposed estimator. The first experiment verified that the gait support system allowed the mannequin to walk properly and safely. In the second experiment, we confirmed the differences in gait characteristics attributed to the presence or absence of the proposed swing speed profile. As a feasibility study, we evaluated the walking capability of a severe spinal cord injury patient supported by the system during a 10-m walk test. The results showed that the system enabled the patient to accomplish a symmetrical walk from both spatial and temporal standpoints while adjusting the speed of the swing leg. Furthermore, the critical differences in gait between our system and a knee-ankle-foot orthosis were obtained from the CoGRF distribution and the walking time. Through the tests, we demonstrated the effectiveness and practical feasibility of the gait support algorithms.
Design of microarray experiments for genetical genomics studies.
Bueno Filho, Júlio S S; Gilmour, Steven G; Rosa, Guilherme J M
2006-10-01
Microarray experiments have been used recently in genetical genomics studies, as an additional tool to understand the genetic mechanisms governing variation in complex traits, such as for estimating heritabilities of mRNA transcript abundances, for mapping expression quantitative trait loci, and for inferring regulatory networks controlling gene expression. Several articles on the design of microarray experiments discuss situations in which treatment effects are assumed fixed and without any structure. In the case of two-color microarray platforms, several authors have studied reference and circular designs. Here, we discuss the optimal design of microarray experiments whose goals refer to specific genetic questions. Some examples are used to illustrate the choice of a design for comparing fixed, structured treatments, such as genotypic groups. Experiments targeting single genes or chromosomic regions (such as with transgene research) or multiple epistatic loci (such as within a selective phenotyping context) are discussed. In addition, microarray experiments in which treatments refer to families or to subjects (within family structures or complex pedigrees) are presented. In these cases treatments are more appropriately considered to be random effects, with specific covariance structures, in which the genetic goals relate to the estimation of genetic variances and the heritability of transcriptional abundances.
Modeling the Impact of Control on the Attractiveness of Risk in a Prospect Theory Framework
Young, Diana L.; Goodie, Adam S.; Hall, Daniel B.
2010-01-01
Many decisions involve a degree of personal control over event outcomes, which is exerted through one’s knowledge or skill. In three experiments we investigated differences in decision making between prospects based on a) the outcome of random events and b) the outcome of events characterized by control. In Experiment 1, participants estimated certainty equivalents (CEs) for bets based on either random events or the correctness of their answers to U.S. state population questions across the probability spectrum. In Experiment 2, participants estimated CEs for bets based on random events, answers to U.S. state population questions, or answers to questions about 2007 NCAA football game results. Experiment 3 extended the same procedure as Experiment 1 using a within-subjects design. We modeled data from all experiments in a prospect theory framework to establish psychological mechanisms underlying decision behavior. Participants weighted the probabilities associated with bets characterized by control so as to reflect greater risk attractiveness relative to bets based on random events, as evidenced by more elevated weighting functions under conditions of control. This research elucidates possible cognitive mechanisms behind increased risk taking for decisions characterized by control, and implications for various literatures are discussed. PMID:21278906
A Nonlinear, Multiinput, Multioutput Process Control Laboratory Experiment
ERIC Educational Resources Information Center
Young, Brent R.; van der Lee, James H.; Svrcek, William Y.
2006-01-01
Experience in using a user-friendly software package, Mathcad, in the undergraduate chemical reaction engineering course is discussed. Example problems considered for illustration deal with simultaneous solution of linear algebraic equations (kinetic parameter estimation), nonlinear algebraic equations (equilibrium calculations for multiple reactions and…
Assessing the chances of success: naïve statistics versus kind experience.
Hogarth, Robin M; Mukherjee, Kanchan; Soyer, Emre
2013-01-01
Additive integration of information is ubiquitous in judgment and has been shown to be effective even when multiplicative rules of probability theory are prescribed. We explore the generality of these findings in the context of estimating probabilities of success in contests. We first define a normative model of these probabilities that takes account of relative skill levels in contests where only a limited number of entrants can win. We then report 4 experiments using a scenario about a competition. Experiments 1 and 2 both elicited judgments of probabilities, and, although participants' responses demonstrated considerable variability, their mean judgments provide a good fit to a simple linear model. Experiment 3 explored choices. Most participants entered most contests and showed little awareness of appropriate probabilities. Experiment 4 investigated effects of providing aids to calculate probabilities, specifically, access to expert advice and 2 simulation tools. With these aids, estimates were accurate and decisions varied appropriately with economic consequences. We discuss implications by considering when additive decision rules are dysfunctional, the interpretation of overconfidence based on contest-entry behavior, and the use of aids to help people make better decisions.
Participatory health system priority setting: Evidence from a budget experiment.
Costa-Font, Joan; Forns, Joan Rovira; Sato, Azusa
2015-12-01
Budget experiments can provide additional guidance to health system reform requiring the identification of a subset of programs and services that accrue the highest social value to 'communities'. Such experiments simulate a realistic budget resource allocation assessment among competitive programs, and position citizens as decision makers responsible for making 'collective sacrifices'. This paper explores the use of a participatory budget experiment (with 88 participants clustered in social groups) to model public health care reform, drawing from a set of realistic scenarios for potential health care users. We measure preferences by employing a contingent ranking alongside a budget allocation exercise (termed 'willingness to assign') before and after program cost information is revealed. Evidence suggests that the budget experiment method tested is cognitively feasible and incentive compatible. The main downside is the existence of ex-ante "cost estimation" bias. Additionally, we find that participants appeared to underestimate the net social gain of redistributive programs. Relative social value estimates can serve as a guide to aid priority setting at a health system level. Copyright © 2015 Elsevier Ltd. All rights reserved.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Large capacity temporary visual memory
Endress, Ansgar D.; Potter, Mary C.
2014-01-01
Visual working memory (WM) capacity is thought to be limited to three or four items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor — proactive interference — is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation (RSVP) of 5 to 21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory, but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited as in WM experiments, or has the much larger capacity found in the present experiments. PMID:23937181
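Capacity figures like the 9.1 retained pictures quoted above are typically derived from single-probe recognition performance. One common estimator in this literature is Cowan's K; whether these authors used exactly this formula is not stated in the abstract, so the sketch below is an assumption about the general approach, not their analysis.

```python
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for single-probe recognition: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

print(cowan_k(hit_rate=0.75, false_alarm_rate=0.32, set_size=21))  # ~9 items
```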
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times the current day with no more than a 60% increase in separation distance buffer.
Estimation of base temperatures for nine weed species.
Steinmaus, S J; Prather, T S; Holt, J S
2000-02-01
Experiments were conducted to test several methods for estimating low temperature thresholds for seed germination. Temperature responses of nine weeds common in annual agroecosystems were assessed in temperature gradient experiments. Species included summer annuals (Amaranthus albus, A. palmeri, Digitaria sanguinalis, Echinochloa crus-galli, Portulaca oleracea, and Setaria glauca), winter annuals (Hirschfeldia incana and Sonchus oleraceus), and Conyza canadensis, which is classified as a summer or winter annual. The temperature below which development ceases (Tbase) was estimated as the x-intercept of four conventional germination rate indices regressed on temperature, by repeated probit analysis, and by a mathematical approach. An overall Tbase estimate for each species was the average across indices weighted by the reciprocal of the variance associated with the estimate. Germination rates increased linearly with temperature between 15 degrees C and 30 degrees C for all species. Consistent estimates of Tbase were obtained for most species using several indices. The most statistically robust and biologically relevant method was the reciprocal time to median germination, which can also be used to estimate other biologically meaningful parameters. The mean Tbase for summer annuals (13.8 degrees C) was higher than that for winter annuals (8.3 degrees C). The two germination response characteristics, Tbase and slope (rate), influence a species' germination behaviour in the field since the germination inhibiting effects of a high Tbase may be offset by the germination promoting effects of a rapid germination response to temperature. Estimates of Tbase may be incorporated into predictive thermal time models to assist weed control practitioners in making management decisions.
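A minimal sketch of the x-intercept approach described above, assuming germination rate is taken as the reciprocal of time to median germination, with the reciprocal-variance weighting across indices as a separate helper; all data values are hypothetical.

```python
import numpy as np

def base_temperature(temps, germ_rates):
    """Tbase as the x-intercept of a linear fit of germination rate
    (e.g. reciprocal days to median germination) on temperature."""
    slope, intercept = np.polyfit(temps, germ_rates, 1)
    return -intercept / slope

def weighted_tbase(estimates, variances):
    """Overall Tbase across indices, weighted by reciprocal variance."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# Hypothetical germination rates rising linearly from 15 to 30 degrees C.
temps = np.array([15.0, 20.0, 25.0, 30.0])
rates = np.array([0.02, 0.11, 0.20, 0.29])   # 1 / days to median germination
print(base_temperature(temps, rates))        # x-intercept, ~13.9 degrees C
print(weighted_tbase([13.9, 14.4, 13.2], [0.2, 0.6, 0.4]))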
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
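For readers unfamiliar with the estimator being simulated, a minimal two-period difference-in-differences sketch follows; the paper's Monte Carlo designs (propensity matching, clustered standard errors, permutation tests) are not reproduced, and all numbers are synthetic.

```python
import numpy as np

def did(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Two-period difference-in-differences point estimate."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

rng = np.random.default_rng(0)
true_effect = 2.0
# Both groups share a +1 time trend; the treated group adds the policy effect.
ctrl_pre,  ctrl_post  = rng.normal(10, 1, 500), rng.normal(11, 1, 500)
treat_pre, treat_post = rng.normal(12, 1, 500), rng.normal(13 + true_effect, 1, 500)
print(did(treat_pre, treat_post, ctrl_pre, ctrl_post))   # ~2.0
```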
Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M
2017-03-01
Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.
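The inverse finite-element loop amounts to nonlinear least squares over the mismatch between simulated and measured pressurized shapes. The sketch below substitutes a toy one-parameter deflection model for the sample-specific FE simulation (simulate_shape is a hypothetical stand-in) and optimizes log E to keep the modulus positive.

```python
import numpy as np
from scipy.optimize import least_squares

def shape_residuals(log_E, measured_shape, simulate_shape):
    """Mismatch between simulated and measured pressurized shapes; the
    Young's modulus is optimized on a log scale to keep it positive."""
    E = np.exp(log_E[0])
    return simulate_shape(E) - measured_shape

# Hypothetical stand-in for the sample-specific finite-element model:
# a toy deflection profile that scales inversely with stiffness.
simulate_shape = lambda E: 1.0e6 / E * np.linspace(0.0, 1.0, 50)
measured = simulate_shape(2.2e6)                 # "observed" shape, E = 2.2 MPa

fit = least_squares(shape_residuals, x0=[np.log(1.0e6)],
                    args=(measured, simulate_shape))
print(np.exp(fit.x[0]) / 1e6, "MPa")             # recovers ~2.2 MPa
```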
How social information can improve estimation accuracy in human groups.
Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy
2017-11-21
In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
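A minimal sketch of one element of the analysis: fitting a Cauchy distribution to the logarithm of individual estimates, as the abstract describes for quantities about which subjects have little prior knowledge. The data are synthetic and the use of scipy's generic fitter is an assumption, not the authors' code.

```python
import numpy as np
from scipy import stats

# Synthetic individual estimates of a poorly known quantity (true value 500).
rng = np.random.default_rng(1)
log_est = stats.cauchy.rvs(loc=np.log(500.0), scale=0.4,
                           size=1000, random_state=rng)

loc, scale = stats.cauchy.fit(log_est)   # centre and spread of log-estimates
print(np.exp(loc))                       # ~500: a robust collective estimate
```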
NASA Astrophysics Data System (ADS)
Ma, Manyou; Rohling, Robert; Lampe, Lutz
2017-03-01
Synthetic transmit aperture beamforming is an increasingly used method to improve resolution in biomedical ultrasound imaging. Synthetic aperture sequential beamforming (SASB) is an implementation of this concept which features a relatively low computation complexity. Moreover, it can be implemented in a dual-stage architecture, where the first stage only applies simple single receive-focused delay-and-sum (srDAS) operations, while the second, more complex stage is performed either locally or remotely using more powerful processing. However, like traditional DAS-based beamforming methods, SASB is susceptible to inaccurate speed-of-sound (SOS) information. In this paper, we show how SOS estimation can be implemented using the srDAS beamformed image, and integrated into the dual-stage implementation of SASB, in an effort to obtain high resolution images with relatively low-cost hardware. Our approach builds on an existing per-channel radio frequency data-based direct estimation method, and applies an iterative refinement of the estimate. We use this estimate for SOS compensation, without the need to repeat the first stage beamforming. The proposed and previous methods are tested on both simulation and experimental studies. The accuracy of our SOS estimation method is on average 0.38% in simulation studies and 0.55% in phantom experiments, when the underlying SOS in the media is within the range 1450-1620 m/s. Using the estimated SOS, the beamforming lateral resolution of SASB is improved on average 52.6% in simulation studies and 50.0% in phantom experiments.
Resolving Number Ambiguities during Language Comprehension
ERIC Educational Resources Information Center
Bader, Markus; Haussler, Jana
2009-01-01
This paper investigates how readers process number ambiguous noun phrases in subject position. A speeded-grammaticality judgment experiment and two self-paced reading experiments were conducted involving number ambiguous subjects in German verb-end clauses. Number preferences for individual nouns were estimated by means of two questionnaire…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.
2016-05-15
A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the homogeneity criterion χ², an estimate of the pole figure maximum and its coordinate, a deviation of the pole figures of the model in the experiment from the sample in the space of L₁ measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and their determination with respect to measurement parameters is not reliable.
Pazos, Valérie; Mongrain, Rosaire; Tardif, Jean-Claude
2010-06-01
Clinical studies on lipid-lowering therapy have shown that changing the composition of lipid pools reduced significantly the risk of cardiac events associated with plaque rupture. It has been shown also that changing the composition of the lipid pool affects its mechanical properties. However, knowledge about the mechanical properties of human atherosclerotic lesions remains limited due to the difficulty of the experiments. This paper aims to assess the feasibility of characterizing a lipid pool embedded in the wall of a pressurized vessel using finite-element simulations and an optimization algorithm. Finite-element simulations of inflation experiments were used together with nonlinear least squares algorithm to estimate the material model parameters of the wall and of the inclusion. An optimal fit of the simulated experiment and the real experiment was sought with the parameter estimation algorithm. The method was first tested on a single-layer polyvinyl alcohol (PVA) cryogel stenotic vessel, and then, applied on a double-layered PVA cryogel stenotic vessel with a lipid inclusion.
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Anderson, R.; Bumberger, J.; Dietrich, P.; Harpole, W. S.; Maas, H.-G.
2018-05-01
Grassland ecology experiments in remote locations requiring quantitative analysis of the biomass in defined plots are becoming increasingly widespread, but are still limited by manual sampling methodologies. To provide a cost-effective automated solution for biomass determination, several photogrammetric techniques are examined that generate 3D point cloud representations of plots as a basis for estimating aboveground biomass on grassland plots, a key ecosystem variable used in many experiments. Methods investigated include Structure from Motion (SfM) techniques for camera pose estimation with posterior dense matching, as well as the use of a Time of Flight (TOF) 3D camera, a laser light sheet triangulation system, and a coded light projection system. In this context, plants at small (herbage) and medium scales are observed. In the first pilot study presented here, the best results are obtained by applying dense matching after SfM, which is ideal for integration into distributed experiment networks.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method is suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We suggest also plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
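Stripped of the empirical-Bayes machinery, the core idea is inverse-variance weighting of per-patient paired differences; the sketch below shows that reduced form for a single gene, with illustrative variable names and data.

```python
import numpy as np

def weighted_paired_effect(diffs, variances):
    """Inverse-variance weighted mean of per-patient (after - before)
    log-ratios for one gene, plus a z-like statistic."""
    w = 1.0 / np.asarray(variances)
    d = np.asarray(diffs)
    mean = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, mean / se

diffs = [0.8, 1.1, 0.2, 0.9]   # paired log2 fold changes for 4 patients
vars_ = [0.1, 0.1, 0.8, 0.2]   # per-patient variance estimates (noisier arrays)
print(weighted_paired_effect(diffs, vars_))
```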
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
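A minimal sketch in the spirit of the biased-coin/minimization procedures the authors build on, adapted to one continuous covariate: each arriving subject is preferentially assigned to the arm whose covariate mean would become better balanced. This is a simplified stand-in for the paper's sequential blocking method, not the method itself.

```python
import random

def assign(new_x, arms, p_follow=0.8, rng=random):
    """Biased-coin minimization on one continuous covariate: prefer the
    arm whose addition of new_x best balances the two covariate means."""
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    imb_if_0 = abs(mean(arms[0] + [new_x]) - mean(arms[1]))
    imb_if_1 = abs(mean(arms[0]) - mean(arms[1] + [new_x]))
    preferred = 0 if imb_if_0 < imb_if_1 else 1
    arm = preferred if rng.random() < p_follow else 1 - preferred
    arms[arm].append(new_x)
    return arm

arms = [[], []]                                  # covariate values per arm
for x in [3.1, 2.7, 5.0, 4.2, 3.8, 4.9]:         # subjects "trickle in"
    print(assign(x, arms))
```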
Dynamo threshold detection in the von Kármán sodium experiment.
Miralles, Sophie; Bonnefoy, Nicolas; Bourgoin, Mickael; Odier, Philippe; Pinton, Jean-François; Plihon, Nicolas; Verhille, Gautier; Boisson, Jean; Daviaud, François; Dubrulle, Bérengère
2013-07-01
Predicting dynamo self-generation in liquid metal experiments has been an ongoing question for many years. In contrast to simple dynamical systems for which reliable techniques have been developed, the ability to predict the dynamo capacity of a flow and the estimate of the corresponding critical value of the magnetic Reynolds number (the control parameter of the instability) has been elusive, partly due to the high level of turbulent fluctuations of flows in such experiments (with kinetic Reynolds numbers in excess of 10^6). We address these issues here, using the von Kármán sodium experiment and studying its response to an externally applied magnetic field. We first show that a dynamo threshold can be estimated from analysis related to critical slowing down and susceptibility divergence, in configurations for which dynamo action is indeed observed. These approaches are then applied to flow configurations that have failed to self-generate magnetic fields within operational limits, and we quantify the dynamo capacity of these configurations.
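One way to operationalize the susceptibility-divergence idea: if the response gain G to an applied field diverges like 1/(Rm_c - Rm), then 1/G is linear in Rm and its x-intercept estimates the threshold. The sketch below uses hypothetical gain measurements, not data from the experiment.

```python
import numpy as np

# Hypothetical measured gains G(Rm) of the induced-field response to an
# applied field; near threshold, G diverges like 1 / (Rm_c - Rm).
rm   = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
gain = np.array([1.8, 2.3, 3.1, 4.6, 8.9])

# Fit 1/G against Rm and extrapolate to 1/G = 0 to estimate the threshold.
slope, intercept = np.polyfit(rm, 1.0 / gain, 1)
rm_crit = -intercept / slope
print(rm_crit)        # estimated critical magnetic Reynolds number, ~45
```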
Sex differences in the association between countries' smoking prevalence and happiness ratings.
Drehmer, J E
2018-05-02
To examine the cross-sectional relationship between measures of countries' happiness and countries' prevalence of tobacco smoking. Since smoking prevalence differs widely based on sex in some countries and is similar in other countries, it was examined if there was a sex difference in the relationship between smoking prevalence and country-specific happiness ratings. Ecological study design. Countries' age-standardized prevalence estimates of smoking any tobacco product among persons aged 15 years and older (%) for 2015 were obtained from the World Health Organization (WHO) Global Health Observatory. Country-specific scores from the World Happiness Report 2016 Update Ranking of Happiness (2013-15) and the 2015 Gallup Positive Experience Index were matched and correlated to 2015 WHO estimates of tobacco smoking prevalence for males and females. The difference between male and female age-standardized smoking prevalence estimates in each country was calculated by subtracting female prevalence from male prevalence and was then correlated to countries' World Happiness Report scores. The analyses did not control for potential confounders. The association between male age-standardized smoking prevalence estimates and countries' World Happiness Report scores was inversely correlated [r(104) = -0.22, P = 0.03], whereas the association between female age-standardized smoking prevalence estimates and countries' World Happiness Report scores was positively correlated [r(104) = 0.48, P = 0.00]. An inverse correlation was found between the difference in male and female smoking prevalence estimates and countries' World Happiness Report scores [r(104) = -0.50, P = 0.00]. The association between countries' male age-standardized smoking prevalence estimates and the Positive Experience Index scores was inversely correlated [r(99) = -0.37, P = 0.00], whereas the female age-standardized smoking prevalence estimates in countries were not significantly associated with Positive Experience Index scores [r(99) = -0.03, P = 0.75]. There are distinct sex differences between the amounts of happiness measured in countries and male and female smoking rates. Greater inequality in age-standardized smoking prevalence estimates between males and females is associated with lower amounts of happiness as measured by the World Happiness Report. These findings can be applied to population-based strategies aimed at reducing national smoking rates in men and women. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Optimizing hidden layer node number of BP network to estimate fetal weight
NASA Astrophysics Data System (ADS)
Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao
2007-12-01
The ultrasonic estimation of fetal weight before delivery is of great significance in the obstetrical clinic. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth and reducing the risk of newborn complications. In this paper, we introduce a method which combines the golden section method and an artificial neural network (ANN) to estimate fetal weight. The golden section method is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation, and simultaneously avoids choosing the hidden layer node number by subjective experience alone. The estimation coincidence rate reaches 74.19%, and the mean absolute error is 185.83 g.
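A minimal sketch of golden-section search over the hidden-node count, assuming validation error is unimodal in the node number; val_error is a hypothetical stand-in for training the BP network and measuring its error, not the authors' implementation.

```python
import math

def golden_section_int(f, lo, hi):
    """Golden-section search for the integer minimizer of a unimodal f;
    here f(n) would be validation error of a BP net with n hidden nodes."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > 1:
        c = round(b - phi * (b - a))
        d = round(a + phi * (b - a))
        if f(c) < f(d):
            b = d
        else:
            a = c
    return min(range(a, b + 1), key=f)

# Hypothetical stand-in for "train the BP network, return validation RMSE".
val_error = lambda n: (n - 13) ** 2 + 5.0
print(golden_section_int(val_error, 2, 40))      # -> 13 hidden nodes
```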
Numerosity underestimation with item similarity in dynamic visual display.
Au, Ricky K C; Watanabe, Katsumi
2013-01-01
The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are few items overall and that exploiting attentional resources does not eliminate the underestimation effect.
LACIE Phase 1 Classification and Mensuration Subsystem (CAMS) rework experiment
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Hsu, E. M.; Liszcz, C. J.
1976-01-01
An experiment was designed to test the ability of the Classification and Mensuration Subsystem rework operations to improve wheat proportion estimates for segments that had been processed previously. Sites selected for the experiment included three in Kansas and three in Texas, with the remaining five distributed in Montana and North and South Dakota. The acquisition dates were selected to be representative of imagery available in actual operations. No more than one acquisition per biophase was used, and biophases were determined by actual crop calendars. All sites were worked by each of four Analyst-Interpreter/Data Processing Analyst teams, who reviewed the initial processing of each segment and accepted or reworked it for an estimate of the proportion of small grains in the segment. Classification results, acquisitions, classification errors, and performance results between regular CAMS processing and rework are tabulated.
Results of the Magnetometer Navigation (MAGNAV) Inflight Experiment
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Harman, Richard R.; Bar-Itzhack, Itzhack Y.; Lambertson, Mike
2004-01-01
The Magnetometer Navigation (MAGNAV) algorithm is currently running as a flight experiment as part of the Wide Field Infrared Explorer (WIRE) Post-Science Engineering Testbed. Initialization of MAGNAV occurred on September 4, 2003. MAGNAV is designed to autonomously estimate the spacecraft orbit, attitude, and rate using magnetometer and sun sensor data. Since the Earth's magnetic field is a function of time and position, and since time is known quite precisely, the differences between the computed magnetic field and measured magnetic field components, as measured by the magnetometer throughout the entire spacecraft orbit, are a function of the spacecraft trajectory and attitude errors. Therefore, these errors are used to estimate both trajectory and attitude. In addition, the time rate of change of the magnetic field vector is used to estimate the spacecraft rotation rate. The estimation of the attitude and trajectory is augmented with the rate estimation into an Extended Kalman filter blended with a pseudo-linear Kalman filter. Sun sensor data is also used to improve the accuracy and observability of the attitude and rate estimates. This test serves to validate MAGNAV as a single low cost navigation system which utilizes reliable, flight qualified sensors. MAGNAV is intended as a backup algorithm, an initialization algorithm, or possibly a prime navigation algorithm for a mission with coarse requirements. Results from the first six months of operation are presented.
Application of Multilayer Feedforward Neural Networks to Precipitation Cell-Top Altitude Estimation
NASA Technical Reports Server (NTRS)
Spina, Michelle S.; Schwartz, Michael J.; Staelin, David H.; Gasiewski, Albin J.
1998-01-01
The use of passive 118-GHz O2 observations of rain cells for precipitation cell-top altitude estimation is demonstrated by using a multilayer feedforward neural network retrieval system. Rain cell observations at 118 GHz were compared with estimates of the cell-top altitude obtained by optical stereoscopy. The observations were made with 2.4 km horizontal spatial resolution by using the Millimeter-wave Temperature Sounder (MTS) scanning spectrometer aboard the NASA ER-2 research aircraft during the Genesis of Atlantic Lows Experiment (GALE) and the COoperative Huntsville Meteorological EXperiment (COHMEX) in 1986. The neural network estimator applied to MTS spectral differences between clouds and nearby clear air yielded an rms discrepancy of 1.76 km for a combined cumulus, mature, and dissipating cell set and 1.44 km for the cumulus-only set. An improvement in rms discrepancy to 1.36 km was achieved by including additional MTS information on the absolute atmospheric temperature profile. An incremental method for training neural networks was developed that yielded robust results, despite the use of as few as 56 training spectra. Comparison of these results with a nonlinear statistical estimator shows that superior results can be obtained with a neural network retrieval system. Imagery of estimated cell-top altitudes was created from 118-GHz spectral imagery gathered from CAMEX, September through October 1993, and from cyclone Oliver, February 7, 1993.
Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G
2015-08-01
Accurate stroke volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices. Current devices for indirect monitoring of SV are shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation method using readily available aortic pressure measurements and aortic cross-sectional area, with data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross-sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess pressure component of the proximal aorta. The median difference between directly measured and estimated SV was -1.4 ml with a 95% limit of agreement of ±6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV.
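A hedged sketch of the estimation chain the abstract outlines: characteristic impedance from the water-hammer relation Zc = ρc/A (with wave speed c inferred from the delay between the two aortic pressure sensors), then beat-to-beat flow approximated as excess pressure divided by Zc and integrated to SV. All numerical values below are illustrative, not from the porcine experiment.

```python
# Sketch of SV from excess pressure and characteristic impedance;
# every numeric value here is an assumed placeholder.
import numpy as np

rho = 1050.0                     # blood density, kg/m^3
distance = 0.20                  # proximal-to-descending sensor spacing, m (assumed)
transit_time = 0.04              # foot-to-foot delay between waveforms, s (assumed)
area = 4.0e-4                    # estimated aortic cross-sectional area, m^2

c = distance / transit_time      # pulse wave speed, m/s
Zc = rho * c / area              # characteristic impedance, Pa*s/m^3

t = np.linspace(0.0, 0.3, 300)                    # one systolic interval, s
p_excess = 2000.0 * np.sin(np.pi * t / 0.3)       # excess pressure, Pa (placeholder)

flow = p_excess / Zc                              # aortic flow, m^3/s
sv_ml = np.sum(flow) * (t[1] - t[0]) * 1e6        # integrate flow -> SV in ml
print(f"estimated SV: {sv_ml:.1f} ml")
```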
EPEPT: A web service for enhanced P-value estimation in permutation tests
2011-01-01
Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/ PMID:22024252
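For readers who want the flavor of the estimator behind EPEPT: a common way to estimate small permutation P-values from a limited permutation budget is to fit a generalized Pareto distribution to the exceedances over a high threshold and read the P-value off the fitted tail. The sketch below follows that general idea; the threshold choice, test statistic, and data are assumptions, not the EPEPT implementation.

```python
# Tail-approximated permutation P-value via a generalized Pareto fit;
# permutation statistics here are synthetic placeholders.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
perm_stats = rng.normal(size=2000)         # permutation null statistics (placeholder)
observed = 4.0                             # observed test statistic

threshold = np.quantile(perm_stats, 0.99)  # keep the top 1% as the tail
tail = perm_stats[perm_stats > threshold] - threshold
shape, loc, scale = genpareto.fit(tail, floc=0.0)

# P(T > observed) = P(T > threshold) * P(exceedance > observed - threshold)
p_tail = len(tail) / len(perm_stats)
p_value = p_tail * genpareto.sf(observed - threshold, shape, loc=0.0, scale=scale)
print(f"tail-approximated P-value: {p_value:.2e}")
```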
Three-dimensional ultrasound strain imaging of skeletal muscles
NASA Astrophysics Data System (ADS)
Gijsbertse, K.; Sprengers, A. M. J.; Nillesen, M. M.; Hansen, H. H. G.; Lopata, R. G. P.; Verdonschot, N.; de Korte, C. L.
2017-01-01
In this study, a multi-dimensional strain estimation method is presented to assess local relative deformation in three orthogonal directions in 3D space of skeletal muscles during voluntary contractions. A rigid translation and a compressive deformation of a block phantom that mimics muscle contraction are used as experimental validation of the 3D technique and to compare its performance with a 2D-based technique. Axial, lateral and (in the 3D case) elevational displacements are estimated using a cross-correlation-based displacement estimation algorithm. After transformation of the displacements to a Cartesian coordinate system, strain is derived using a least-squares strain estimator. The performance of both methods is compared by calculating the root-mean-squared error of the estimated displacements against the theoretical displacements of the phantom experiments. We observe that the 3D technique delivers more accurate displacement estimates than the 2D technique, especially in the translation experiment, where out-of-plane motion hampers the 2D technique. In vivo application of the 3D technique in the musculus vastus intermedius shows good agreement between the measured strain and the force pattern. The similarity of the strain curves across repeated measurements indicates the reproducibility of voluntary contractions. These results indicate that 3D ultrasound is a valuable imaging tool for quantifying complex tissue motion, especially when there is motion in three directions, which produces out-of-plane errors for 2D techniques.
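A minimal 1D analogue of the two-stage pipeline described above, assuming a synthetic RF line, window-wise integer-lag cross-correlation for displacement, and a least-squares slope fit for strain; the study itself operates on full 3D volumes with subsample precision.

```python
# 1D cross-correlation displacement + least-squares strain, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
depth = np.arange(2048)
pre = rng.normal(size=2048)                       # pre-deformation RF line
true_strain = 0.01
# post-deformation signal: features at x0 move to x0 * (1 + strain)
post = np.interp(depth, depth * (1 + true_strain), pre)

win, hop, search = 64, 32, 32
centers, disps = [], []
for start in range(search, 2048 - win - search, hop):
    ref = pre[start:start + win]
    # integer-lag cross-correlation over a small search range
    lags = range(-search, search + 1)
    scores = [np.dot(ref, post[start + l:start + l + win]) for l in lags]
    disps.append(lags[int(np.argmax(scores))])
    centers.append(start + win // 2)

# least-squares strain estimator: slope of displacement versus depth
strain = np.polyfit(centers, disps, 1)[0]
print(f"estimated strain: {strain:.4f} (true {true_strain})")
```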
ERIC Educational Resources Information Center
Riley, Erin; Felse, P. Arthur
2017-01-01
Centrifugation is a major unit operation in chemical and biotechnology industries. Here we present a simple, hands-on laboratory experiment to teach the basic principles of centrifugation and to explore the shear effects of centrifugation using bacterial cells as model particles. This experiment provides training in the use of a bench-top…
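Since the lab above is built around the basic principles of centrifugation, the standard conversion between rotor speed and relative centrifugal force is worth having at hand; the rotor radius and speed below are example values only.

```python
# Standard RCF formula: RCF (in multiples of g) = 1.118e-5 * r_cm * rpm^2,
# where r_cm is the rotor radius in centimeters.
def rcf(radius_cm: float, rpm: float) -> float:
    return 1.118e-5 * radius_cm * rpm ** 2

# Example: an 8 cm rotor at 10,000 rpm
print(f"{rcf(8.0, 10_000):.0f} x g")   # ~8,944 x g
```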
Several key issues on using 137Cs method for soil erosion estimation
USDA-ARS?s Scientific Manuscript database
This work was to examine several key issues of using the cesium-137 method to estimate soil erosion rates in order to improve and standardize the method. Based on the comprehensive review and synthesis of a large body of published literature and the author’s extensive research experience, several k...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-18
...) of 1995 (44 U.S.C. 3501-21), this notice announces that the Veterans Benefits Administration (VBA... schools below college level. The information is used to assure that participants have equal access to.... Estimated Annual Burden and Average Burden per Respondent: Based on past experience, VBA estimates that 76...
The grain drain. Ozone effects on historical maize and soybean yields
USDA-ARS?s Scientific Manuscript database
Numerous controlled experiments find that elevated ground-level ozone concentrations ([O3]) damage crops and reduce yield. There have been no estimates of the actual field yield losses in the USA from [O3], even though such estimates would be valuable for projections of future food production and fo...
Valuing Informal Care Experience: Does Choice of Measure Matter?
ERIC Educational Resources Information Center
Mentzakis, Emmanouil; McNamee, Paul; Ryan, Mandy; Sutton, Matthew
2012-01-01
Well-being equations are often estimated to generate monetary values for non-marketed activities. In such studies, utility is often approximated by either life satisfaction or General Health Questionnaire scores. We estimate and compare monetary valuations of informal care for the first time in the UK employing both measures, using longitudinal…
Estimating Independent Locally Shifted Random Utility Models for Ranking Data
ERIC Educational Resources Information Center
Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans
2011-01-01
We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…
ERIC Educational Resources Information Center
Yang, Wei
2017-01-01
This paper estimates the impact of "compulsory volunteerism" for adolescents on subsequent volunteer behavior, exploiting the introduction of a mandatory community service program for high school (HS) students in Ontario, Canada. We use a difference-in-differences approach with a large longitudinal dataset. Our estimates show that the…
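The named difference-in-differences design reduces, in its simplest form, to comparing before/after changes across affected and unaffected cohorts. The sketch below uses purely illustrative volunteering rates, not the paper's data.

```python
# Canonical two-by-two difference-in-differences: the mandate's effect is the
# change for affected cohorts minus the change for unaffected cohorts.
treated_pre, treated_post = 0.30, 0.42   # volunteering rates (illustrative)
control_pre, control_post = 0.31, 0.36

did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"DiD estimate: {did:+.2f}")       # +0.07 with these numbers
```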
ERIC Educational Resources Information Center
Diaz, Juan Jose; Handa, Sudhanshu
2006-01-01
Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…
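For concreteness, here is a bare-bones nearest-neighbor propensity score matching sketch on synthetic data: a logistic model for treatment assignment, one-to-one matching on the estimated score, and a simple treated-minus-matched-controls contrast. It illustrates the method the abstract evaluates, not the paper's estimator.

```python
# Nearest-neighbor PSM on simulated data with a known treatment effect of 2.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))                         # covariates
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # selection on observables
y = X[:, 0] + 2.0 * d + rng.normal(size=500)          # outcome, true effect = 2

ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
controls = np.where(d == 0)[0]
matches = [controls[np.argmin(np.abs(ps[controls] - ps[i]))]
           for i in np.where(d == 1)[0]]              # closest control per treated
att = y[d == 1].mean() - y[matches].mean()
print(f"PSM estimate of the effect on the treated: {att:.2f}")
```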
Estimation of an Occupational Choice Model when Occupations Are Misclassified
ERIC Educational Resources Information Center
Sullivan, Paul
2009-01-01
This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…
ERIC Educational Resources Information Center
Cheek, Kim A.
2017-01-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude…
Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for…
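In the simplest balanced case, a design-based estimator of this kind contrasts cluster-level (e.g., school-level) mean outcomes across arms, with the variance computed from the between-cluster spread. The sketch below uses simulated cluster means; it is one textbook instance, not the article's full development.

```python
# Cluster-level difference-in-means with a between-cluster variance estimate.
import numpy as np

rng = np.random.default_rng(6)
t_means = rng.normal(0.2, 0.1, size=20)   # 20 treatment-school mean outcomes
c_means = rng.normal(0.0, 0.1, size=20)   # 20 control-school mean outcomes

effect = t_means.mean() - c_means.mean()
var = t_means.var(ddof=1) / len(t_means) + c_means.var(ddof=1) / len(c_means)
print(f"impact estimate: {effect:.3f} (SE {np.sqrt(var):.3f})")
```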
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitat, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
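The core calculation in distance sampling can be sketched in a few lines: fit a detection function (here half-normal, a common default) to perpendicular distances by maximum likelihood, convert it to an effective strip half-width, and divide the count by the effectively surveyed area. The distances and transect length below are simulated.

```python
# Half-normal distance-sampling density estimate on simulated distances.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

dists = np.abs(np.random.default_rng(3).normal(scale=40.0, size=80))  # m
L = 10_000.0                                    # total transect length, m

def neg_log_lik(sigma):
    # half-normal density on [0, inf): f(x) = 2 * phi(x; 0, sigma)
    return -np.sum(np.log(2.0 * norm.pdf(dists, scale=sigma)))

sigma = minimize_scalar(neg_log_lik, bounds=(1.0, 500.0), method="bounded").x
mu = sigma * np.sqrt(np.pi / 2.0)               # effective strip half-width, m
density = len(dists) / (2.0 * mu * L)           # animals per m^2
print(f"density: {density * 1e6:.1f} animals/km^2")
```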
Forest statistics for Southwest Arkansas counties
Arnold Hedlund; J.M. Earles
1969-01-01
This report tabulates information from a new forest survey of southwest Arkansas, completed in 1969 by the Southern Forest Experiment Station. The tables are intended for use as source data in compiling estimates for groups of counties. Because the Arkansas sampling procedure is intended primarily to furnish inventory data for the State as a whole, estimates for...
USDA-ARS?s Scientific Manuscript database
The aggregate structure of phthalic anhydride (PA) modified soy protein isolate (SPI) was investigated by estimating its fractal dimension from the equilibrated dynamic strain sweep experiments. The estimated fractal dimensions of the filler aggregates were less than 2, indicating that these partic...
SUBTLEX-ESP: Spanish Word Frequencies Based on Film Subtitles
ERIC Educational Resources Information Center
Cuetos, Fernando; Glez-Nosti, Maria; Barbon, Analia; Brysbaert, Marc
2011-01-01
Recent studies have shown that word frequency estimates obtained from film and television subtitles are better at predicting performance in word recognition experiments than traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken…
Actuarial considerations of medical malpractice evaluations in M&As.
Frese, Richard C
2014-11-01
To best project an actuarial estimate of medical malpractice exposure for a merger and acquisition, an organization's leaders should consider the following factors, among others: how to support an unbiased actuarial estimation; the experience of the actuary; the full picture of the organization's malpractice coverage; the potential for future loss development; and frequency and severity trends.
Atmospheric gradients from very long baseline interferometry observations
NASA Technical Reports Server (NTRS)
Macmillan, D. S.
1995-01-01
Azimuthal asymmetries in the atmospheric refractive index can lead to errors in estimated vertical and horizontal station coordinates. Daily average gradient effects can be as large as 50 mm of delay at a 7 deg elevation. To model gradients, the constrained estimation of gradient parameters was added to the standard VLBI solution procedure. Here, the analysis of two sets of data is summarized: the set of all geodetic VLBI experiments from 1990-1993 and a series of 12 state-of-the-art R&D experiments run on consecutive days in January 1994. In both cases, when the gradient parameters are estimated, the overall fit of the geodetic solution is improved at greater than the 99% confidence level. Repeatabilities of baseline lengths ranging up to 11,000 km are improved by 1 to 8 mm in a root-sum-square sense. This amounts to about 20% to 40% of the total baseline length scatter without gradient modeling for the 1990-1993 series and 40% to 50% for the January series. Gradients estimated independently for each day as a piecewise linear function are mostly continuous from day to day within their formal uncertainties.
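A commonly used form of the gradient delay term that such solutions estimate is delay = m_g(e)·(G_N·cos(az) + G_E·sin(az)), with gradient mapping function m_g(e) = 1/(sin e·tan e + C). The constant C and the gradient amplitudes below are typical illustrative values, not ones taken from this analysis.

```python
# Slant delay contribution of an azimuthal atmospheric gradient.
import numpy as np

def gradient_delay(elev_deg, az_deg, G_N, G_E, C=0.0032):
    """Gradient delay in the same units as G_N/G_E (e.g., mm).

    Uses the common mapping m_g(e) = 1 / (sin(e) * tan(e) + C).
    """
    e = np.radians(elev_deg)
    a = np.radians(az_deg)
    m_g = 1.0 / (np.sin(e) * np.tan(e) + C)
    return m_g * (G_N * np.cos(a) + G_E * np.sin(a))

# A 0.5 mm north gradient maps to tens of mm of delay at 7 deg elevation:
print(f"{gradient_delay(7.0, 0.0, G_N=0.5, G_E=0.0):.1f} mm of delay")
```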
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
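As a simple illustration of the time-averaged noise subtraction mentioned above: estimate per-beam background levels by averaging over pings, subtract in the linear intensity domain, and floor at a small positive value before converting back to decibels. The array shapes and values are placeholders, not the paper's method in detail.

```python
# Time-averaged noise estimation and subtraction in the linear domain.
import numpy as np

rng = np.random.default_rng(4)
pings, beams, samples = 50, 32, 200
sv_linear = rng.exponential(scale=1e-7, size=(pings, beams, samples))  # placeholder

noise_est = sv_linear.mean(axis=0, keepdims=True)   # per-beam/sample time average
cleaned = np.maximum(sv_linear - noise_est, 1e-12)  # subtract, keep positive
cleaned_db = 10.0 * np.log10(cleaned)               # back to dB
print(cleaned_db.shape)
```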
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating on the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds, since systems experiencing high-rate dynamics undergo rapid changes. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than a traditional fixed-input-space strategy.
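A toy rendering of the variable-input idea, under heavy assumptions: the observer's input vector is a window of recent measurements whose length is switched based on local signal activity (short windows during fast transients, long ones otherwise). The adaptation rule and window lengths are invented for illustration and are not the paper's neuro-observer.

```python
# Adaptive input-window selection for an observer, on a synthetic signal.
import numpy as np

def select_window(signal, t, fast_len=8, slow_len=64, threshold=5.0):
    """Return the current observer input vector, sized by local activity."""
    recent = signal[max(0, t - fast_len):t + 1]
    activity = np.abs(np.diff(recent)).mean() if len(recent) > 1 else 0.0
    n = fast_len if activity > threshold else slow_len
    window = signal[max(0, t - n + 1):t + 1]
    return np.pad(window, (n - len(window), 0))    # left-pad early samples

rng = np.random.default_rng(5)
sig = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 50, 100)])  # impact-like burst
print(len(select_window(sig, 400)), len(select_window(sig, 550)))      # 64 vs 8
```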
Investigating the capabilities of new microwave ALOS-2/PALSAR-2 data for biomass estimation
NASA Astrophysics Data System (ADS)
Anh, L. V.; Paull, D. J.; Griffin, A. L.
2016-10-01
Most studies indicate that L-band synthetic aperture radar (SAR) has a great capacity to estimate biomass due to its ability to penetrate deeply through canopy layers. Many applications using L-band space-borne data have demonstrated significant contributions to biomass estimation, but some limitations still exist. New data have been released recently that are designed to overcome the limitations and drawbacks of previous sensor generations. The Japan Aerospace Exploration Agency (JAXA) launched the new ALOS-2 sensor to improve wide-area and high-resolution observation technologies in order to further meet social and environmental objectives. Among the priority tasks addressed by JAXA are experiments using these new data to measure the distribution of vegetation biomass. This study therefore focused on investigating the capabilities of these new microwave data for above-ground biomass (AGB) estimation. The data used in this study were a full-polarimetric ALOS-2/PALSAR-2 (L-band) scene. The experiment was conducted on a portion of a tropical forest in a Central Highlands province of Vietnam.
Wedell, Douglas H; Moro, Rodrigo
2008-04-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.
Outflow monitoring of a pneumatic ventricular assist device using external pressure sensors.
Kang, Seong Min; Her, Keun; Choi, Seong Wook
2016-08-25
In this study, a new algorithm was developed for estimating the pump outflow of a pneumatic ventricular assist device (p-VAD). The pump outflow estimation algorithm was derived from the ideal gas equation and determined the change in blood-sac volume of a p-VAD using two external pressure sensors. Based on in vitro experiments, the algorithm was revised to account for the effects of structural compliance caused by volume changes in the implanted unit and the air driveline, and for the pressure difference between the sensors and the implanted unit. In animal experiments, p-VADs were connected to the left ventricle and descending aorta of each of three calves (70-100 kg). Their outflows were estimated using the new algorithm and compared to the results obtained using an ultrasonic blood flow meter (UBF) (TS-410, Transonic Systems Inc., Ithaca, NY, USA). The estimated and measured values had a Pearson's correlation coefficient of 0.864. The pressure sensors were installed at the external controller and connected to the air driveline on the same side as the external actuator, which made the sensors easy to manage.
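To make the ideal-gas reasoning concrete: for a fixed air mass at roughly constant temperature, Boyle's law gives the air-side volume from pressure, and the blood-sac volume changes by the complement. The sketch below is a toy, single-step illustration with invented numbers; the actual algorithm tracks the driveline air with two pressure sensors and applies the compliance corrections described above.

```python
# Toy Boyle's-law step (P1*V1 = P2*V2 at constant temperature and air mass):
# a pressure change on the air side implies an air-volume change, and the
# blood-sac volume changes by the same amount with opposite sign.
P1 = 110.0e3   # air-side pressure before, Pa (invented)
V1 = 80.0e-6   # air-side volume before, m^3 (invented)
P2 = 135.0e3   # air-side pressure after, Pa (invented)

V2 = P1 * V1 / P2                 # air volume after compression
sac_change_ml = (V1 - V2) * 1e6   # blood-sac volume change, ml
print(f"blood-sac volume change: {sac_change_ml:+.1f} ml")
```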