A new methodology for estimating nuclear casualties as a function of time.
Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A
2011-09-01
The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.
Methodology for Estimating Total Automotive Manufacturing Costs
DOT National Transportation Integrated Search
1983-04-01
A number of methodologies for estimating manufacturing costs have been developed. This report discusses the different approaches and shows that an approach to estimating manufacturing costs in the automobile industry based on surrogate plants is pref...
Performance-based methodology for assessing seismic vulnerability and capacity of buildings
NASA Astrophysics Data System (ADS)
Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li
2010-06-01
This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (S_d) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between S_d and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SC_ev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess the seismic damage of a large building stock directly and quickly following an earthquake. The SC_ev provides an effective index to measure the seismic capacity of buildings and illustrates the relationship between the seismic capacity of buildings and seismic action. The estimated results are compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.
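A minimal Python sketch of the S_d-based damage-level assignment described above; the threshold values and state labels are invented placeholders, not the paper's calibration.

# Hypothetical HAZUS-style damage-state lookup from the performance-point
# spectral displacement S_d. Threshold values are illustrative only.
DAMAGE_THRESHOLDS = [  # (S_d in cm, damage state reached at or above it)
    (2.0, "slight"),
    (5.0, "moderate"),
    (12.0, "extensive"),
    (30.0, "complete"),
]

def damage_state(sd_cm):
    """Return the damage state implied by the performance-point S_d."""
    state = "none"
    for threshold, label in DAMAGE_THRESHOLDS:
        if sd_cm >= threshold:
            state = label
    return state

print(damage_state(6.3))  # -> moderate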
A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers
Yochum, Steven E.
2000-01-01
The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth or center year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate, and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted, but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
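The windowing scheme lends itself to a short Python sketch of the multiple-window, center-estimate idea; fit_model stands in for the seven-parameter load regression, and the demo stub is hypothetical, not the USGS model.

def center_estimates(years, fit_model):
    """Fit a model to each sliding 9-year window and keep only the center
    (fifth) year's estimate, per the revised procedure described above."""
    best = {}
    for start in range(len(years) - 8):
        window = years[start:start + 9]
        model = fit_model(window)            # refit on each 9-year window
        best[window[4]] = model(window[4])   # retain the center-year estimate
    return best

# Demo with a dummy "model": the window mean as a constant predictor.
years = list(range(1985, 2000))
dummy_fit = lambda window: (lambda yr: sum(window) / 9.0)
print(center_estimates(years, dummy_fit))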
NASA Astrophysics Data System (ADS)
Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
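A simplified Python sketch of automatic threshold selection in this spirit: fit a generalized Pareto distribution to the exceedances of each candidate threshold and keep the threshold whose Anderson-Darling statistic is smallest. The formal goodness-of-fit test and the bootstrap uncertainty step from the paper are omitted.

import numpy as np
from scipy.stats import genpareto

def ad_statistic(sorted_exc, cdf):
    """Anderson-Darling statistic of sorted data against a fitted CDF."""
    n = len(sorted_exc)
    F = np.clip(cdf(sorted_exc), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

def automatic_threshold(series, candidates, min_exceedances=30):
    """Pick the POT threshold whose GPD fit minimizes the AD statistic."""
    best_stat, best_u = np.inf, None
    for u in candidates:
        exc = np.sort(series[series > u] - u)
        if len(exc) < min_exceedances:
            continue
        c, _, scale = genpareto.fit(exc, floc=0.0)
        a2 = ad_statistic(exc, lambda x: genpareto.cdf(x, c, loc=0.0, scale=scale))
        if a2 < best_stat:
            best_stat, best_u = a2, u
    return best_u

series = np.random.default_rng(1).pareto(2.0, 2000)
print(automatic_threshold(series, np.quantile(series, np.linspace(0.50, 0.95, 10))))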
Ambient Vibration Testing for Story Stiffness Estimation of a Heritage Timber Building
Min, Kyung-Won; Kim, Junhee; Park, Sung-Ah; Park, Chan-Soo
2013-01-01
This paper investigates dynamic characteristics of a historic wooden structure by ambient vibration testing, presenting a novel estimation methodology of story stiffness for the purpose of vibration-based structural health monitoring. As for the ambient vibration testing, measured structural responses are analyzed by two output-only system identification methods (i.e., frequency domain decomposition and stochastic subspace identification) to estimate modal parameters. The proposed methodology of story stiffness estimation is based on an eigenvalue problem derived from a vibratory rigid body model. Using the identified natural frequencies, the eigenvalue problem is efficiently solved and uniquely yields story stiffness. It is noteworthy that application of the proposed methodology is not necessarily confined to the wooden structure exemplified in the paper. PMID:24227999
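The inverse eigenvalue idea can be sketched for a two-story shear building: given identified natural frequencies and known floor masses, solve for the story stiffnesses that reproduce them. The masses and frequencies below are made-up numbers, not the paper's structure.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import fsolve

m = np.diag([2.0e4, 1.5e4])            # story masses (kg), assumed known
f_measured = np.array([1.2, 3.1])      # identified natural frequencies (Hz)

def frequencies(k):
    """Natural frequencies of a 2-DOF shear building with stiffnesses k1, k2."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w2 = eigh(K, m, eigvals_only=True)         # generalized eigenvalues (rad/s)^2
    return np.sqrt(np.abs(w2)) / (2 * np.pi)   # abs() guards the iteration

k_est = fsolve(lambda k: frequencies(k) - f_measured, x0=[1e6, 1e6])
print(k_est)  # story stiffnesses (N/m) reproducing the identified frequencies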
The National Visitor Use Monitoring methodology and final results for round 1
S.J. Zarnoch; E.M. White; D.B.K. English; Susan M. Kocis; Ross Arnold
2011-01-01
A nationwide, systematic monitoring process has been developed to provide improved estimates of recreation visitation on National Forest System lands. Methodology is presented to provide estimates of site visits and national forest visits based on an onsite sampling design of site-days and last-exiting recreationists. Stratification of the site days, based on site type...
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distributions, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signal are linearly coupled. In 38 spontaneous breathing subjects, among whom 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was substantially lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
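A heavily simplified Python sketch of the coherence-gated estimate: ordinary Welch coherence stands in for the paper's quadratic time-frequency distributions, and a rate is returned only where two PPG-derived feature series are coherent. The breathing band and threshold are assumptions.

import numpy as np
from scipy.signal import coherence, welch

def respiratory_rate(amplitude_series, width_series, fs, min_coh=0.8):
    """Estimate a respiratory rate (Hz) shared by two PPG-derived features.

    Returns None when the features are not linearly coupled enough to
    trust, mirroring the paper's gating idea in a stationary setting."""
    f, coh = coherence(amplitude_series, width_series, fs=fs, nperseg=256)
    band = (f >= 0.1) & (f <= 0.6)            # plausible breathing band
    if coh[band].max() < min_coh:
        return None                           # features not coherent: reject
    f_psd, pxx = welch(amplitude_series, fs=fs, nperseg=256)
    band_psd = (f_psd >= 0.1) & (f_psd <= 0.6)
    return f_psd[band_psd][np.argmax(pxx[band_psd])]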
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-04-30
Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded by a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated from the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test base to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
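A compact Python sketch of the BACON idea applied to one-dimensional residuals: grow a presumed-clean basic subset by distance from its own mean and flag everything that never joins it. The full algorithm's small-sample correction factors are omitted.

import numpy as np
from scipy.stats import chi2

def bacon_outliers(residuals, alpha=0.05, m0=20):
    """Flag outliers in 1-D state-estimation residuals with a basic BACON loop."""
    x = np.asarray(residuals, float)
    n = len(x)
    # Start from the m0 points closest to the median (presumed clean).
    subset = np.sort(np.argsort(np.abs(x - np.median(x)))[:m0])
    cutoff = np.sqrt(chi2.ppf(1 - alpha / n, df=1))
    for _ in range(100):                # BACON converges in a few passes
        mu, sd = x[subset].mean(), x[subset].std(ddof=1)
        new = np.where(np.abs(x - mu) / sd < cutoff)[0]
        if np.array_equal(new, subset):
            break
        subset = new
    return np.setdiff1d(np.arange(n), subset)   # indices of flagged outliers

res = np.concatenate([np.random.default_rng(2).normal(size=200), [8.0, -9.5]])
print(bacon_outliers(res))   # -> the two injected gross errors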
Corvalán, Roberto M; Osses, Mauricio; Urrutia, Cristian M
2002-02-01
Depending on the final application, several methodologies for traffic emission estimation have been developed. Emission estimation based on total miles traveled or other average factors is a sufficient approach only for extended areas such as national or worldwide areas. For road emission control and strategies design, microscale analysis based on real-world emission estimations is often required. This involves actual driving behavior and emission factors of the local vehicle fleet under study. This paper reports on a microscale model for hot road emissions and its application to the metropolitan region of the city of Santiago, Chile. The methodology considers the street-by-street hot emission estimation with its temporal and spatial distribution. The input data come from experimental emission factors based on local driving patterns and traffic surveys of traffic flows for different vehicle categories. The methodology developed is able to estimate hourly hot road CO, total unburned hydrocarbons (THCs), particulate matter (PM), and NO(x) emissions for predefined day types and vehicle categories.
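The street-level aggregation underlying such microscale models reduces to simple arithmetic: hourly link emissions are the sum over vehicle categories of flow times link length times an emission factor. The flows and factors below are invented, not Santiago data.

# Hourly hot emissions for one street link:
#   E[pollutant] = sum over vehicle categories of
#                  flow (veh/h) * link_length (km) * EF (g/veh-km)
link_length_km = 0.42
flows = {"car": 900, "truck": 60, "bus": 25}      # veh/h (illustrative)
ef_co = {"car": 2.1, "truck": 6.5, "bus": 7.8}    # g CO per veh-km (illustrative)

co_g_per_h = sum(flows[v] * link_length_km * ef_co[v] for v in flows)
print(f"CO: {co_g_per_h:.0f} g/h on this link")   # ~1040 g/h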
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
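For independent local estimates, the top-level linear-minimum-variance fusion step reduces to covariance-weighted averaging, sketched below; the paper's unscented-transformation treatment of cross-covariances is omitted, so independence of the local errors is an assumption here.

import numpy as np

def fuse(estimates, covariances):
    """Linear-minimum-variance fusion of independent local state estimates:
    P = (sum_i P_i^-1)^-1,  x = P @ sum_i P_i^-1 @ x_i."""
    info = sum(np.linalg.inv(P) for P in covariances)
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x
                            for x, P in zip(estimates, covariances))
    return x_fused, P_fused

# Two local filters tracking a 2-state system (illustrative numbers):
x1, P1 = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
x2, P2 = np.array([1.1, 0.4]), np.diag([0.09, 0.04])
print(fuse([x1, x2], [P1, P2]))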
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
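The bootstrap-percentile part of such a procedure is easy to sketch in Python: resample the data, refit the decay model, and read confidence intervals off the percentiles. A first-order inactivation fit stands in for the paper's universal-kriging slope estimates, and the data are invented.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.array([0., 1., 2., 4., 8., 16.])              # days (illustrative)
logc = np.array([0., -0.4, -0.9, -1.6, -3.1, -5.8])  # ln(C/C0) (illustrative)

def model(t, lam):            # first-order inactivation: ln(C/C0) = -lam * t
    return -lam * t

boot = []
for _ in range(2000):                                # bootstrap resamples
    idx = rng.integers(0, len(t), len(t))
    lam_hat, _ = curve_fit(model, t[idx], logc[idx], p0=[0.3])
    boot.append(lam_hat[0])
lo, hi = np.percentile(boot, [2.5, 97.5])            # percentile CI
print(f"lambda 95% CI: [{lo:.3f}, {hi:.3f}] 1/day")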
Fracture mechanics approach to estimate rail wear limits
DOT National Transportation Integrated Search
2009-10-01
This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...
Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea
2014-05-02
Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0-13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
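The cost arithmetic is straightforward to illustrate: a weight-band dosing table converted to an average monthly cost. The weight bands, doses, and unit price below are placeholders, not the paper's tables.

# Monthly ARV cost from weight-based dosing. All numbers are illustrative.
weight_bands = [  # (min_kg, max_kg, tablets_per_day)
    (3, 5.9, 1.0),
    (6, 9.9, 1.5),
    (10, 13.9, 2.0),
]
unit_cost_per_tablet = 0.08  # USD (placeholder)

def monthly_cost(weight_kg, days=30.4):
    for lo, hi, tablets in weight_bands:
        if lo <= weight_kg <= hi:
            return tablets * unit_cost_per_tablet * days
    raise ValueError("weight outside dosing table")

print(round(monthly_cost(8.2), 2))  # 1.5 tablets/day -> ~3.65 USD/month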
Methodology for estimating helicopter performance and weights using limited data
NASA Technical Reports Server (NTRS)
Baserga, Claudio; Ingalls, Charles; Lee, Henry; Peyran, Richard
1990-01-01
Methodology is developed and described for estimating the flight performance and weights of a helicopter for which limited data are available. The methodology is based on assumptions which couple knowledge of the technology of the helicopter under study with detailed data from well documented helicopters thought to be of similar technology. The approach, analysis assumptions, technology modeling, and the use of reference helicopter data are discussed. Application of the methodology is illustrated with an investigation of the Agusta A129 Mangusta.
These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-06-14
Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology for detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
Economic Effects of Increased Control Zone Sizes in Conflict Resolution
NASA Technical Reports Server (NTRS)
Datta, Koushik
1998-01-01
A methodology for estimating the economic effects of different control zone sizes used in conflict resolutions between aircraft is presented in this paper. The methodology is based on estimating the difference in flight times of aircraft with and without the control zone, and converting the difference into a direct operating cost. Using this methodology, the effects of increased lateral and vertical control zone sizes are evaluated.
Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters
NASA Astrophysics Data System (ADS)
Mousas, Christos; Anagnostopoulos, Christos-Nikolaos
2017-09-01
This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, the motion data is first segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and the optimal weight assigned to each feature in the metric is computed with a support vector machine learning mechanism trained on different variations of semantically similar finger gestures. The advantages of the presented methodology in comparison to previous solutions are the following: First, it automates the computation of the optimal weights assigned to each motion feature counted in the metric. Second, it achieves an increase of about 17% in correctly estimated finger gestures in comparison to a previous method.
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.
C Zapico, Sara; Ubelaker, Douglas H
2013-03-01
Age-at-death estimation is one of the main challenges in forensic sciences, since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. However, in adults this methodology is less accurate and requires population-specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces different biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate age in adults based on these changes. Chemical approaches imply modifications in molecules or accumulation of some products. Molecular biology approaches analyze the modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take into account the other techniques, because the forensic context and the human remains available will determine the possibility of applying one or another methodology.
ERIC Educational Resources Information Center
Gutmanis, Ivars; And Others
The primary purpose of the study was to develop and apply a methodology for estimating the need for scientists and engineers by specialty in energy and energy-related industries. The projections methodology was based on the Case 1 estimates by the National Petroleum Council of the results of "maximum efforts" to develop domestic fuel sources by…
Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy
2014-01-21
To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming-potential (GWP) fluorinated gases, the fastest-growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emission estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emission estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% underestimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emission estimates in closer agreement with ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.
Radiological Characterization Methodology of INEEL Stored RH-TRU Waste from ANL-E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajiv N. Bhatt
2003-02-01
An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using this methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holttinen, Hannele; Kiviluoma, Juha; McCann, John
2015-10-05
This paper presents ways of estimating the CO2 reductions of wind power using different methodologies. Estimates based on historical data have more pitfalls in methodology than estimates based on dispatch simulations. Taking into account the exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown from several countries. Wind power will reduce emissions by about 0.3-0.4 MtCO2/TWh when replacing mainly gas and up to 0.7 MtCO2/TWh when replacing mainly coal-powered generation. The paper focuses on CO2 emissions from the power system operation phase, but long-term impacts are briefly discussed.
Local deformation for soft tissue simulation
Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-01-01
This paper presents a new methodology to localize the deformation range to improve the computational efficiency of soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues according to the depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482
Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)
NASA Astrophysics Data System (ADS)
Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.
2014-04-01
A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimations are based on numerical model results, which provide an appropriate spatio-temporal framework of analysis to guarantee an effective decision-making process. The methodological procedure has been applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and its conceptual approaches as a comprehensive and practical management tool.
A hierarchical clustering methodology for the estimation of toxicity.
Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M
2008-01-01
A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster from each step in the hierarchical clustering assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
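A condensed scipy/sklearn sketch of the scheme: Ward-cluster the training set, fit one regression per cluster, and predict a query from its structurally closest cluster. The genetic-algorithm descriptor selection and the applicability-domain check are omitted, and the demo descriptors are random stand-ins.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LinearRegression

def fit_cluster_models(X, y, n_clusters=5):
    """Ward-cluster the training set and fit one regression per cluster."""
    labels = fcluster(linkage(X, method="ward"), n_clusters, criterion="maxclust")
    models = {}
    for c in np.unique(labels):
        members = labels == c
        models[c] = (X[members].mean(axis=0),             # cluster centroid
                     LinearRegression().fit(X[members], y[members]))
    return models

def predict(models, x):
    """Predict from the closest cluster by centroid distance."""
    c = min(models, key=lambda c: np.linalg.norm(x - models[c][0]))
    return models[c][1].predict(x.reshape(1, -1))[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))                              # toy descriptors
y = 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=60)
print(predict(fit_cluster_models(X, y), X[0]))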
Magnitude and variability of land evaporation and its components at the global scale
USDA-ARS?s Scientific Manuscript database
A physics-based methodology is applied to estimate global land-surface evaporation from multi-satellite observations. GLEAM (Global Land-surface Evaporation: the Amsterdam Methodology) combines a wide range of remotely sensed observations within a Priestley and Taylor-based framework. Daily actual e...
Seismic Hazard Estimates Using Ill-defined Macroseismic Data at Site
NASA Astrophysics Data System (ADS)
Albarello, D.; Mucciarelli, M.
A new approach is proposed for seismic hazard estimation based on documentary data concerning the local history of seismic effects. The adopted methodology allows for the use of "poor" data, such as macroseismic observations, within a formally coherent approach that overcomes a number of problems connected with forcing the available information into the frame of "standard" methodologies calibrated for instrumental data. The use of the proposed methodology allows full exploitation of all the available information (which for many towns in Italy covers several centuries), making possible a correct use of macroseismic data characterized by different levels of completeness and reliability. As an application of the proposed methodology, seismic hazard estimates are presented for two towns located in Northern Italy: Bologna and Carpi.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
Estimating and validating ground-based timber harvesting production through computer simulation
Jingxin Wang; Chris B. LeDoux
2003-01-01
The estimation of ground-based timber harvesting production using an object-oriented methodology was investigated. The estimation model developed generates stands of trees; simulates chain saw, drive-to-tree feller-buncher, and swing-to-tree single-grip harvester felling, as well as grapple skidder and forwarder extraction activities; and analyzes costs and productivity. It also...
This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by (1) small effect size or (2) large p-value. We consider both fixed and random effects models and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
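The truncated-distribution formulation for situation (1) can be sketched directly: if studies are published only when the observed effect exceeds a cutoff, a fixed-effects MLE maximizes a truncated-normal likelihood. This is an illustrative simplification (the paper also treats truncation by p-value and random effects), and the data and cutoff below are invented.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def truncated_normal_mle(effects, cutoff):
    """MLE of (mu, sigma) when studies are published only if effect > cutoff."""
    y = np.asarray(effects, float)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        # log-likelihood per study: log phi(y) - log P(Y > cutoff)
        return -np.sum(norm.logpdf(y, mu, sigma) - norm.logsf(cutoff, mu, sigma))

    res = minimize(nll, x0=[y.mean(), np.log(y.std() + 1e-6)], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])     # (mu_hat, sigma_hat)

# Published effects, all above a selection cutoff of 0.2:
print(truncated_normal_mle([0.25, 0.31, 0.42, 0.27, 0.55], cutoff=0.2))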
An approach to software cost estimation
NASA Technical Reports Server (NTRS)
Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.
1984-01-01
A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
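As a pocket illustration of the kind of resource estimation model reviewed here, basic COCOMO's organic-mode effort equation (a well-known public model, not the SEL's own) is a one-liner:

# Basic COCOMO, organic mode: effort (person-months) = 2.4 * KLOC^1.05
kloc = 32.0
effort_pm = 2.4 * kloc ** 1.05
print(f"{effort_pm:.0f} person-months")  # ~91 PM for 32 KLOC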
Descriptive epidemiology of cervical dystonia.
Defazio, Giovanni; Jankovic, Joseph; Giel, Jennifer L; Papapetropoulos, Spyridon
2013-01-01
Cervical dystonia (CD), the most common form of adult-onset focal dystonia, has a heterogeneous clinical presentation with variable clinical features, leading to difficulties and delays in diagnosis. Owing to the lack of reviews specifically focusing on the frequency of primary CD in the general population, we performed a systematic literature search to examine its prevalence and incidence and to analyze methodological differences among studies. Sixteen articles met our methodological criteria. Because the reported prevalence estimates were found to vary widely across studies, we analyzed methodological differences and other factors to determine whether true differences exist in prevalence rates among geographic areas (and by gender and age distributions), as well as to facilitate recommendations for future studies. Prevalence estimates ranged from 20 to 4,100 cases/million. Generally, studies that relied on service-based and record-linkage system data likely underestimated the prevalence of CD, whereas population-based studies suffered from over-ascertainment. The more methodologically robust studies yielded estimates of 28-183 cases/million. Despite the varying prevalence estimates, an approximate 2:1 female:male ratio was consistent among many studies. Three studies estimated incidence, ranging from 8 to 12 cases/million person-years. Although several studies have attempted to estimate the prevalence and incidence of CD, there is a need for additional well-designed epidemiological studies on primary CD that include large populations; use defined CD diagnostic criteria; and stratify for factors such as age, gender, and ethnicity.
Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management
A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...
NASA Technical Reports Server (NTRS)
1974-01-01
A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach methodology, including the methodology of technology assessment, is used to examine three energy scenarios: the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case, and a MEGASTAR-generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation, and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion, and distribution systems for the postulated end uses for the three scenarios, and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by the particular scenario.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach.
Ricca, Jim; Dwivedi, Vikas; Varallo, John; Singh, Gajendra; Pallipamula, Suranjeen Prasad; Amade, Nazir; de Luz Vaz, Maria; Bishanga, Dustan; Plotkin, Marya; Al-Makaleh, Bushra; Suhowatsky, Stephanie; Smith, Jeffrey Michael
2015-01-22
Postpartum hemorrhage (PPH) is the leading cause of maternal mortality in developing countries. While incidence of PPH can be dramatically reduced by uterotonic use immediately following birth (UUIFB) in both community and facility settings, national coverage estimates are rare. Most national health systems have no indicator to track this, and community-based measurements are even more scarce. To fill this information gap, a methodology for estimating national coverage for UUIFB was developed and piloted in four settings. The rapid estimation methodology consisted of convening a group of national technical experts and using the Delphi method to come to consensus on key data elements that were applied to a simple algorithm, generating a non-precise national estimate of coverage of UUIFB. Data elements needed for the calculation were the distribution of births by location and estimates of UUIFB in each of those settings, adjusted to take account of stockout rates and potency of uterotonics. This exercise was conducted in 2013 in Mozambique, Tanzania, the state of Jharkhand in India, and Yemen. Available data showed that deliveries in public health facilities account for approximately half of births in Mozambique and Tanzania, 16% in Jharkhand and 24% of births in Yemen. Significant proportions of births occur in private facilities in Jharkhand and faith-based facilities in Tanzania. Estimated uterotonic use for facility births ranged from 70 to 100%. Uterotonics are not used routinely for PPH prevention at home births in any of the settings. National UUIFB coverage estimates of all births were 43% in Mozambique, 40% in Tanzania, 44% in Jharkhand, and 14% in Yemen. This methodology for estimating coverage of UUIFB was found to be feasible and acceptable. While the exercise produces imprecise estimates whose validity cannot be assessed objectively in the absence of a gold standard estimate, stakeholders felt they were accurate enough to be actionable. The exercise highlighted information and practice gaps and promoted discussion on ways to improve UUIFB measurement and coverage, particularly of home births. Further follow up is needed to verify actions taken. The methodology produces useful data to help accelerate efforts to reduce maternal mortality.
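The coverage algorithm described above reduces to a weighted sum: per-setting uterotonic use weighted by the birth distribution and discounted for stockouts and potency. The sketch below uses invented numbers, not the study's data.

# National UUIFB coverage: sum over birth settings of
#   share_of_births * use_rate * (1 - stockout_rate) * potency_factor
settings = {
    #           share  use   stockout potency   (all illustrative)
    "public":  (0.50, 0.90, 0.10,    0.95),
    "private": (0.20, 0.80, 0.05,    0.95),
    "home":    (0.30, 0.00, 0.00,    1.00),
}

coverage = sum(share * use * (1 - stockout) * potency
               for share, use, stockout, potency in settings.values())
print(f"estimated national UUIFB coverage: {coverage:.0%}")  # ~53%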
Methodology for estimating human perception to tremors in high-rise buildings
NASA Astrophysics Data System (ADS)
Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien
2017-07-01
Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
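The proposed intensity parameter is simple to compute from a response spectrum; a short sketch follows (the input spectrum here is made up, and the spectrum itself is assumed to be given).

import numpy as np

def avg_spectrum_intensity(periods, spectrum):
    """Average response-spectrum intensity over 0.1-2.0 s, the ground motion
    intensity parameter proposed in the paper."""
    band = (periods >= 0.1) & (periods <= 2.0)
    t, s = periods[band], spectrum[band]
    return np.trapz(s, t) / (t[-1] - t[0])   # mean spectral value on the band

periods = np.linspace(0.05, 4.0, 80)
sa = 0.3 * np.exp(-periods)                  # made-up response spectrum
print(avg_spectrum_intensity(periods, sa))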
Estimating Agricultural Nitrous Oxide Emissions
USDA-ARS?s Scientific Manuscript database
Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...
Highway User Benefit Analysis System Research Project #128
DOT National Transportation Integrated Search
2000-10-01
In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...
NASA Astrophysics Data System (ADS)
Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios
2010-01-01
Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens in Greece, are used. The Nearest Neighbor and the Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant, using the correlation coefficient, scatter diagrams, and the statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Fuzzy logic modeling of high performance rechargeable batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, P.; Fennie, C. Jr.; Reisner, D.E.
1998-07-01
Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
NASA Astrophysics Data System (ADS)
Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said
2018-06-01
Accurate estimation of solar radiation is the major concern in renewable energy applications. Over the past few years, a lot of machine learning paradigms have been proposed in order to improve the estimation performances, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation, received on a horizontal surface, through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used in order to validate the above methodology. In fact, several combinations have been tested; it was found that the GPR model based on sunshine duration, minimum air temperature, and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
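A minimal sklearn sketch of the best-performing input combination (sunshine duration, minimum air temperature, relative humidity); the kernel choice and the tiny training set are assumptions, not the paper's measured Ghardaïa data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X columns: sunshine duration (h), min air temperature (C), relative humidity (%).
X = np.array([[9.5, 12.0, 35.], [4.0, 8.0, 70.], [11.0, 20.0, 20.], [7.0, 15.0, 50.]])
y = np.array([22.5, 12.1, 27.8, 18.9])   # daily global radiation (MJ/m2), made up

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=[5., 5., 20.]) + WhiteKernel(),  # assumed kernel
    normalize_y=True).fit(X, y)
mean, std = gpr.predict(np.array([[8.0, 14.0, 45.]]), return_std=True)
print(f"predicted radiation: {mean[0]:.1f} +/- {std[0]:.1f} MJ/m2")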
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-04-01
The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
NASA Astrophysics Data System (ADS)
Scaini, C.; Felpeto, A.; Martí, J.; Carniel, R.
2014-05-01
This paper presents a GIS-based methodology to estimate damages produced by volcanic eruptions. The methodology consists of four parts: definition and simulation of eruptive scenarios, exposure analysis, vulnerability assessment, and estimation of expected damages. Multi-hazard eruptive scenarios are defined for the Teide-Pico Viejo active volcanic complex and simulated through the VORIS tool. The exposure analysis identifies the elements exposed to the hazard at stake and focuses on the relevant assets for the study area. The vulnerability analysis is based on previous studies of the built environment and complemented with the analysis of transportation and urban infrastructures. Damage assessment is performed by associating a qualitative damage rating to each combination of hazard and vulnerability. This operation consists of a GIS-based overlay, performed for each hazardous phenomenon considered and for each element. The methodology is then automated into a GIS-based tool using an ArcGIS® program. Given the eruptive scenarios and the characteristics of the exposed elements, the tool produces expected damage maps. The tool is applied to the Icod Valley (north of Tenerife Island), which is likely to be affected by volcanic phenomena in case of eruption from both the Teide-Pico Viejo volcanic complex and the North-West basaltic rift. Results are thematic maps of vulnerability and damage that can be displayed at different levels of detail, depending on the user's preferences. The aim of the tool is to facilitate territorial planning and risk management in active volcanic areas.
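The core of the GIS-based overlay step can be sketched as a lookup of a qualitative damage rating for each (hazard, vulnerability) pair, applied element by element; the ratings and element names below are placeholders.

# Qualitative damage rating per (hazard intensity, vulnerability class).
DAMAGE_MATRIX = {
    ("low", "low"): "none", ("low", "high"): "light",
    ("moderate", "low"): "light", ("moderate", "high"): "moderate",
    ("high", "low"): "moderate", ("high", "high"): "heavy",
}

def expected_damage(elements):
    """elements: iterable of (element_id, hazard_level, vulnerability_class)."""
    return {eid: DAMAGE_MATRIX[(hazard, vuln)] for eid, hazard, vuln in elements}

print(expected_damage([("road_12", "high", "low"), ("school_3", "moderate", "high")]))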
Evaluation of freeway motorist assist program : final report, September 30, 2009.
DOT National Transportation Integrated Search
2010-02-01
This evaluation of the Motorist Assist (MA) program in St. Louis estimated that MA has an annual benefit-cost ratio (B/C) of 38.25:1 using 2009 dollars. This estimate was based on nationally accepted AASHTO methodology and was based on 1082 secondary...
The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.
A Methodology for Developing Army Acquisition Strategies for an Uncertain Future
2007-01-01
For example, they employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life cycle cost estimates; other tools are...
Regression to fuzziness method for estimation of remaining useful life in power plant components
NASA Astrophysics Data System (ADS)
Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.
2014-10-01
Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify its effectiveness, the methodology was benchmarked against the data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
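Stripped of the fuzzy-set weighting, the regression core of the RUL estimate is a linear extrapolation of a degradation trend to a failure limit, as sketched below with invented wear data.

import numpy as np

def remaining_useful_life(times, degradation, failure_limit):
    """Extrapolate a fitted linear degradation trend to the failure limit.
    The paper additionally weights the trend through expert-defined fuzzy
    sets; that step is omitted here."""
    slope, intercept = np.polyfit(times, degradation, 1)
    if slope <= 0:
        return np.inf                        # no degradation trend detected
    t_fail = (failure_limit - intercept) / slope
    return max(t_fail - times[-1], 0.0)

t = np.array([0., 100., 200., 300.])         # operating hours (illustrative)
wear = np.array([0.02, 0.05, 0.09, 0.12])    # degradation parameter (illustrative)
print(remaining_useful_life(t, wear, failure_limit=0.30))  # hours remaining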
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.; Smith, Ralph C.; Wang, Yun
1994-01-01
A problem of continued interest concerns the control of vibrations in a flexible structure and the related problem of reducing structure-borne noise in structural acoustic systems. In both cases, piezoceramic patches bonded to the structures have been successfully used as control actuators. Through the application of a controlling voltage, the patches can be used to reduce structural vibrations which in turn lead to methods for reducing structure-borne noise. A PDE-based methodology for modeling, estimating physical parameters, and implementing a feedback control scheme for problems of this type is discussed. While the illustrating example is a circular plate, the methodology is sufficiently general so as to be applicable in a variety of structural and structural acoustic systems.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
Within the framework of the Special Yield Estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Ministry for Rural Environment, Food and Forestry of Baden-Württemberg (Germany) to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the Special Yield Estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and Special Yield Estimation shows a relative error of 2.3 percent. The comparison of the results for single fields shows that six weeks before harvest the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
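The following sketch illustrates the kind of comparison the paper describes, for a random linear model with more health parameters than sensors. The closed-form MSE expression holds for any linear estimator; the SVD-based basis is one plausible reduced-order choice, not the paper's derived selection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                                  # health parameters vs sensors
H = rng.standard_normal((m, n))              # linearized measurement matrix
Ph = np.eye(n)                               # prior covariance of health params
R = 0.01 * np.eye(m)                         # sensor noise covariance

def mse(G):
    """Theoretical MSE of any linear estimator h_hat = G y, y = H h + v."""
    A = np.eye(n) - G @ H
    return np.trace(A @ Ph @ A.T + G @ R @ G.T)

def map_gain(V):
    """MAP gain for the reduced model h ~ V q (q has dimension m)."""
    Hv, Pq = H @ V, V.T @ Ph @ V
    return V @ Pq @ Hv.T @ np.linalg.inv(Hv @ Pq @ Hv.T + R)

# Full MAP estimator (all n parameters estimated directly).
G_full = Ph @ H.T @ np.linalg.inv(H @ Ph @ H.T + R)

# Conventional choice: tuner = subset of health parameters (first m).
V_sub = np.eye(n)[:, :m]

# One systematic choice: leading right-singular vectors of H Ph^(1/2).
V_svd = np.linalg.svd(H @ np.linalg.cholesky(Ph))[2][:m].T

for name, G in [("full MAP", G_full),
                ("subset tuner", map_gain(V_sub)),
                ("SVD tuner", map_gain(V_svd))]:
    print(f"{name:13s} theoretical MSE = {mse(G):.4f}")
```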
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Estimating the cost of epilepsy in Europe: a review with economic modeling.
Pugliatti, Maura; Beghi, Ettore; Forsgren, Lars; Ekman, Mattias; Sobocki, Patrik
2007-12-01
Based on available epidemiologic, health economic, and international population statistics literature, the cost of epilepsy in Europe was estimated. Europe was defined as the 25 European Union member countries, Iceland, Norway, and Switzerland. Guidelines for epidemiological studies on epilepsy were used for a case definition. A bottom-up prevalence-based cost-of-illness approach, the societal perspective for including the cost items, and the human capital approach as valuation principle for indirect costs were used. The cost estimates were based on selected studies with common methodology and valuation principles. The estimated prevalence of epilepsy in Europe in 2004 was 4.3-7.8 per 1,000. The estimated total cost of the disease in Europe was €15.5 billion in 2004, indirect cost being the single most dominant cost category (€8.6 billion). Direct health care costs were €2.8 billion, outpatient care comprising the largest part (€1.3 billion). Direct nonmedical cost was €4.2 billion. That of antiepileptic drugs was €400 million. The total cost per case was €2,000-11,500 and the estimated cost per European inhabitant was €33. Epilepsy is a relevant socioeconomic burden at individual, family, health services, and societal level in Europe. The greater proportion of such burden is outside the formal health care sector, antiepileptic drugs representing a smaller proportion. Lack of economic data from several European countries and other methodological limitations make this report an initial estimate of the cost of epilepsy in Europe. Prospective incidence cost-of-illness studies from well-defined populations and common methodology are encouraged.
Fluence-based and microdosimetric event-based methods for radiation protection in space
NASA Technical Reports Server (NTRS)
Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)
2002-01-01
The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report #137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, for the present, there is no compelling reason to switch to a new methodology. It is suggested, however, that because of certain drawbacks in the conventional method currently in use, these alternative methodologies should be kept in mind, and that as new data become available and dosimetric techniques become more refined, the question should be revisited; significant improvement might be realized in the future. In addition, such concepts as equivalent dose and organ dose equivalent are discussed and various problems regarding the measurement/estimation of these quantities are presented.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
A quality-based cost model for new electronic systems and products
NASA Astrophysics Data System (ADS)
Shina, Sammy G.; Saigal, Anil
1998-04-01
This article outlines a method for developing a quality-based cost model for the design of new electronic systems and products. The model incorporates a methodology for determining a cost-effective design margin allocation for electronic products and systems and its impact on manufacturing quality and cost. A spreadsheet-based cost estimating tool was developed to help implement this methodology in order for the system design engineers to quickly estimate the effect of design decisions and tradeoffs on the quality and cost of new products. The tool was developed with automatic spreadsheet connectivity to current process capability and with provisions to consider the impact of capital equipment and tooling purchases to reduce the product cost.
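A toy version of the margin-allocation tradeoff, assuming a centered process and a linear cost penalty per sigma of design margin (both assumptions, not the article's model): yield rises with margin while design cost grows, so the effective cost per good unit has a minimum.

```python
from math import erf, sqrt

# Illustrative values only: base manufacturing cost and an assumed extra
# design/performance cost per sigma of margin.
unit_cost = 100.0
margin_cost = 4.0

def yield_fraction(margin_sigma):
    """First-pass yield for a centered process with +/- margin_sigma specs."""
    return erf(margin_sigma / sqrt(2.0))

def effective_cost(margin_sigma):
    """Cost per good unit: scrap is absorbed by the surviving units."""
    return (unit_cost + margin_cost * margin_sigma) / yield_fraction(margin_sigma)

best = min((effective_cost(m / 10.0), m / 10.0) for m in range(10, 61))
print(f"cost-minimizing margin ~ {best[1]:.1f} sigma, cost {best[0]:.2f}")
```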
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
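The standard repeatability relations behind such predictions can be sketched as follows. The value r = 0.66 is an assumed illustration (the abstract does not report the CPCOV estimate numerically), chosen so that three measurements reach 85% accuracy, matching the paper's conclusion.

```python
from math import ceil

# Standard repeatability relations: R2 = (m * r) / (1 + r * (m - 1)) is the
# coefficient of determination of the mean of m measurements; solving for m
# gives the minimum number of measurements for a target accuracy.
def min_measurements(r, target_r2=0.85):
    return ceil(target_r2 * (1.0 - r) / (r * (1.0 - target_r2)))

print(min_measurements(r=0.66))   # -> 3 measurements for 85% accuracy
```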
Variational Bayes method for estimating transit route OD flows using APC data.
DOT National Transportation Integrated Search
2017-01-31
The focus of this study is on the use of large quantities of APC data to estimate OD flows : for transit bus routes. Since most OD flow estimation methodologies based on boarding and : alighting counts were developed before the prevalence of APC tech...
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
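A minimal sketch of the probabilistic cost-driver rollup, assuming hypothetical work-breakdown items with triangular (low, mode, high) inputs; Monte Carlo aggregation yields the cost-risk percentiles that narrow as the design matures and the input spreads tighten.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lower-level cost drivers (WBS items), each characterized by a
# triangular (low, mode, high) distribution in $M -- illustrative values only.
drivers = {
    "inlet fabrication":   (2.0, 3.0, 5.0),
    "combustor machining": (4.0, 5.5, 9.0),
    "nozzle assembly":     (1.5, 2.0, 3.5),
    "integration & test":  (3.0, 4.0, 7.0),
}

N = 100_000
total = sum(rng.triangular(lo, mode, hi, size=N)
            for lo, mode, hi in drivers.values())

# Cost-risk summary percentiles of the aggregate estimate.
p50, p80 = np.percentile(total, [50, 80])
print(f"P50 = {p50:.1f} $M, P80 = {p80:.1f} $M")
```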
1985-11-26
etc.). Major decisions involving reliability studies, based on competing risk methodology, have been made in the past and will continue to be made... censoring mechanism. In such instances, the methodology for estimating relevant reliability probabilities has received considerable attention (cf. David...)... proposal for a discussion of the general methodology.
A Practical Guide for Estimating Dietary Fat and Fiber Using Limited Food Frequency Data.
ERIC Educational Resources Information Center
Neale, Anne Victoria; And Others
1992-01-01
A methodology is presented for estimating daily intake of dietary fat and fiber based on limited food frequency data. The procedure, which relies on National Food Consumption Survey data and daily consumption rates, can provide baseline estimates of dietary patterns for health promotion policymakers. (SLD)
A Technical Evaluation of the First Stage of the Mediterranean Regional Project.
ERIC Educational Resources Information Center
Hollister, Robinson
Objectives of this technical evaluation concerning the transfer of experience in the development of human resources were to develop educational plans based upon comprehensive estimates of manpower requirements and to evaluate the methods used in estimating manpower requirements for educational planning. The methodology involved estimates of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications for reservoir design are discussed.
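A volumetric sketch of the calculation, with illustrative rock properties, conversion efficiency, and plant life; the 5% versus 40% recovery factors are assumptions chosen only to show how strongly the recovery factor drives the estimated potential.

```python
# Volumetric ("heat in place") estimate for 1 km^3 of hot rock. All property
# values below are illustrative assumptions, not the study's inputs.
RHO_C = 2.5e6        # volumetric heat capacity of rock, J/(m^3 K)
VOLUME = 1.0e9       # 1 km^3 in m^3
ETA = 0.11           # assumed heat-to-electricity conversion efficiency
LIFETIME_S = 30 * 365.25 * 24 * 3600.0   # assumed 30-year plant life

def potential_mwe(t_rock_c, t_reject_c=80.0, recovery=0.05):
    """Average electric potential (MWe) of the rock volume over the lifetime."""
    heat_j = RHO_C * VOLUME * (t_rock_c - t_reject_c) * recovery
    return heat_j * ETA / LIFETIME_S / 1.0e6

for rf in (0.05, 0.40):   # low "conventional" vs Gringarten-informed recovery
    print(f"recovery {rf:.0%}: {potential_mwe(200.0, recovery=rf):.0f} MWe")
```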
The IDF Diabetes Atlas methodology for estimating global prevalence of hyperglycaemia in pregnancy.
Linnenkamp, U; Guariguata, L; Beagley, J; Whiting, D R; Cho, N H
2014-02-01
Hyperglycaemia is one of the most prevalent metabolic disorders occurring during pregnancy. Limited data are available on the global prevalence of hyperglycaemia in pregnancy. The International Diabetes Federation (IDF) has developed a methodology for generating estimates of the prevalence of hyperglycaemia in pregnancy, including hyperglycaemia first detected in pregnancy and live births to women with known diabetes, among women of childbearing age (20-49 years). A systematic review of the literature for studies reporting the prevalence of gestational diabetes was conducted. Studies were evaluated and scored to favour those that were representative of a large population, were conducted recently, reported age-specific estimates, and based case identification on a blood test. Age-specific prevalence data from the studies were entered to produce estimates for five-year age groups, using logistic regression to smooth the curves with age as the independent variable. The derived age-specific prevalence was adjusted for differences in diagnostic criteria in the underlying data. Cases of hyperglycaemia in pregnancy were derived from age-specific estimates of fertility and age-specific population estimates. Country-specific estimates were generated for countries with available data. Regional and global estimates were generated based on aggregation and extrapolation for 219 countries and territories. Available fertility rates and diabetes prevalence estimates were used to estimate the proportion of hyperglycaemia in pregnancy that may be due to total diabetes in pregnancy - pregnancy in women with known diabetes and diabetes first detected in pregnancy. The literature review identified 199 studies that were eligible for characterisation and selection. After scoring and exclusion requirements, 46 studies were selected, representing 34 countries. More than 50% of the selected studies came from Europe and the North America and Caribbean region. The smallest number of identified studies came from sub-Saharan Africa. The majority of studies were for high-income countries, although low- and middle-income countries were also represented. Prevalence estimates of hyperglycaemia in pregnancy are sensitive to the data from which they are derived. The IDF methodology is a transparent, reproducible, and modifiable method for estimating the burden of hyperglycaemia in pregnancy. More data are needed, in particular from developing countries, to strengthen the methodology. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
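A minimal sketch of the age-smoothing step under stated assumptions: hypothetical grouped survey data and a logistic curve in age fitted by maximum likelihood, with age as the independent variable as described above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical grouped data: midpoint age, cases, and sample size per
# five-year age group (illustrative numbers, not IDF data).
age = np.array([22.0, 27.0, 32.0, 37.0, 42.0, 47.0])
cases = np.array([12, 25, 41, 66, 88, 101])
n = np.array([800, 820, 790, 810, 760, 750])

def nll(params):
    """Binomial negative log-likelihood of a logistic curve in age."""
    a, b = params
    p = 1.0 / (1.0 + np.exp(-(a + b * age)))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(cases * np.log(p) + (n - cases) * np.log(1.0 - p))

a_hat, b_hat = minimize(nll, x0=np.array([-4.0, 0.05]), method="Nelder-Mead").x

# Smoothed prevalence for each five-year age group of childbearing age.
smooth = 1.0 / (1.0 + np.exp(-(a_hat + b_hat * age)))
print(np.round(smooth, 4))
```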
Mogasale, Vittal; Mogasale, Vijayalaxmi V; Ramani, Enusa; Lee, Jung Seok; Park, Ju Yeon; Lee, Kang Sung; Wierzba, Thomas F
2016-01-29
Because the control of typhoid fever is an important public health concern in low- and middle-income countries, improving typhoid surveillance will help in planning and implementing typhoid control activities such as the deployment of new-generation Vi conjugate typhoid vaccines. We conducted a systematic literature review of longitudinal population-based blood culture-confirmed typhoid fever studies from low and middle income countries published from 1 January 1990 to 31 December 2013. We quantitatively summarized typhoid fever incidence rates and qualitatively reviewed study methodology that could have influenced rate estimates. We used a meta-analysis approach based on a random-effects model to summarize the hospitalization rates. Twenty-two papers presented longitudinal population-based and blood culture-confirmed typhoid fever incidence estimates from 20 distinct sites in low and middle income countries. The reported incidence and hospitalization rates, as well as the study methodology, were heterogeneous across the sites. We elucidated how the incidence rates were underestimated in published studies. We summarized six categories of under-estimation biases observed in these studies and presented potential solutions. Published longitudinal typhoid fever studies in low and middle income countries are geographically clustered and the methodology employed has a potential for underestimation. Future studies should account for these limitations.
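A sketch of random-effects pooling in the DerSimonian-Laird style, applied to hypothetical site-level rates; the abstract does not give the authors' exact estimator, so the numbers and scale (log rates) here are assumptions.

```python
import numpy as np

# Hypothetical site estimates: incidence rates per 100,000 person-years and
# standard errors of the log rates.
log_rate = np.log(np.array([180.0, 420.0, 95.0, 310.0]))
se = np.array([0.15, 0.20, 0.25, 0.18])

w = 1.0 / se**2                                   # fixed-effect weights
theta_fe = np.sum(w * log_rate) / np.sum(w)
Q = np.sum(w * (log_rate - theta_fe) ** 2)        # heterogeneity statistic
k = len(log_rate)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
theta_re = np.sum(w_re * log_rate) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled rate = {np.exp(theta_re):.0f} per 100,000 py "
      f"(95% CI {np.exp(theta_re - 1.96 * se_re):.0f}-"
      f"{np.exp(theta_re + 1.96 * se_re):.0f}), tau^2 = {tau2:.3f}")
```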
Optimal multi-dimensional poverty lines: The state of poverty in Iraq
NASA Astrophysics Data System (ADS)
Ameen, Jamal R. M.
2017-09-01
Poverty estimation based on calorie intake is unrealistic. The established concept of multidimensional poverty has methodological weaknesses in the treatment of different dimensions, and there is disagreement over methods of combining them into a single poverty line. This paper introduces a methodology to estimate optimal multidimensional poverty lines and uses the Iraqi household socio-economic survey data of 2012 to demonstrate the idea. The optimal poverty line for Iraq is found to be 170.5 thousand Iraqi dinars (TID).
Jaciw, Andrew P
2016-06-01
Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.
REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING ...
In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainties in the numerical estimates. In 2006, the National Research Council of the National Academy of Sciences released a report on the health risks from exposure to low levels of ionizing radiation. Cosponsored by the EPA and several other Federal agencies, Health Risks from Exposure to Low Levels of Ionizing Radiation BEIR VII Phase 2 (BEIR VII) primarily addresses cancer and genetic risks from low doses of low-LET radiation. In the draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (White Paper), ORIA proposed changes in EPA’s methodology for estimating radiogenic cancers, based on the contents of BEIR VII and some ancillary information. For the most part, it proposed to adopt the models and methodology recommended in BEIR VII; however, certain modifications and expansions are considered to be desirable or necessary for EPA’s purposes. EPA sought advice from the Agency’s Science Advisory Board on the application of BEIR VII and on issues relating to these modifications and expansions in the Advisory on EPA’s Draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (record # 83044). The SAB issued its Advisory on Jan. 31, 2008 (EPA-SAB-08-
Economic development evaluation based on science and patents
NASA Astrophysics Data System (ADS)
Jokanović, Bojana; Lalic, Bojan; Milovančević, Miloš; Simeunović, Nenad; Marković, Dusan
2017-09-01
Economic development could be achieved through many factors, and science and technology factors can influence it drastically. Because economic analysis can be a very challenging task due to high nonlinearity, this study applied a computational intelligence methodology, the artificial neural network approach, to estimate economic development based on different science and technology factors. Gross domestic product (GDP) was used as the measure of economic development, and patents in different fields were used as the science and technology factors. It was found that patents in the electrical engineering field have the highest influence on economic development, i.e., on GDP.
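A minimal sketch of the approach, assuming synthetic data in which patents in electrical engineering carry the strongest true effect; the network architecture, field names, and the crude sensitivity check are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic stand-in data: patent counts in five fields (features) and GDP
# (target), with electrical engineering given the strongest true effect.
X = rng.uniform(0, 1000, size=(200, 5))
gdp = (3.0 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
       + 0.3 * X[:, 3] + 0.2 * X[:, 4] + rng.normal(0, 50, 200))

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                     random_state=0).fit(Xs, gdp)

# Crude sensitivity check: perturb one standardized input at a time.
base = model.predict(Xs).mean()
for i, field in enumerate(["electrical eng.", "mechanics", "chemistry",
                           "instruments", "other"]):
    Xp = Xs.copy(); Xp[:, i] += 1.0
    print(f"{field:16s} +1 sd -> GDP shift {model.predict(Xp).mean() - base:+.1f}")
```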
Geothermal resources and reserves in Indonesia: an updated revision
NASA Astrophysics Data System (ADS)
Fauzi, A.
2015-02-01
More than 300 high- to low-enthalpy geothermal sources have been identified throughout Indonesia. From the early 1980s until the late 1990s, the geothermal potential for power production in Indonesia was estimated to be about 20 000 MWe. The most recent estimate exceeds 29 000 MWe derived from the 300 sites (Geological Agency, December 2013). This resource estimate has been obtained by adding all of the estimated geothermal potential resources and reserves classified as "speculative", "hypothetical", "possible", "probable", and "proven" from all sites where such information is available. However, this approach to estimating the geothermal potential is flawed because it includes double counting of some reserve estimates as resource estimates, thus giving an inflated figure for the total national geothermal potential. This paper describes an updated revision of the geothermal resource estimate in Indonesia using a more realistic methodology. The methodology proposes that the preliminary "Speculative Resource" category should cover the full potential of a geothermal area and form the base reference figure for the resource of the area. Further investigation of this resource may improve the level of confidence of the category of reserves but will not necessarily increase the figure of the "preliminary resource estimate" as a whole, unless the result of the investigation is higher. A previous paper (Fauzi, 2013a, b) redefined and revised the geothermal resource estimate for Indonesia. The methodology, adopted from Fauzi (2013a, b), will be fully described in this paper. As a result of using the revised methodology, the potential geothermal resources and reserves for Indonesia are estimated to be about 24 000 MWe, some 5000 MWe less than the 2013 national estimate.
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables in comparison with their own real values is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
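A toy illustration of reconstruction by polynomial regression, assuming an idealized parabolic cusp profile and a degree-2 fit; the paper's regression equations and validation data are more elaborate.

```python
import numpy as np

# Hypothetical 2-D enamel profile of a protoconid: (x, y) points along the
# outer enamel surface, with the cuspal region removed to simulate wear.
x = np.linspace(0.0, 6.0, 25)
y_true = -0.35 * (x - 3.0) ** 2 + 3.2          # idealized cusp shape, mm

keep = (x < 1.8) | (x > 4.2)                   # lateral enamel preserved
coeffs = np.polyfit(x[keep], y_true[keep], deg=2)   # polynomial regression

# Reconstruct the missing cuspal enamel and measure crown height at the apex.
y_rec = np.polyval(coeffs, x)
apex = x[np.argmax(y_rec)]
print(f"reconstructed apex at x={apex:.2f} mm, crown height {y_rec.max():.2f} mm")
```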
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
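A minimal sketch of the plug-in estimator on a binary first-order Markov chain, whose true entropy rate is known in closed form; the transition probabilities and block lengths are illustrative. The word-length tradeoff noted in the abstract shows up directly: larger k captures more structure but undersamples the empirical distribution.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Binary first-order Markov chain: P(1|0) = a, P(0|1) = b (assumed process).
a, b, N = 0.1, 0.3, 200_000
s = np.empty(N, dtype=int); s[0] = 0
for t in range(1, N):
    p1 = a if s[t - 1] == 0 else 1.0 - b
    s[t] = rng.random() < p1

def plug_in_rate(seq, k):
    """Plug-in estimate: empirical k-block entropy divided by k (bits/symbol)."""
    words = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p)) / k

def h(p):  # binary entropy function
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

true_rate = (b * h(a) + a * h(b)) / (a + b)   # stationary-weighted row entropies
for k in (2, 5, 10):
    print(f"k={k:2d}: plug-in {plug_in_rate(s, k):.4f} bits (true {true_rate:.4f})")
```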
NASA Astrophysics Data System (ADS)
Ribeiro, J. B.; Silva, C.; Mendes, R.
2010-10-01
A real-coded genetic algorithm methodology that has been developed for the estimation of the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. This methodology makes it possible, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to seek the 15 parameters of the reaction rate equation that fit the numerical to the experimental results. Mass averaging and the plate-gap model were used for the determination of the shock data used in the unreacted explosive JWL equation of state (EOS) assessment, and the thermochemical code THOR retrieved the data used in the detonation products' JWL EOS assessments. The developed methodology was applied to the estimation of the referred parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
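A compact real-coded GA in the same spirit, fitted to a synthetic gauge record from an assumed 3-parameter pressure-rise model standing in for the 15-parameter Lee-Tarver reaction-rate fit. Tournament selection, blend crossover, and elitism are common real-coded GA choices; the paper's operator set may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 3-parameter model p(t) = A * (1 - exp(-t/tau))**n used to generate a
# synthetic "gauge record"; parameters and noise level are assumptions.
def model(theta, t):
    A, tau, n = theta
    return A * (1.0 - np.exp(-t / tau)) ** n

t = np.linspace(0.05, 3.0, 60)
theta_true = np.array([12.0, 0.6, 1.8])
data = model(theta_true, t) + rng.normal(0.0, 0.1, t.size)

def fitness(theta):
    return np.sum((model(theta, t) - data) ** 2)   # misfit to the record

lo, hi = np.array([1.0, 0.1, 0.5]), np.array([30.0, 3.0, 4.0])  # bounds
pop = rng.uniform(lo, hi, size=(60, 3))

for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argmin(scores)].copy()          # keep the best individual
    # Tournament selection: pairwise winners become parents.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((scores[i] < scores[j])[:, None], pop[i], pop[j])
    # Blend (arithmetic) crossover plus Gaussian mutation, clipped to bounds.
    alpha = rng.uniform(0.0, 1.0, (len(pop), 1))
    mates = parents[rng.permutation(len(pop))]
    children = alpha * parents + (1.0 - alpha) * mates
    children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
    pop[0] = elite                                  # elitism

best = pop[np.argmin([fitness(p) for p in pop])]
print("true:", theta_true, "estimated:", np.round(best, 2))
```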
A novel methodology for building robust design rules by using design based metrology (DBM)
NASA Astrophysics Data System (ADS)
Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan
2013-03-01
This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules has been to use a simulation tool and a simple pattern spider mask. At the early stage of a device, the accuracy of the simulation tool is poor, and the evaluation of the simple pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a huge number of pattern situations including various 1D and 2D design structures. In order to overcome the difficulties of inspecting many types of patterns, we introduced Design Based Metrology (DBM) of Nano Geometry Research, Inc., which allowed these mass patterns to be inspected at high speed. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy for building design rules. All of the test patterns were inspected within a few hours. Mass silicon data were handled not by personal judgment but by statistical processing. From the results, robust design rules are successfully verified and extracted. Finally, we found that our methodology is appropriate for building robust design rules.
45 CFR 284.11 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... METHODOLOGY FOR DETERMINING WHETHER AN INCREASE IN A STATE OR TERRITORY'S CHILD POVERTY RATE IS THE RESULT OF... estimating the number and percentage of children in poverty in each State. These methods may include national estimates based on the Current Population Survey; the Small Area Income and Poverty Estimates; the annual...
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
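A sketch of the gradient-matching idea on a linear diffusion equation standing in for the Richards equation (an assumption made to keep the example analytic): derivatives are estimated directly from the data, and the parameter follows from least squares with no forward solver and no explicit initial or boundary conditions in the estimation step.

```python
import numpy as np

# Gradient matching for u_t = k * u_zz, a simplified stand-in for Richards'
# equation. Synthetic data come from the exact solution plus noise.
k_true = 0.02
z = np.linspace(0.0, 1.0, 41)
t = np.linspace(0.1, 2.0, 30)
Z, T = np.meshgrid(z, t, indexing="ij")
u = np.sin(np.pi * Z) * np.exp(-k_true * np.pi**2 * T)     # exact solution
u += np.random.default_rng(5).normal(0.0, 1e-4, u.shape)   # measurement noise

u_t = np.gradient(u, t, axis=1)                            # time derivative
u_zz = np.gradient(np.gradient(u, z, axis=0), z, axis=0)   # spatial 2nd deriv.

# Least-squares slope of u_t against u_zz is the diffusivity estimate.
k_hat = np.sum(u_zz * u_t) / np.sum(u_zz**2)
print(f"true k = {k_true}, estimated k = {k_hat:.4f}")
```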
Estimation of toll road users value of time
DOT National Transportation Integrated Search
2008-02-01
This research examines a new methodology for prospectively estimating the willingness of travelers to use a toll road by combining travel time saved with the income of the prospective customer base. The purpose of the research is to facilitate networ...
Inference regarding multiple structural changes in linear models with endogenous regressors
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
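A simulation sketch of the paper's central point, with one break and a stable reduced form; the break date is estimated by minimizing the 2SLS criterion over candidate dates. All data-generating values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated model with an endogenous regressor and one break at t = 120:
# y = beta_t * x + e, x = pi * z + u, with corr(e, u) != 0 (endogeneity).
T, tau = 240, 120
z = rng.standard_normal(T)                         # instrument
u = rng.standard_normal(T)
e = 0.8 * u + 0.6 * rng.standard_normal(T)         # correlated errors
x = 1.0 * z + u
beta = np.where(np.arange(T) < tau, 1.0, 2.0)      # parameter change
y = beta * x + e

# Stable reduced form: first-stage fit over the full sample.
x_hat = z * (z @ x) / (z @ z)

def ssr_2sls(y_seg, xh_seg):
    """Second-stage SSR using the projected regressor."""
    b = (xh_seg @ y_seg) / (xh_seg @ xh_seg)
    return np.sum((y_seg - b * xh_seg) ** 2)

# Estimate the break fraction by minimizing the 2SLS criterion.
trim = int(0.15 * T)
tau_hat = min(range(trim, T - trim),
              key=lambda k: ssr_2sls(y[:k], x_hat[:k])
                            + ssr_2sls(y[k:], x_hat[k:]))
print(f"true break at {tau}, estimated at {tau_hat}")
```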
NASA Astrophysics Data System (ADS)
Vulpiani, Gianfranco; Ripepe, Maurizio
2017-04-01
The detection and quantitative retrieval of ash plumes is of significant interest due to the environmental, climatic, and socioeconomic effects of ash fallout, which might cause hardship and damage in areas surrounding volcanoes and represents a serious hazard to aircraft. Real-time monitoring of such phenomena is crucial for initializing ash dispersion models. Ground-based and space-borne remote sensing observations provide essential information for scientific and operational applications. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low-level eruptions. Their imagery is available every 10-30 min and suffers from a relatively poor spatial resolution. Moreover, the field of view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels and the observations' overall utility is reduced at night. Ground-based microwave weather radars may represent an important tool for detecting and, to a certain extent, mitigating the hazards presented by ash clouds. The possibility of monitoring in all weather conditions at a fairly high spatial resolution (less than a few hundred meters) and every few minutes after the eruption is the major advantage of using ground-based microwave radar systems. Ground-based weather radar systems can also provide data for estimating the ash volume, total mass, and height of eruption clouds. Previous methodological studies have investigated the possibility of using ground-based single- and dual-polarization radar systems for the remote sensing of volcanic ash clouds. In the present work, the methodology was revised to overcome some limitations related to the assumed microphysics. New scattering simulations based on the T-matrix solution technique were used to set up the parametric algorithms adopted to estimate the mass concentration and ash mean diameter. Furthermore, because quantitative estimation of the erupted materials in the proximity of the volcano's vent is crucial for initializing transportation models, a novel methodology for estimating a volcano eruption's mass discharge rate (MDR), based on the combination of radar and a thermal camera, was developed. We show how it is possible to calculate the mass flow using radar-derived ash concentration and particle diameter at the base of the eruption column using the exit velocity estimated by the thermal camera. The proposed procedure was tested on four Etna eruption episodes that occurred in December 2015 as observed by the available network of C- and X-band radar systems. The results are congruent with other independent methodologies and observations. The agreement between the total erupted mass derived from the retrieved MDR and the plume concentration can be considered a self-consistent methodological assessment. Interestingly, the analysis of the polarimetric radar observations allowed us to derive some features of the ash plume, including the size of the eruption column and the height of the gas thrust region.
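A minimal sketch of the radar and thermal-camera combination: the mass discharge rate is the radar-derived ash concentration at the column base, times the exit velocity from the thermal camera, times the column cross-section. Input values are illustrative, not retrievals from the December 2015 Etna episodes.

```python
import math

# Illustrative inputs; the retrieval chain in the paper produces these.
ash_concentration = 2.5e-3   # kg/m^3, radar-based estimate at column base
exit_velocity = 130.0        # m/s, from thermal-camera image sequence
column_radius = 100.0        # m, assumed effective radius at the vent

mdr = ash_concentration * exit_velocity * math.pi * column_radius**2
print(f"mass discharge rate ~ {mdr:.2e} kg/s")
```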
Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS
NASA Astrophysics Data System (ADS)
Bang, Eugene; Lee, Jiyun
2013-11-01
Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation which will be used to analyze a vast amount of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. This procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates speeds for the front using a two-station-based method. It also includes fine-tuning methods to make the estimation robust against faulty measurements and modeling errors. The paper demonstrates the performance of the algorithm by comparing the results of automated speed estimation to those manually computed previously. All speed estimates from the automated algorithm fall within error bars of ± 30% of the manually computed speeds. In addition, this algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
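A sketch of the plane-wave geometry behind such speed estimates, assuming hypothetical station coordinates and arrival delays: the slowness vector is obtained by least squares from the arrival-time differences, and its inverse norm is the front speed.

```python
import numpy as np

# For a planar front, arrival-time differences satisfy (r_i - r_0) . p = t_i - t_0,
# where p is the slowness vector. Coordinates and delays are hypothetical.
stations = np.array([[0.0, 0.0],        # reference station (km, east/north)
                     [45.0, 5.0],
                     [10.0, 60.0]])
t_rel = np.array([0.0, 220.0, 310.0])   # front arrival times (s) vs reference

A = stations[1:] - stations[0]          # baselines (km)
b = t_rel[1:] - t_rel[0]                # delays (s)
p, *_ = np.linalg.lstsq(A, b, rcond=None)    # slowness vector, s/km

speed = 1.0 / np.linalg.norm(p)              # km/s
azimuth = np.degrees(np.arctan2(p[0], p[1])) # propagation direction
print(f"front speed ~ {speed * 1000:.0f} m/s, azimuth {azimuth:.0f} deg")
```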
Improved population estimates through the use of auxiliary information
Johnson, D.H.; Ralph, C.J.; Scott, J.M.
1981-01-01
When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one of several auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems to be appropriate to these situations. The potential that empirical Bayes methodology has for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.
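A minimal normal-normal empirical Bayes sketch with hypothetical site estimates: the prior is fitted by the method of moments and each direct estimate is shrunk toward the ensemble mean according to its sampling variance. (The auxiliary variables discussed above could enter through a regression-based prior mean; a constant prior mean is assumed here for brevity.)

```python
import numpy as np

# Hypothetical site-level population estimates and known sampling variances.
y = np.array([420.0, 515.0, 380.0, 610.0, 470.0])   # direct estimates
v = np.array([90.0, 60.0, 110.0, 80.0, 70.0]) ** 2  # sampling variances

mu = np.mean(y)                                  # prior mean (method of moments)
tau2 = max(0.0, np.var(y, ddof=1) - np.mean(v))  # prior variance estimate

shrink = v / (v + tau2)                          # per-site shrinkage factor
eb = shrink * mu + (1.0 - shrink) * y
for yi, ei, bi in zip(y, eb, shrink):
    print(f"direct {yi:6.0f} -> EB {ei:6.0f}  (shrinkage {bi:.2f})")
```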
Methods to assess geological CO2 storage capacity: Status and best practice
Heidug, Wolf; Brennan, Sean T.; Holloway, Sam; Warwick, Peter D.; McCoy, Sean; Yoshimura, Tsukasa
2013-01-01
To understand the emission reduction potential of carbon capture and storage (CCS), decision makers need to understand the amount of CO2 that can be safely stored in the subsurface and the geographical distribution of storage resources. Estimates of storage resources need to be made using reliable and consistent methods. Previous estimates of CO2 storage potential for a range of countries and regions have been based on a variety of methodologies resulting in a correspondingly wide range of estimates. Consequently, there has been uncertainty about which of the methodologies were most appropriate in given settings, and whether the estimates produced by these methods were useful to policy makers trying to determine the appropriate role of CCS. In 2011, the IEA convened two workshops which brought together experts for six national surveys organisations to review CO2 storage assessment methodologies and make recommendations on how to harmonise CO2 storage estimates worldwide. This report presents the findings of these workshops and an internationally shared guideline for quantifying CO2 storage resources.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2-3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
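A sketch of the same kind of estimation for a one-dimensional column, assuming the leading term of the Ogata-Banks solution in place of the paper's finite-difference simulation; nonlinear least squares returns the parameters and their approximate standard errors.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Fit velocity and dispersion coefficient to a synthetic breakthrough curve.
L, C0 = 30.0, 1.0                        # column length (cm), inlet concentration

def ade(t, v, D):
    """Leading term of the Ogata-Banks solution for continuous injection."""
    return 0.5 * C0 * erfc((L - v * t) / (2.0 * np.sqrt(D * t)))

rng = np.random.default_rng(7)
t = np.linspace(5.0, 120.0, 40)                            # sampling times (h)
c_obs = ade(t, 0.5, 1.2) + rng.normal(0.0, 0.03, t.size)   # noisy data

(v_hat, D_hat), cov = curve_fit(ade, t, c_obs, p0=[0.3, 0.5])
se = np.sqrt(np.diag(cov))
print(f"v = {v_hat:.3f} +/- {se[0]:.3f} cm/h, "
      f"D = {D_hat:.3f} +/- {se[1]:.3f} cm^2/h")
```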
Combined radar-radiometer surface soil moisture and roughness estimation
USDA-ARS?s Scientific Manuscript database
A robust physics-based combined radar-radiometer, or Active-Passive, surface soil moisture and roughness estimation methodology is presented. Soil moisture and roughness retrieval is performed via optimization, i.e., minimization, of a joint objective function which constrains similar resolution rad...
An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet
NASA Astrophysics Data System (ADS)
Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon
2015-08-01
The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
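A minimal sketch of the activity-based calculation for one vessel, with illustrative particulars and emission factors; the cube-law load model and the towing-load floor are assumptions standing in for the paper's gear-specific treatment of engine load.

```python
# Activity-based fuel and CO2 from consecutive AIS reports for one vessel.
# All vessel particulars and factors below are illustrative values.
P_INSTALLED = 300.0        # main engine power, kW
V_DESIGN = 10.0            # design speed, knots
SFOC = 0.22                # specific fuel consumption, kg fuel per kWh
CO2_PER_FUEL = 3.206       # kg CO2 per kg fuel (diesel-type factor)

# (speed over ground in knots, interval since previous report in hours, towing?)
ais_points = [(9.0, 0.5, False), (3.5, 1.0, True), (3.8, 1.0, True),
              (8.5, 0.5, False)]

fuel = 0.0
for speed, dt, towing in ais_points:
    load = min((speed / V_DESIGN) ** 3, 1.0)   # cube-law engine load
    if towing:
        load = max(load, 0.75)                 # assumed load floor when trawling
    fuel += P_INSTALLED * load * SFOC * dt     # kg of fuel this interval

print(f"fuel {fuel:.0f} kg, CO2 {fuel * CO2_PER_FUEL:.0f} kg")
```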
The cost of vision loss in Canada. 1. Methodology.
Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne
2011-08-01
This paper outlines the methodology used to estimate the cost of vision loss in Canada. The results of this study will be presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as other indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability-Adjusted Life Years (DALYs). To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity and cause. The following sources were used for estimating prevalence: population-based eye studies; Canadian surveys; Canadian journal articles and research studies; and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the cost of VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Plate motions and deformations from geologic and geodetic data
NASA Technical Reports Server (NTRS)
Jordan, T. H.
1986-01-01
Research effort on behalf of the Crustal Dynamics Project focused on the development of methodologies suitable for the analysis of space-geodetic data sets for the estimation of crustal motions, in conjunction with results derived from land-based geodetic data, neo-tectonic studies, and other geophysical data. These methodologies were used to provide estimates of both global plate motions and intraplate deformation in the western U.S. Results from the satellite ranging experiment for the rate of change of the baseline length between San Diego and Quincy, California indicated that relative motion between the North American and Pacific plates over the course of the observing period from 1972 to 1982 was consistent with estimates calculated from geologic data averaged over the past few million years. This result, when combined with other kinematic constraints on western U.S. deformation derived from land-based geodesy, neo-tectonic studies, and other geophysical data, places limits on the possible extension of the Basin and Range province, and implies significant deformation is occurring west of the San Andreas fault. A new methodology was developed to analyze vector-position space-geodetic data to provide estimates of relative vector motions of the observing sites. The algorithm is suitable for the reduction of large, inhomogeneous data sets, and takes into account the full position covariances and errors due to poorly resolved Earth orientation parameters and vertical positions, and reduces biases due to inhomogeneous sampling of the data. This methodology was applied to the problem of estimating the rate-scaling parameter of a global plate tectonic model using satellite laser ranging observations over a five-year interval. The results indicate that the mean rate of global plate motions for that interval is consistent with rates averaged over several million years, and is not consistent with quiescent or greatly accelerated plate motions. This methodology was also used to provide constraints on deformation in the western U.S. using very long baseline interferometry observations over a two-year period.
NASA Astrophysics Data System (ADS)
Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano
2018-01-01
The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have recently been developed in order to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the main difficulty of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm depend on the vertical profiles of temperature, air pressure and moisture typical of each measurement site. To obtain the calibration parameters, simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angles and covering a wide range of W, are needed. In this work yearly GNSS/GPS datasets were used to obtain a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6 %, 13 % and 8 % for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8 % for the AERONET comparison at Valencia. Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percentage bias, bringing the agreement with the GPS (taken as the reference) from a bias of 5.76 % to 0.52 %.
Roca, Judith; Reguant, Mercedes; Canet, Olga
2016-11-01
Teaching strategies are essential in order to facilitate meaningful learning and the development of high-level thinking skills in students. To compare three teaching methodologies (problem-based learning, case-based teaching and traditional methods) in terms of the learning outcomes achieved by nursing students. This quasi-experimental research was carried out in the Nursing Degree programme in a group of 74 students who explored the subject of The Oncology Patient through the aforementioned strategies. A performance test was applied based on Bloom's Revised Taxonomy. A significant correlation was found between the intragroup theoretical and theoretical-practical dimensions. Likewise, intergroup differences were related to each teaching methodology. Hence, significant differences were estimated between the traditional methodology (x̄ = 9.13), case-based teaching (x̄ = 12.96) and problem-based learning (x̄ = 14.84). Problem-based learning was shown to be the most successful learning method, followed by case-based teaching and the traditional methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.
Crovelli, Robert A.; Coe, Jeffrey A.
2008-01-01
The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5–6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
Manned Mars mission cost estimate
NASA Technical Reports Server (NTRS)
Hamaker, Joseph; Smith, Keith
1986-01-01
The potential costs of several options of a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model, as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed and cost estimates presented for six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.
Extension of the soil conservation service rainfall-runoff methodology for ungaged watersheds
DOT National Transportation Integrated Search
1981-07-01
The estimation of direct runoff for ungaged watersheds is a common problem in engineering hydrology. The method of the Soil Conservation Service (SCS) is widely used due to its ease of application. Runoff estimates are based upon the soil types an...
Sophocleous, M.
2000-01-01
A practical methodology for recharge characterization was developed based on several years of field-oriented research at 10 sites in the Great Bend Prairie of south-central Kansas. This methodology combines the soil-water budget on a storm-by-storm year-round basis with the resulting watertable rises. The estimated 1985-1992 average annual recharge was less than 50 mm/year, with a range from 15 mm/year (during the 1988 drought) to 178 mm/year (during the 1993 flood year). Most of this recharge occurs during the spring months. To regionalize these site-specific estimates, an additional methodology based on multiple (forward) regression analysis combined with classification and GIS overlay analyses was developed and implemented. The multiple regression analysis showed that the most influential variables were, in order of decreasing importance, total annual precipitation, average maximum springtime soil-profile water storage, average shallowest springtime depth to watertable, and average springtime precipitation rate. Therefore, four GIS (ARC/INFO) data "layers" or coverages were constructed for the study region based on these four variables, and each such coverage was classified into the same number of data classes to avoid biasing the results. The normalized regression coefficients were employed to weigh the class rankings of each recharge-affecting variable. This approach resulted in recharge zonations that agreed well with the site recharge estimates. During the "Great Flood of 1993," when rainfall totals exceeded normal levels by ~200% in the northern portion of the study region, the developed regionalization methodology was tested against such extreme conditions, and proved to be both practical, based on readily available or easily measurable data, and robust. It was concluded that the combination of multiple regression and GIS overlay analyses is a powerful and practical approach to regionalizing small samples of recharge estimates.
NASA Technical Reports Server (NTRS)
1980-01-01
The cost estimation and economic evaluation methodologies presented are consistent with industry practice for assessing capital investment requirements and operating costs of coal conversion systems. All values stated are based on January 1980 dollars, with appropriate recognition of the time value of money. Evaluation of project economic feasibility can be considered a two-step process (subject to considerable refinement): first, the costs of the project must be quantified, and second, the price at which the product can be manufactured must be determined. These two major categories are discussed. The summary of methodology is divided into five parts: (1) systems costs, (2) instant plant costs, (3) annual operating costs, (4) escalation and discounting process, and (5) product pricing.
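As a rough illustration of the discounting and product-pricing steps named above, the sketch below computes a levelized product price that sets project NPV to zero at a fixed discount rate. The plant figures and the Python formulation are illustrative assumptions, not values or procedures taken from the report:

```python
def levelized_price(capital, annual_opex, annual_output, rate, years):
    """Price per unit of product that recovers capital and operating
    costs with the time value of money (NPV = 0 at the discount rate)."""
    # Capital recovery factor converts the up-front cost to an equivalent annuity.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capital * crf + annual_opex) / annual_output

# Illustrative plant: $2.0e9 instant plant cost, $1.5e8/yr operating cost,
# 5.0e7 MMBtu/yr of product, 10% discount rate, 20-year life.
print(round(levelized_price(2.0e9, 1.5e8, 5.0e7, 0.10, 20), 2))  # ~7.70 $/MMBtu
```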
NASA Technical Reports Server (NTRS)
Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh
1992-01-01
A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
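Once a cloud edge has been matched to its shadow edge, the final step of this approach reduces to simple trigonometry. A minimal sketch with illustrative numbers (the paper's edge matching via generalized Hough transforms is not reproduced here):

```python
import math

def cloud_base_height(shadow_offset_m: float, solar_elevation_deg: float) -> float:
    """Estimate cloud base height from the horizontal cloud-to-shadow
    separation and the solar elevation angle.

    With the sun at elevation angle e, a cloud edge at height h casts a
    shadow displaced horizontally by d = h / tan(e), so h = d * tan(e).
    """
    return shadow_offset_m * math.tan(math.radians(solar_elevation_deg))

# Example: a 2.6 km cloud-shadow separation with the sun 30 degrees high
# implies a cloud base near 1.5 km.
print(round(cloud_base_height(2600.0, 30.0)))  # ~1501 m
```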
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
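The abstract does not give the filter equations; the following is a generic extended-Kalman-filter parameter-update sketch, with the parameters modeled as a slowly drifting random walk and a toy measurement model standing in for the tube kinematics. All names and values are illustrative assumptions, not the authors' mechanics model:

```python
import numpy as np

def ekf_parameter_update(theta, P, z, h, H_jac, Q, R):
    """One EKF step for on-line parameter estimation.

    Parameters are modeled as a slowly drifting random-walk state:
        theta_k = theta_{k-1} + w,   w ~ N(0, Q)
        z_k     = h(theta_k) + v,    v ~ N(0, R)
    """
    P = P + Q                            # predict: random walk inflates covariance
    H = H_jac(theta)                     # Jacobian of h at the prior mean
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    theta = theta + K @ (z - h(theta))   # measurement update
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P

# Toy stand-in for a kinematic model: measurement = [k1*cos(k2), k1*sin(k2)].
h = lambda t: np.array([t[0] * np.cos(t[1]), t[0] * np.sin(t[1])])
H_jac = lambda t: np.array([[np.cos(t[1]), -t[0] * np.sin(t[1])],
                            [np.sin(t[1]),  t[0] * np.cos(t[1])]])

rng = np.random.default_rng(0)
true_theta = np.array([2.0, 0.5])
theta, P = np.array([1.0, 0.0]), np.eye(2)
for _ in range(200):
    z = h(true_theta) + rng.normal(0.0, 0.01, size=2)
    theta, P = ekf_parameter_update(theta, P, z, h, H_jac,
                                    Q=1e-8 * np.eye(2), R=1e-4 * np.eye(2))
print(theta)  # converges toward [2.0, 0.5]
```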
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Estimation of under-reporting in epidemics using approximations.
Gamado, Kokouvi; Streftaris, George; Zachary, Stan
2017-06-01
Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.
Ortega-Villa, Ana Maria; Grantz, Katherine L; Albert, Paul S
2018-06-01
Determining the date of conception is important for estimating gestational age and monitoring whether the fetus and mother are on track in their development and pregnancy. Various methods based on ultrasound have been proposed for dating a pregnancy in high resource countries. However, such techniques may not be available in under-resourced countries. We develop a shared random parameter model for estimating the date of conception using longitudinal assessment of multiple maternal anthropometry and cross-sectional neonatal anthropometry. The methodology is evaluated with a training-test set paradigm as well as with simulations to examine the robustness of the method to model misspecification. We illustrate this new methodology with data from the NICHD Fetal Growth Studies.
USGS Methodology for Assessing Continuous Petroleum Resources
Charpentier, Ronald R.; Cook, Troy A.
2011-01-01
The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
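A minimal Monte Carlo sketch of the kind of probabilistic aggregation the methodology describes, where both the number of productive cells and the per-cell recovery are uncertain. The distributions and values below are illustrative assumptions, not USGS inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Illustrative (not USGS) inputs: the number of productive untested cells
# is itself uncertain, and per-cell EUR is lognormal (billions of cubic feet).
n_cells = rng.integers(200, 400, size=n_trials)
mean_log_eur, sd_log_eur = np.log(0.5), 0.8

totals = np.array([rng.lognormal(mean_log_eur, sd_log_eur, size=n).sum()
                   for n in n_cells])

# Report the assessment as a probability distribution, e.g. F95/F50/F5 fractiles.
print(np.percentile(totals, [5, 50, 95]))
```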
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
This effort was undertaken for the Office of Highway Policy Information to develop a new methodology to generate annual estimates of average fuel efficiency and number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology developed under this study takes a two-step approach. First, preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for the different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year's Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
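The reconciliation step can be sketched as a small constrained optimization: stay close to the stock-model MPG estimates subject to a fuel-balance control total. The vehicle classes and numbers below are hypothetical placeholders, not Highway Statistics values:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: prior MPG from vehicle stock models, VMT by class
# (billions of miles), and a total-fuel control (billions of gallons).
mpg_prior = np.array([23.0, 17.5, 7.0])       # cars, light trucks, combination trucks
vmt = np.array([1500.0, 1100.0, 180.0])
total_fuel = 157.0

def objective(mpg):
    # Stay close to the stock-model estimates (squared relative deviation).
    return np.sum(((mpg - mpg_prior) / mpg_prior) ** 2)

def fuel_balance(mpg):
    # Class fuel use (VMT / MPG) must sum to the published control total.
    return np.sum(vmt / mpg) - total_fuel

res = minimize(objective, x0=mpg_prior, method="SLSQP",
               constraints=[{"type": "eq", "fun": fuel_balance}],
               bounds=[(1.0, None)] * len(mpg_prior))
print(res.x)  # reconciled MPG by vehicle class
```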
NASA Astrophysics Data System (ADS)
Sepúlveda, J.; Hoyos Ortiz, C. D.
2017-12-01
An adequate quantification of precipitation over land is critical for many societal applications, including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, are excellent for estimating the volume of liquid water during a particular precipitation event, but they cannot fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing sensor, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-Band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic compared to observed data, in spite of the many sources of uncertainty, including the sampling volume, the different physical principles of the sensors, the incomplete understanding of the microphysics of precipitation and, most importantly, the rapidly varying droplet size distribution.
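For context, the standard starting point that such local QPE work adapts is a reflectivity-rain-rate power law. The sketch below inverts the classic Marshall-Palmer coefficients, which, as the abstract stresses, were fitted in extratropical climates and would need local refitting from disdrometer and rain-gauge data before use at a site like this one:

```python
import numpy as np

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert a Z = a * R**b power law for rain rate R (mm/h).

    a and b default to the classic Marshall-Palmer values; they are
    literature coefficients, not the locally fitted ones the study derives.
    """
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_reflectivity(np.array([20.0, 35.0, 50.0])))  # mm/h
```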
Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.
2011-01-01
Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is unfeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistics theory that ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Differently from previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps with probabilities to reach any given cell EUR, which are useful to visualize geographical variations in prospectivity.
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Hirt, Evelyn H.; Veeramany, Arun
This research report summarizes the development and evaluation of a prototypic enhanced risk monitor (ERM) methodology (framework) that includes alternative risk metrics and uncertainty analysis. This updated ERM methodology accounts for uncertainty in the equipment condition assessment (ECA), the prognostic result, and the probabilistic risk assessment (PRA) model. It is anticipated that the ability to characterize uncertainty in the estimated risk and update the risk estimates in real time based on equipment condition assessment will provide a mechanism for optimizing plant performance while staying within specified safety margins. These results (based on impacting active component O&M using real-time equipment condition information) are a step towards ERMs that, if integrated with advanced reactor supervisory plant control systems, can help control O&M costs and improve the affordability of advanced reactors.
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
An estimate of undiscovered conventional oil and gas resources of the world, 2012
Schenk, Christopher J.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 565 billion barrels of conventional oil and 5,606 trillion cubic feet of undiscovered conventional natural gas in 171 priority geologic provinces of the world, exclusive of the United States.
Precipitation estimation using L-Band and C-Band soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterome...
Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-01-01
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
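A minimal sketch of the ANN regression and leave-one-out evaluation described above, on synthetic data. The scikit-learn network size and the predictor table are assumptions for illustration, not the authors' configuration:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training table: one row per gauge/date with the four
# predictors named in the abstract and the measured top-5-cm SMC target.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))   # [backscatter dB, NDVI, thermal temp, incidence angle]
y = 0.25 + 0.05 * X[:, 0] - 0.04 * X[:, 1] + rng.normal(0.0, 0.01, 60)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))

# Leave-one-out cross validation, as in the study's evaluation.
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
print("R^2 =", 1.0 - ss_res / ss_tot)
```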
NASA Astrophysics Data System (ADS)
Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra
2004-08-01
In logistics and industrial production managers must deal with the impact of stochastic events to improve performances and reduce costs. In fact, production and logistics systems are generally designed considering some parameters as deterministically distributed. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, and especially for estimated parameters (i.e. Market Request). The proposed methodology can determine the impact of stochastic events in the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.
Effects of Special Use Airspace on Economic Benefits of Direct Flights
NASA Technical Reports Server (NTRS)
Datta, Koushik; Barrington, Craig; Foster, John D. (Technical Monitor)
1996-01-01
A methodology for estimating the economic effects of Special Use Airspace (SUA) on direct route flights is presented in this paper. The methodology is based on evaluating operating costs of aircraft and analyzing the different ground-track distances traveled by flights under different air traffic scenarios. Using this methodology, the following objectives are evaluated: the optimistic bias of studies that assume accessible SUAs, the maximum economic benefit of dynamic use of SUAs, and the marginal economic benefit of the dynamic use of individual SUAs.
NASA Astrophysics Data System (ADS)
Weidner, E. F.; Weber, T. C.; Mayer, L. A.
2017-12-01
Quantifying methane flux originating from marine seep systems in climatically sensitive regions is of critical importance for current and future climate studies. Yet the methane contribution from these systems has been difficult to estimate given the broad spatial scale of the ocean and the heterogeneity of seep activity. One such region is the Eastern Siberian Arctic Sea (ESAS), where bubble release into the shallow water column (<40 meters average depth) facilitates transport of methane to the atmosphere without oxidation. Quantifying the current seep methane flux from the ESAS is necessary to understand not only the total ocean methane budget, but also to provide baseline estimates against which future climate-induced changes can be measured. At the 2016 AGU fall meeting, we presented a new acoustic-based flux methodology using a calibrated broadband split-beam echosounder. The broad (14-24 kHz) bandwidth provides a vertical resolution of 10 cm, making possible the identification of single bubbles. After calibration using a 64 mm copper sphere of known backscatter, the acoustic backscatter of individual bubbles is measured and compared to analytical models to estimate bubble radius. Additionally, bubbles are precisely located and traced upwards through the water column to estimate rise velocity. The combination of radius and rise velocity allows for gas flux estimation. Here, we follow up with the completed implementation of this methodology applied to the Herald Canyon region of the western ESAS. From the 68 recognized seeps, bubble radii and rise velocities were computed for more than 550 individual bubbles. The range of bubble radii, 1-6 mm, is comparable to those published by other investigators, while the radius-dependent rise velocities are consistent with published models. Methane flux for the Herald Canyon region was estimated by extrapolation from individual seep flux values.
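A heavily simplified sketch of the final flux arithmetic. The study derives both bubble size and rise velocity acoustically; here a bubble release rate is simply assumed, and the conversion to a methane mass flux (which requires in-situ gas density and composition) is omitted:

```python
import math

def seep_gas_flux(radii_mm, bubbles_per_second):
    """Volumetric gas flux (m^3/s at in-situ conditions) for one seep,
    given acoustically sized bubble radii and a bubble release rate."""
    mean_volume = sum((4.0 / 3.0) * math.pi * (r * 1e-3) ** 3
                      for r in radii_mm) / len(radii_mm)
    return mean_volume * bubbles_per_second

# Example with radii in the 1-6 mm range reported in the abstract.
print(seep_gas_flux([1.5, 2.0, 3.0, 4.5], bubbles_per_second=10.0))
```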
NASA Astrophysics Data System (ADS)
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so that the ensuing spring PMF is a reasonable estimate. This is of particular importance in times of climate change (CC), since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology, precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio used to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (the so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool for estimating the relative change of the PMSA. Absolute results are of the same order of magnitude as those obtained with the traditional method and observed data, but are also found to depend strongly on the climate projection used and to show spatial variability.
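The moisture-maximization core of such a methodology can be sketched in a few lines: scale each candidate snowstorm by the ratio of the monthly maximum precipitable water to the storm's precipitable water, then take the largest maximized storm as the PMSA. The cap on the ratio and the numbers below are illustrative, not values from the study:

```python
def maximized_snowfall(storm_snowfall_mm, storm_pw_mm, max_pw_mm, cap=2.0):
    """Moisture maximization: scale an observed (or simulated) snowstorm
    by the ratio of the monthly maximum precipitable water to the storm's
    precipitable water, with a conventional cap on the ratio."""
    ratio = min(max_pw_mm / storm_pw_mm, cap)
    return storm_snowfall_mm * ratio

# A 45 mm (water-equivalent) snowstorm that occurred with 12 mm of
# precipitable water, maximized against a 20 mm monthly maximum:
print(maximized_snowfall(45.0, 12.0, 20.0))  # 75.0 mm
```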
Banana orchard inventory using IRS LISS sensors
NASA Astrophysics Data System (ADS)
Nishant, Nilay; Upadhayay, Gargi; Vyas, S. P.; Manjunath, K. R.
2016-04-01
Banana is one of the major crops of India, with increasing export potential, so it is important to estimate the production and acreage of the crop. The present study was therefore carried out to evolve a suitable methodology for estimating banana acreage. The area estimation methodology was devised around the fact that, unlike other crops, the time of plantation of banana differs from farmer to farmer as per local practices or conditions. Thus, in order to capture the peak signatures, a biowindow of 6 months was considered, its NDVI pattern studied, and the optimum two months identified in which banana could be distinguished from other competing crops. The final area of banana for the particular growing cycle was computed by integrating the areas of these two months using LISS III data with a spatial resolution of 23 m. Estimated banana acreages in the three districts were 11,857 ha, 15,202 ha and 11,373 ha for Bharuch, Anand and Vadodara, respectively, with corresponding accuracies of 91.8%, 90% and 88.16%. The study further compared the use of LISS IV data of 5.8 m spatial resolution for estimation of banana using object-based as well as per-pixel classification, and the results of both approaches were compared with statistical reports. In the current paper we describe the various methodologies used to accurately estimate banana acreage.
NASA Technical Reports Server (NTRS)
Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)
2002-01-01
This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.
Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.
2012-01-01
Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12–66%, depending on which methodological assumptions were applied. However, country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
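One simple formulation consistent with this general approach estimates prevalence by comparing an assumed distribution of usual intakes with the mean requirement (an EAR cut-point-style calculation). The sketch and numbers below are illustrative assumptions only, not the paper's data or its absorption model:

```python
from statistics import NormalDist

def prevalence_inadequate(mean_intake_mg, sd_intake_mg, requirement_mg):
    """Fraction of the population whose usual absorbable-zinc intake
    falls below the mean requirement, assuming an approximately normal
    intake distribution."""
    return NormalDist(mean_intake_mg, sd_intake_mg).cdf(requirement_mg)

# Illustrative numbers only: mean absorbable-zinc supply of 2.4 mg/day,
# 25% inter-individual CV, mean requirement of 2.0 mg/day.
print(round(prevalence_inadequate(2.4, 0.6, 2.0), 3))  # ~0.252
```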
William H. Cooke; Dennis M. Jacobs
2002-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
Army Training Study: Battalion Training Survey. Volumes 1 and 2.
1978-08-08
mathematical logic in the methodology. II. MAGNITUDE-ESTIMATION SCALING. A. General Description. A unique methodology, Magnitude-Estimation...
NASA Astrophysics Data System (ADS)
Sun, Xiaolong; Xiang, Yang; Shi, Zheming
2018-05-01
Groundwater flow models implemented to manage regional water resources require aquifer hydraulic parameters. Traditional methods for obtaining these parameters include laboratory experiments, field tests and model inversions, each potentially hindered by its own limitations. Here, we propose a methodology for estimating hydraulic conductivity and storage coefficients using the spectral characteristics of coseismic groundwater-level oscillations and seismic Rayleigh waves. The results from Well X10 are consistent with the variations and spectral characteristics of the water-level oscillations and seismic waves and yield an estimated hydraulic conductivity of approximately 1 × 10⁻³ m s⁻¹ and a storativity of 15 × 10⁻⁶. The proposed methodology for estimating hydraulic parameters in confined aquifers is a practical and novel approach for groundwater management and seismic precursor anomaly analyses.
Remote sensing-based estimation of annual soil respiration at two contrasting forest sites
Gu, Lianhong; Huang, Ni; Black, T. Andrew; ...
2015-11-23
Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this article, we propose a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest).
Advanced Machine Learning Emulators of Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.
2017-12-01
Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up-table generation, and the inversion itself. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become a natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, leaf- and canopy-level PROSPECT and PROSAIL RTMs, and the construction of an optimal look-up-table for atmospheric correction based on MODTRAN5.
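A minimal sketch of the GP-emulation core (not the AGAPE acquisition-driven sequential design itself), using scikit-learn and a toy stand-in for a costly RTM:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for an expensive RTM run: one input, one simulated radiance.
def expensive_rtm(x):
    return np.sin(3.0 * x) + 0.5 * x

# Small design of experiments (the costly part: a few true model runs).
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_rtm(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# The emulator now replaces the RTM: fast predictions with uncertainty.
X_new = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)
```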
Model methodology for estimating pesticide concentration extremes based on sparse monitoring data
Vecchia, Aldo V.
2018-03-22
This report describes a new methodology for using sparse (weekly or less frequent) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in the report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
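A sketch of the model's generative form, with illustrative coefficients; the report's parameter estimation and conditional-simulation machinery lives in its R functions and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)                                       # day of year

# Components named in the report, with illustrative coefficients:
seasonal = 1.2 * np.sin(2.0 * np.pi * (t - 120) / 365)   # seasonal wave
trend = -0.001 * t                                       # long-term trend
flow = 0.4 * rng.normal(size=t.size)                     # flow-related variability

# AR(1) serially correlated errors.
err = np.zeros(t.size)
for k in range(1, t.size):
    err[k] = 0.8 * err[k - 1] + rng.normal(0.0, 0.3)

log_conc = -2.0 + seasonal + trend + flow + err
conc = np.exp(log_conc)                                  # daily concentrations

# Quantity used in acute exposure assessment:
print("annual maximum daily concentration:", conc.max())
```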
Challenges in the estimation of Net SURvival: The CENSUR working survival group.
Giorgi, R
2016-10-01
Net survival, the survival probability that would be observed, in a hypothetical world, where the cancer of interest would be the only possible cause of death, is a key indicator in population-based cancer studies. Accounting for mortality due to other causes, it allows cross-country comparisons or trends analysis and provides a useful indicator for public health decision-making. The objective of this study was to show how the creation and formalization of a network comprising established research teams, which already had substantial and complementary experience in both cancer survival analysis and methodological development, make it possible to meet challenges and thus provide more adequate tools, to improve the quality and the comparability of cancer survival data, and to promote methodological transfers in areas of emerging interest. The Challenges in the Estimation of Net SURvival (CENSUR) working survival group is composed of international researchers highly skilled in biostatistics, methodology, and epidemiology, from different research organizations in France, the United Kingdom, Italy, Slovenia, and Canada, and involved in French (FRANCIM) and European (EUROCARE) cancer registry networks. The expected advantages are an interdisciplinary, international, synergistic network capable of addressing problems in public health, for decision-makers at different levels; tools for those in charge of net survival analyses; a common methodology that makes unbiased cross-national comparisons of cancer survival feasible; transfer of methods for net survival estimations to other specific applications (clinical research, occupational epidemiology); and dissemination of results during an international training course. The formalization of the international CENSUR working survival group was motivated by a need felt by scientists conducting population-based cancer research to discuss, develop, and monitor implementation of a common methodology to analyze net survival in order to provide useful information for cancer control and cancer policy. A "team science" approach is necessary to address new challenges concerning the estimation of net survival. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher-order harmonics; it is therefore necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. This new methodology is tested on a standard commercial five-spoke car wheel. The results obtained are qualitatively compared to those derived from the application of the former methodology, leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427
Methodology for computing the burden of disease of adverse events following immunization.
McDonald, Scott A; Nijsten, Danielle; Bollaerts, Kaatje; Bauwens, Jorgen; Praet, Nicolas; van der Sande, Marianne; Bauchau, Vincent; de Smedt, Tom; Sturkenboom, Miriam; Hahné, Susan
2018-03-24
Composite disease burden measures such as disability-adjusted life-years (DALY) have been widely used to quantify the population-level health impact of disease or injury, but application has been limited for the estimation of the burden of adverse events following immunization. Our objective was to assess the feasibility of adapting the DALY approach for estimating adverse event burden. We developed a practical methodological framework, explicitly describing all steps involved: acquisition of relative or absolute risks and background event incidence rates, selection of disability weights and durations, and computation of the years lived with disability (YLD) measure, with appropriate estimation of uncertainty. We present a worked example, in which YLD is computed for 3 recognized adverse reactions following 3 childhood vaccination types, based on background incidence rates and relative/absolute risks retrieved from the literature. YLD provided extra insight into the health impact of an adverse event over presentation of incidence rates only, as severity and duration are additionally incorporated. As well as providing guidance for the deployment of DALY methodology in the context of adverse events associated with vaccination, we also identified where data limitations potentially occur. Burden of disease methodology can be applied to estimate the health burden of adverse events following vaccination in a systematic way. As with all burden of disease studies, interpretation of the estimates must consider the quality and accuracy of the data sources contributing to the DALY computation. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
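The YLD computation at the heart of this framework is short: attributable cases times disability weight times duration. The sketch below uses purely illustrative risk, disability-weight, and duration values, not figures from the paper's worked example:

```python
def yld_per_million_doses(excess_risk_per_dose, disability_weight,
                          duration_years):
    """Years lived with disability attributable to one adverse event
    type, per million vaccine doses:
        YLD = attributable cases x disability weight x duration.
    """
    cases = excess_risk_per_dose * 1_000_000
    return cases * disability_weight * duration_years

# Illustrative only: a reaction with a 1-in-10,000 excess risk,
# disability weight 0.1, lasting about a week.
print(yld_per_million_doses(1e-4, 0.1, 7.0 / 365.0))  # ~0.19 YLD
```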
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
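A minimal numerical sketch of the Green-Kubo estimate on a synthetic correlated signal: autocorrelate the flux, then integrate the correlation up to a plateau. The physical prefactor (e.g., V/(k_B T²) for thermal conductivity) and the cutoff choice are assumptions here, and the paper's stationarity tests and error bounds are not reproduced:

```python
import numpy as np

def green_kubo_coefficient(flux, dt, prefactor, cutoff):
    """Green-Kubo estimate: integrate the flux autocorrelation function
    up to a cutoff lag chosen where the running integral plateaus."""
    n = len(flux)
    acf = np.array([np.mean(flux[:n - k] * flux[k:]) for k in range(cutoff)])
    running = prefactor * np.cumsum(acf) * dt   # running integral vs. lag
    return acf, running

# Synthetic flux with exponentially decaying correlations (AR(1) process):
rng = np.random.default_rng(3)
flux = np.zeros(200_000)
for k in range(1, flux.size):
    flux[k] = 0.95 * flux[k - 1] + rng.normal()

acf, running = green_kubo_coefficient(flux, dt=1.0, prefactor=1.0, cutoff=400)
print(running[-1])  # plateau value ~ acf(0) times the correlation time
```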
NASA Astrophysics Data System (ADS)
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
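The Green-Kubo construction described above can be sketched numerically: estimate the flux autocorrelation per replica, integrate it in time, and use the replica-to-replica spread as an error estimate. In this Python sketch the prefactor, time step, ensemble size, and the synthetic AR(1) flux series are all placeholders, not the authors' simulation setup or their extended error theory.

```python
import numpy as np

rng = np.random.default_rng(0)
n_replicas, n_steps, dt = 8, 4000, 1.0e-3  # placeholder ensemble size and time step
prefactor = 1.0  # e.g. V/(kB*T^2) for thermal conductivity; application-specific

# Synthetic stand-in for per-replica flux time series (an AR(1) process).
flux = np.empty((n_replicas, n_steps))
for r in range(n_replicas):
    x = 0.0
    for t in range(n_steps):
        x = 0.95 * x + rng.normal()
        flux[r, t] = x

def acf(x, max_lag):
    """Autocorrelation estimate averaged over all available time origins."""
    x = x - x.mean()
    return np.array([np.mean(x[: x.size - k] * x[k:]) for k in range(max_lag)])

max_lag = 500
coeffs = []
for r in range(n_replicas):
    c = acf(flux[r], max_lag)
    integral = dt * (c[0] / 2 + c[1:-1].sum() + c[-1] / 2)  # trapezoid rule
    coeffs.append(prefactor * integral)
coeffs = np.array(coeffs)

# Ensemble mean with a simple standard error from replica-to-replica spread.
print(f"coefficient = {coeffs.mean():.4f} "
      f"+/- {coeffs.std(ddof=1) / np.sqrt(n_replicas):.4f}")
```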
Effect of Body Composition Methodology on Heritability Estimation of Body Fatness
Elder, Sonya J.; Roberts, Susan B.; McCrory, Megan A.; Das, Sai Krupa; Fuss, Paul J.; Pittas, Anastassios G.; Greenberg, Andrew S.; Heymsfield, Steven B.; Dawson-Hughes, Bess; Bouchard, Thomas J.; Saltzman, Edward; Neale, Michael C.
2014-01-01
Heritability estimates of human body fatness vary widely and the contribution of body composition methodology to this variability is unknown. The effect of body composition methodology on estimations of genetic and environmental contributions to body fatness variation was examined in 78 adult male and female monozygotic twin pairs reared apart or together. Body composition was assessed by six methods – body mass index (BMI), dual energy x-ray absorptiometry (DXA), underwater weighing (UWW), total body water (TBW), bioelectric impedance (BIA), and skinfold thickness. Body fatness was expressed as percent body fat, fat mass, and fat mass/height² to assess the effect of body fatness expression on heritability estimates. Model-fitting multivariate analyses were used to assess the genetic and environmental components of variance. Mean BMI was 24.5 kg/m² (range of 17.8–43.4 kg/m²). There was a significant effect of body composition methodology (p<0.001) on heritability estimates, with UWW giving the highest estimate (69%) and BIA giving the lowest estimate (47%) for fat mass/height². Expression of body fatness as percent body fat resulted in significantly higher heritability estimates (on average 10.3% higher) compared to expression as fat mass/height² (p=0.015). DXA and TBW methods expressing body fatness as fat mass/height² gave the least biased heritability assessments, based on the small contribution of specific genetic factors to their genetic variance. A model combining DXA and TBW methods resulted in a relatively low FM/ht² heritability estimate of 60%, and significant contributions of common and unique environmental factors (22% and 18%, respectively). The body fatness heritability estimate of 60% indicates a smaller contribution of genetic variance to total variance than many previous studies using less powerful research designs have indicated. The results also highlight the importance of environmental factors and possibly genotype by environmental interactions in the etiology of weight gain and the obesity epidemic. PMID:25067962
Duncan C. Lutes; Robert E. Keane
2006-01-01
The Fuel Load method (FL) is used to sample dead and down woody debris, determine depth of the duff/litter profile, estimate the proportion of litter in the profile, and estimate total vegetative cover and dead vegetative cover. Down woody debris (DWD) is sampled using the planar intercept technique based on the methodology developed by Brown (1974). Pieces of dead...
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...
How to estimate greenhouse gas (GHG) emissions from an excavator by using CAT's performance chart
NASA Astrophysics Data System (ADS)
Hajji, Apif M.; Lewis, Michael P.
2017-09-01
Construction equipment activities are a major part of many infrastructure projects, and this equipment typically releases large quantities of greenhouse gas (GHG) emissions through fuel consumption. Because equipment productivity drives fuel consumption, an estimating tool based on the productivity rate can accurately assess the GHG emissions resulting from equipment activities. This paper proposes a methodology to estimate the environmental impact of a common construction activity, and delivers a sensitivity analysis and a case study for an excavator based on trench excavation. The methodology can be applied as a stand-alone model or as a module integrated with other emissions estimators. GHG emissions are highly correlated with diesel fuel use, at approximately 10.15 kilograms (kg) of CO2 per gallon of diesel fuel. The results showed that the productivity rate model obtained from multiple regression analysis can be used as the basis for estimating GHG emissions, and as a framework for developing emissions footprints and understanding the environmental impact of construction equipment activities.
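The productivity-to-emissions chain reduces to simple arithmetic. A minimal sketch, assuming hypothetical values for trench volume, productivity rate (in practice taken from a regression model such as the paper's), and fuel consumption rate; only the 10.15 kg CO2 per gallon factor comes from the abstract.

```python
# Hypothetical trench excavation job (placeholder numbers).
trench_volume_m3 = 1200.0       # total volume to excavate
productivity_m3_per_hr = 85.0   # from a productivity regression model
fuel_gal_per_hr = 6.5           # excavator fuel consumption rate

CO2_KG_PER_GAL_DIESEL = 10.15   # factor cited in the abstract

hours = trench_volume_m3 / productivity_m3_per_hr
fuel_gal = hours * fuel_gal_per_hr
co2_kg = fuel_gal * CO2_KG_PER_GAL_DIESEL

print(f"{hours:.1f} h of work, {fuel_gal:.0f} gal diesel, {co2_kg:.0f} kg CO2")
```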
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach.
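A simplified spatial-domain residual extraction in the spirit of the argument above can be sketched with a median filter as the denoiser. This is a sketch under stated assumptions, not the paper's adaptive filter or two-stage enhancement; the maximum-likelihood-style accumulation is a common PRNU construction used here only for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(img):
    """Spatial-domain residual: image minus its median-filtered version."""
    return img - median_filter(img, size=3)

def estimate_prnu(images):
    """Accumulate a PRNU fingerprint from several images of one camera."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img in images:
        w = noise_residual(img)
        num += w * img          # residual modeled as img * prnu + noise
        den += img * img
    return num / np.maximum(den, 1e-8)

def match_score(img, fingerprint):
    """Normalized correlation between a test residual and the fingerprint."""
    w = noise_residual(img).ravel()
    f = (fingerprint * img).ravel()   # expected PRNU term for this image
    w, f = w - w.mean(), f - f.mean()
    return float(w @ f / (np.linalg.norm(w) * np.linalg.norm(f) + 1e-12))

# Demo with synthetic data: a shared PRNU pattern embedded in flat images.
rng = np.random.default_rng(1)
prnu = 0.02 * rng.normal(size=(64, 64))
shots = [128 * (1 + prnu) + rng.normal(0, 2, (64, 64)) for _ in range(20)]
fp = estimate_prnu(shots)
probe = 128 * (1 + prnu) + rng.normal(0, 2, (64, 64))
print("match score:", match_score(probe, fp))  # high for the matching camera
```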
The Measurement of Term Importance in Automatic Indexing.
ERIC Educational Resources Information Center
Salton, G.; And Others
1981-01-01
Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
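The band construction can be illustrated generically: perturb the driving factors of a sector over plausible ranges, recompute the projection for each combination, and take the envelope. The linear toy projection model and the growth ranges below are invented for illustration, not the Spanish inventory's drivers.

```python
import numpy as np

years = np.arange(2015, 2021)

def project_emissions(activity_growth, ef_trend, base=100.0):
    """Toy projection: emissions = base * activity index * emission-factor index."""
    t = years - years[0]
    return base * (1 + activity_growth) ** t * (1 + ef_trend) ** t

# Central driving-factor assumptions and sensitivity ranges (placeholders).
central = project_emissions(activity_growth=0.02, ef_trend=-0.01)
scenarios = [
    project_emissions(g, e)
    for g in (0.00, 0.02, 0.04)      # low/central/high activity growth
    for e in (-0.02, -0.01, 0.00)    # fast/central/no emission-factor improvement
]

lower = np.min(scenarios, axis=0)   # nonstatistical uncertainty band: envelope
upper = np.max(scenarios, axis=0)
for y, lo, c, hi in zip(years, lower, central, upper):
    print(f"{y}: {lo:6.1f} <= {c:6.1f} <= {hi:6.1f}")
```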
Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.K.; Buelt, J.L.; Stottlemyre, J.A.
1991-02-01
Because of the complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.
William H. Cooke; Dennis M. Jacobs
2005-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
Verhaeghe, Nick; Lievens, Delfine; Annemans, Lieven; Vander Laenen, Freya; Putman, Koen
2016-01-01
The use of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals is associated with a higher likelihood of developing several diseases and injuries and, as a consequence, with considerable health-care expenditures. There is as yet a lack of consistent methodologies to estimate the economic impact of addictive substances on society. The aim was to assess the methodological approaches applied in social cost studies estimating the economic impact of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals. A systematic literature review through the electronic databases Medline (PubMed) and Web of Science was performed. Studies in English published from 1997 examining the social costs of the addictive substances alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals were eligible for inclusion. Twelve social cost studies met the inclusion criteria. In all studies, the direct and indirect costs were measured, but the intangible costs were seldom taken into account. A wide variety of cost items was included across studies. Sensitivity analyses to address the uncertainty around certain cost estimates were conducted in eight of the studies considered in the review. Differences in the cost items included in cost-of-illness studies limit comparison across studies. It is clear that it is difficult to deal with all consequences of substance use in cost-of-illness studies. Future social cost studies should be based on sound methodological principles in order to produce more reliable estimates of the economic burden of substance use.
Assessment of Undiscovered Gas Resources of the Eastern Oregon and Washington Province, 2006
Brownfield, Michael E.; Tennyson, Marilyn E.; Ahlbrandt, Thomas S.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pollastro, Richard M.; Schenk, Christopher J.
2006-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean of 2.4 trillion cubic feet (TCF) of undiscovered natural gas in the Eastern Oregon and Washington Province. More than 90 percent, or 2.1 TCF, of the estimated undiscovered natural gas is continuous gas estimated to be trapped in Tertiary rocks overlain by the Columbia River Basalt Group.
Gyrokinetic modelling of the quasilinear particle flux for plasmas with neutral-beam fuelling
NASA Astrophysics Data System (ADS)
Narita, E.; Honda, M.; Nakata, M.; Yoshida, M.; Takenaga, H.; Hayashi, N.
2018-02-01
A quasilinear particle flux is modelled based on gyrokinetic calculations. The particle flux is estimated by determining factors, namely, coefficients of off-diagonal terms and a particle diffusivity. In this paper, the methodology to estimate the factors is presented using a subset of JT-60U plasmas. First, the coefficients of off-diagonal terms are estimated by linear gyrokinetic calculations. Next, to obtain the particle diffusivity, a semi-empirical approach is taken. Most experimental analyses of particle transport have assumed that turbulent particle fluxes are zero in the core region. On the other hand, even in the stationary state, the plasmas in question have a finite turbulent particle flux due to neutral-beam fuelling. By combining estimates of the experimental turbulent particle flux and the coefficients of off-diagonal terms calculated earlier, the particle diffusivity is obtained. The particle diffusivity should reflect a saturation amplitude of instabilities. The particle diffusivity is investigated in terms of the effects of the linear instability and linear zonal flow response, and it is found that a formula including these effects roughly reproduces the particle diffusivity. The developed framework for prediction of the particle flux is flexible enough to accommodate terms neglected in the current model. The methodology to estimate the quasilinear particle flux has such a low computational cost that a database consisting of the resultant coefficients of off-diagonal terms and particle diffusivity can be constructed to train a neural network. The development of the methodology is the first step towards a neural-network-based particle transport model for fast prediction of the particle flux.
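The back-out step for the diffusivity can be written compactly. Assuming, for illustration, a quasilinear form in which the flux splits into a diagonal diffusive term and off-diagonal thermodiffusion/convection terms whose coefficients come from linear gyrokinetic runs; the flux expression and every number below are placeholders, not the paper's exact formulation.

```python
# Assumed quasilinear form (illustrative, not the paper's exact expression):
#   Gamma = D * (n/R) * (R/Ln + c_T * R/LT + c_p)
# where R/Ln, R/LT are normalized inverse gradient lengths and c_T, c_p are
# off-diagonal coefficients taken from linear gyrokinetic calculations.

gamma_exp = 2.0e19   # experimental turbulent particle flux [1/(m^2 s)], placeholder
n = 3.0e19           # electron density [1/m^3], placeholder
R = 3.4              # major radius [m], placeholder
r_ln, r_lt = 2.0, 6.0   # inverse gradient lengths, placeholders
c_t, c_p = -0.3, 0.5    # off-diagonal coefficients from gyrokinetics, placeholders

drive = (n / R) * (r_ln + c_t * r_lt + c_p)
D = gamma_exp / drive
print(f"particle diffusivity D = {D:.2f} m^2/s")
```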
Gallart, F; Llorens, P; Latron, J; Cid, N; Rieradevall, M; Prat, N
2016-09-15
Hydrological data for assessing the regime of temporary rivers are often non-existent or scarce. The scarcity of flow data makes it impossible to characterize the hydrological regime of temporary streams and, in consequence, to select the correct periods and methods to determine their ecological status. This is why the TREHS software is being developed, within the framework of the LIFE Trivers project; it will help managers to implement the European Water Framework Directive adequately in this kind of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six transient 'aquatic states', based on hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Because of its qualitative nature, this approach allows using alternative methodologies to assess the regime of temporary rivers when there are no observed flow data. These methods, based on interviews and high-resolution aerial photographs, were tested for estimating the aquatic regime of temporary rivers. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate this methodology. On the one hand, non-structured interviews were conducted with inhabitants of villages near the gauging stations. On the other hand, the historical series of available orthophotographs were examined. Flow records measured at the gauging stations were used to validate the alternative methods. Flow permanence in the reaches was estimated reasonably by the interviews and adequately by aerial photographs, when compared with the values estimated using daily flows. The degree of seasonality was assessed only roughly by the interviews. The recurrence of disconnected pools was not detected by flow records but was estimated with some divergences by the two methods. The combination of the two alternative methods allows substituting or complementing flow records, to be updated in the future through monitoring by professionals and citizens.
Campbell, J Elliott; Moen, Jeremie C; Ney, Richard A; Schnoor, Jerald L
2008-03-01
Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively.
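The two aggregation schemes differ only in where the SOC density is applied. A toy contrast with invented areas and densities: the regression-coefficient approach multiplies one regional mean density by the total area, while the GIS approach sums products over soil-polygon/landcover units.

```python
# Regression-coefficient approach: one mean density for the whole forest region.
region_area_ha = 6_000_000            # placeholder forest area
mean_density_tc_per_ha = 60.0         # placeholder regional mean SOC density
soc_regression = region_area_ha * mean_density_tc_per_ha

# GIS approach: sum over soil-polygon x landcover intersections (placeholders).
polygons = [  # (area_ha, SOC density tC/ha) from a soil database
    (2_500_000, 48.0),
    (1_800_000, 75.0),
    (1_700_000, 82.0),
]
soc_gis = sum(area * density for area, density in polygons)

print(f"regression: {soc_regression/1e6:.0f} MtC, GIS: {soc_gis/1e6:.0f} MtC, "
      f"ratio {soc_gis/soc_regression:.2f}")
```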
Social Costs of Gambling in the Czech Republic 2012.
Winkler, Petr; Bejdová, Markéta; Csémy, Ladislav; Weissová, Aneta
2017-12-01
Evidence about the social costs of gambling is scarce and the methodology for their calculation has been subject to strong criticism. We aimed to estimate the social costs of gambling in the Czech Republic in 2012. This retrospective, prevalence-based cost-of-illness study builds on the revised methodology of the Australian Productivity Commission. Social costs of gambling were estimated by combining epidemiological and economic data. Prevalence data on negative consequences of gambling were taken from existing national epidemiological studies. Economic data were taken from various national and international sources. Consequences of problem and pathological gambling only were taken into account. In 2012, the social costs of gambling in the Czech Republic were estimated to range between 541,619 and 619,608 thousand EUR. While personal and family costs accounted for 63% of all social costs, direct medical costs were estimated at only 0.25 to 0.28% of all social costs. This is the first study to estimate the social costs of gambling in any of the Central and East European countries. It builds upon solid evidence about the prevalence of gambling-related problems in the Czech Republic and satisfactorily reliable economic data. However, there are a number of limitations stemming from the assumptions that were made, which suggest that the methodology for calculating the social costs of gambling needs further development.
Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash
A methodology has been developed to determine the impacts of the ISO 50001 Energy Management System (EnMS) at a regional or country level. The impacts of ISO 50001 EnMS include energy, CO2 emission, and cost savings. This internationally recognized and transparent methodology has been embodied in a user-friendly Microsoft Excel® based tool called the ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical for obtaining accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
A methodology for estimating risks associated with landslides of contaminated soil into rivers.
Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars
2014-02-15
Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslides in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities of failure are low; however, if a landslide occurs, the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high.
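The probabilistic chaining can be sketched as a two-stage Monte Carlo: draw whether a landslide occurs, then, conditional on a slide, draw a mobilized contaminant load and compare it with the acceptable load. All probabilities and distributions below are invented placeholders, not values from the Göta Älv case; the sketch only reproduces the qualitative pattern of a low overall failure probability with high conditional exceedance.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

p_landslide = 0.002            # annual landslide probability (placeholder)
acceptable_load_kg = 50.0      # acceptable contaminant load to the river (placeholder)

# Conditional on a landslide: lognormal mobilized contaminant load (placeholder).
slide = rng.random(n) < p_landslide
load = np.where(slide, rng.lognormal(mean=4.0, sigma=0.8, size=n), 0.0)

p_failure = np.mean(load > acceptable_load_kg)
p_exceed_given_slide = np.mean(load[slide] > acceptable_load_kg)
print(f"P(failure) = {p_failure:.5f}, "
      f"P(exceed EQS-type limit | landslide) = {p_exceed_given_slide:.2f}")
```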
Accelerated Aging in Electrolytic Capacitors for Prognostics
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank
2012-01-01
The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems and the use of accelerated aging techniques for prognostics algorithm development. Results on thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian-based implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small sample size constraint. The results observed on the RUL estimation are consistent throughout the validation tests comparing relative accuracy and prediction error. It has been observed that the inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is consistent throughout the validation tests, indicating the need for a more detailed degradation model or the use of an algorithm that could estimate model parameters on-line. Based on the observed degradation process under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported. The current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
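A minimal sketch of the filtering-plus-forecasting loop described in (i) and (ii): a discrete Kalman filter tracks percentage capacitance loss under an assumed constant-rate degradation model, and RUL is the projected time to a failure threshold. The model structure, noise levels, and threshold are placeholders, not the paper's empirical model.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, rate_true = 1.0, 0.08          # hours per step, true %-loss per step (placeholder)
threshold = 10.0                   # % capacitance loss defining end of life

# State x = [loss, rate]; constant-rate degradation model.
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-6])          # process noise (placeholder)
R = np.array([[0.05]])             # measurement noise (placeholder)

x = np.array([0.0, 0.02])          # deliberately poor initial rate guess
P = np.diag([1.0, 0.01])

true_loss = 0.0
for k in range(60):                # 60 aging measurements
    true_loss += rate_true * dt
    z = true_loss + rng.normal(0, 0.2)
    # Kalman predict/update cycle.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# RUL: project the filtered state forward until the threshold is crossed.
loss_hat, rate_hat = x
rul = max(threshold - loss_hat, 0.0) / max(rate_hat, 1e-9) * dt
print(f"estimated loss {loss_hat:.2f}%, rate {rate_hat:.3f}%/h, RUL ~ {rul:.0f} h")
```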
Devleesschauwer, Brecht; Haagsma, Juanita A.; Angulo, Frederick J.; Bellinger, David C.; Cole, Dana; Döpfer, Dörte; Fazil, Aamir; Fèvre, Eric M.; Gibb, Herman J.; Hald, Tine; Kirk, Martyn D.; Lake, Robin J.; Maertens de Noordhout, Charline; Mathers, Colin D.; McDonald, Scott A.; Pires, Sara M.; Speybroeck, Niko; Thomas, M. Kate; Torgerson, Paul R.; Wu, Felicia; Havelaar, Arie H.; Praet, Nicolas
2015-01-01
Background The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force to transform epidemiological information into FBD burden estimates. Methods and Findings The global and regional burden of 31 FBDs was quantified, along with limited estimates for 5 other FBDs, using Disability-Adjusted Life Years in a hazard- and incidence-based approach. To accomplish this task, the following workflow was defined: outline of disease models and collection of epidemiological data; design and completion of a database template; development of an imputation model; identification of disability weights; probabilistic burden assessment; and estimating the proportion of the disease burden by each hazard that is attributable to exposure by food (i.e., source attribution). All computations were performed in R and the different functions were compiled in the R package 'FERG'. Traceability and transparency were ensured by sharing results and methods in an interactive way with all FERG members throughout the process. Conclusions We developed a comprehensive framework for estimating the global burden of FBDs, in which methodological simplicity and transparency were key elements. All the tools developed have been made available and can be translated into a user-friendly national toolkit for studying and monitoring food safety at the local level. PMID:26633883
Coal resources available for development; a methodology and pilot study
Eggleston, Jane R.; Carter, M. Devereux; Cobb, James C.
1990-01-01
Coal accounts for a major portion of our Nation's energy supply in projections for the future. A demonstrated reserve base of more than 475 billion short tons, as the Department of Energy currently estimates, indicates that, on the basis of today's rate of consumption, the United States has enough coal to meet projected energy needs for almost 200 years. However, the traditional procedures used for estimating the demonstrated reserve base do not account for many environmental and technological restrictions placed on coal mining. A new methodology has been developed to determine the quantity of coal that might actually be available for mining under current and foreseeable conditions. This methodology is unique in its approach, because it applies restrictions to the coal resource before it is mined. Previous methodologies incorporated restrictions into the recovery factor (a percentage), which was then globally applied to the reserve (minable coal) tonnage to derive a recoverable coal tonnage. None of the previous methodologies define the restrictions and their area and amount of impact specifically. Because these restrictions and their impacts are defined in this new methodology, it is possible to achieve more accurate and specific assessments of available resources. This methodology has been tested in a cooperative project between the U.S. Geological Survey and the Kentucky Geological Survey on the Matewan 7.5-minute quadrangle in eastern Kentucky. Pertinent geologic, mining, land-use, and technological data were collected, assimilated, and plotted. The National Coal Resources Data System was used as the repository for data, and its geographic information system software was applied to these data to eliminate restricted coal and quantify that which is available for mining. This methodology does not consider recovery factors or the economic factors that would be considered by a company before mining. Results of the pilot study indicate that, of the estimated original 986.5 million short tons of coal resources in Kentucky's Matewan quadrangle, 13 percent has been mined, 2 percent is restricted by land-use considerations, and 23 percent is restricted by technological considerations. This leaves an estimated 62 percent of the original resource, or approximately 612 million short tons available for mining. However, only 44 percent of this available coal (266 million short tons) will meet current Environmental Protection Agency new-source performance standards for sulfur emissions from electric generating plants in the United States. In addition, coal tonnage lost during mining and cleaning would further reduce the amount of coal actually arriving at the market.
This report focuses on the methodology for estimating growth in NR engine populations as used in the MOVES201X-NONROAD emission inventory model. MOVES NR growth rates start with base year engine populations and estimate growth in the populations of NR engines, while applying cons...
Lifetime Earnings Estimates for Men and Women in the United States: 1979.
ERIC Educational Resources Information Center
Burkhead, Dan L.
1983-01-01
This report presents estimates of expected lifetime earnings based on data collected in the March Current Population Survey by age, sex, and educational attainment for 1978, 1979, and 1980. The text describes the data tables and charts, methodology, and limitations of the data. The eight figures and five detailed tables present lifetime earning…
A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods
ERIC Educational Resources Information Center
Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich
2013-01-01
The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…
Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001
L. S. Heath; R. A. Birdsey; D. W. Williams
2002-01-01
The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...
Introducing a methodology for estimating duration of surgery in health services research.
Redelmeier, Donald A; Thiruchelvam, Deva; Daneman, Nick
2008-09-01
The duration of surgery is an indicator of the quality, risks, and efficiency of surgical procedures. We introduce a new methodology for assessing the duration of surgery based on anesthesiology billing records, along with reviewing its fundamental logic and limitations. The validity of the methodology was assessed through a population-based cohort of patients (n=480,986) undergoing elective operations in 246 Ontario hospitals with 1,084 anesthesiologists between April 1, 1992 and March 31, 2002 (10 years). The weaknesses of the methodology relate to missing data, self-serving exaggerations by providers, imprecisions from clinical diversity, upper limits due to accounting regulations, fluctuations from updates over the years, national differences in reimbursement schedules, and the general failings of claims-based analyses. The strengths of the methodology are in providing data that match clinical experiences, correspond to chart review, are consistent over time, can detect differences where differences would be anticipated, and might have implications for examining patient outcomes after long surgical times. We suggest that an understanding and application of large studies of surgical duration may help scientists explore selected questions concerning postoperative complications.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
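The moment-based idea can be illustrated with the simplest log-moment estimator: for a cascade of J steps with i.i.d. log-normal multipliers, the variance of the log of the product grows linearly in J, so the intermittency parameter is recoverable from the sample log-variance. This elementary estimator stands in for, and is much cruder than, the paper's GMM procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

J = 10            # number of cascade steps
lam2 = 0.2        # true intermittency parameter (variance of ln W per step)
n = 50_000        # sample size

# Multiplicative log-normal cascade: x = prod_j W_j with ln W_j ~ N(-lam2/2, lam2),
# so that E[W] = 1 and Var(ln x) = J * lam2.
ln_w = rng.normal(-lam2 / 2, np.sqrt(lam2), size=(n, J))
x = np.exp(ln_w.sum(axis=1))

lam2_hat = np.var(np.log(x), ddof=1) / J
print(f"true lambda^2 = {lam2}, log-moment estimate = {lam2_hat:.4f}")
```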
Temporal variability patterns in solar radiation estimations
NASA Astrophysics Data System (ADS)
Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.
2016-06-01
In this work, solar radiation estimates obtained from a satellite and from a numerical weather prediction model over mainland Spain have been compared. Similar comparisons have been carried out before, but in this case the methodology used is different: the temporal variability of both sources of estimation has been compared with the annual evolution of the radiation associated with the different study climate zones. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of the solar radiation estimates. Indeed, the degree of adjustment to these patterns at each point (assessed from maps of correlation) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with radiation measurements taken on the ground by pyranometers. For the study, radiation data from Satellite Application Facilities and data corresponding to the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.
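The pattern-extraction step can be sketched with an SVD-based principal component analysis: stack each site's monthly series as a row, extract the leading temporal pattern, and correlate every site with it. The synthetic seasonal data below stand in for the satellite and reanalysis estimates; everything is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(12)

# Synthetic monthly radiation climatologies for 100 sites: a seasonal cycle
# whose amplitude varies by (pseudo-)climate zone, plus noise.
amplitude = rng.uniform(0.5, 2.0, size=100)
data = amplitude[:, None] * np.sin(2 * np.pi * months / 12) \
       + rng.normal(0, 0.1, (100, 12))

# PCA via SVD on the anomaly matrix (sites x months).
anom = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pc1 = Vt[0]                        # leading temporal pattern of annual evolution

# Correlation of each site with the leading pattern -> a "correlation map",
# compared against each site's interquartile range of the annual cycle.
corr = np.array([np.corrcoef(row, pc1)[0, 1] for row in anom])
iqr = np.percentile(data, 75, axis=1) - np.percentile(data, 25, axis=1)
print("explained variance of PC1:", round(s[0] ** 2 / np.sum(s ** 2), 3))
print("corr(|pattern correlation|, IQR):",
      round(np.corrcoef(np.abs(corr), iqr)[0, 1], 3))
```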
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
Allen, Theodore T.; Zhenhuan...
U.S. Army TRADOC Analysis Center-Monterey. Topic model estimation via K-means, with subject matter expert refinement of the resulting topic models. Contract W9124N-15-P-0022.
St Charles, Frank Kelley; McAughey, John; Shepperd, Christopher J
2013-06-01
Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10⁻⁵ Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10⁻⁷ Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker.
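The exposure-to-dose conversion reduces to two multiplicative corrections. A sketch using the quantities named in the abstract; the vapor-pressure-based retention rule is a crude reading of the reported pattern (roughly constant high retention above 10⁻⁵ Pa), and the low-VP retention value and the exposure numbers are placeholders.

```python
def respiratory_retention(vapor_pressure_pa):
    """Crude placeholder rule inspired by the abstract's reported pattern."""
    if vapor_pressure_pa > 1e-5:
        return 0.88          # high-VP constituents: ~88% or greater retention
    return 0.60              # low-VP constituents vary with IV/time; placeholder

def dose(mouth_exposure_ug, mouth_spill_fraction, vapor_pressure_pa):
    """Dose = exposure reduced by mouth spill, times respiratory retention."""
    inhaled = mouth_exposure_ug * (1.0 - mouth_spill_fraction)
    return inhaled * respiratory_retention(vapor_pressure_pa)

# Example: 100 ug mouth exposure with the ~30% mean mouth spill reported.
print(f"dose = {dose(100.0, 0.30, vapor_pressure_pa=1e-3):.1f} ug")  # 61.6 ug
```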
Hyperspectral signature analysis of skin parameters
NASA Astrophysics Data System (ADS)
Vyas, Saurabh; Banerjee, Amit; Garza, Luis; Kang, Sewon; Burlina, Philippe
2013-02-01
The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment, and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel Equations. We use this model to map the skin parameters to their corresponding hyperspectral signature. We then use machine learning based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012, and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. In addition, we also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology can have potential use in non-invasive skin anomaly detection and for developing minimally invasive pre-screening tools.
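The regression step can be sketched with scikit-learn: simulate signatures from a forward model, then train an SVR to invert them for one parameter. The toy exponential "forward model" below is a stand-in, not the Kubelka-Munk/Fresnel model used in the paper, and all values are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)

def toy_forward_model(melanosome, wavelengths):
    """Stand-in for the Kubelka-Munk forward model: parameter -> spectrum."""
    return np.exp(-melanosome[:, None] * wavelengths / 500.0) + rng.normal(
        0, 0.01, (len(melanosome), len(wavelengths)))

wl = np.linspace(450, 950, 60)                 # 60 spectral bands
melanosome = rng.uniform(0.01, 0.4, 500)       # simulated parameter values
spectra = toy_forward_model(melanosome, wl)

# Inverse map: hyperspectral signature -> melanosome concentration.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(spectra[:400], melanosome[:400])

pred = model.predict(spectra[400:])
rmse = np.sqrt(np.mean((pred - melanosome[400:]) ** 2))
print(f"held-out RMSE: {rmse:.4f}")
```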
An experimental system for flood risk forecasting at global scale
NASA Astrophysics Data System (ADS)
Alfieri, L.; Dottori, F.; Kalas, M.; Lorini, V.; Bianchi, A.; Hirpa, F. A.; Feyen, L.; Salamon, P.
2016-12-01
Global flood forecasting and monitoring systems are nowadays a reality and are being applied by an increasing range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk-based forecasts, combining streamflow estimations with expected inundated areas and flood impacts. To this end, we have developed an experimental procedure for near-real-time flood mapping and impact assessment based on the daily forecasts issued by the Global Flood Awareness System (GloFAS). The methodology translates GloFAS streamflow forecasts into event-based flood hazard maps based on the predicted flow magnitude and the forecast lead time and a database of flood hazard maps with global coverage. Flood hazard maps are then combined with exposure and vulnerability information to derive flood risk. Impacts of the forecasted flood events are evaluated in terms of flood-prone areas, potential economic damage, and affected population, infrastructures and cities. To further increase the reliability of the proposed methodology we integrated model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification of impact forecasts. The preliminary tests provided good results and showed the potential of the developed real-time operational procedure in helping emergency response and management. In particular, the link with social media is crucial for improving the accuracy of impact predictions.
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
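The bounding idea can be shown on a single junction with interval arithmetic: with nodal demand known only within bounds and a tank flow measured within a sensor tolerance, the unmeasured pipe flow is bracketed rather than point-estimated. A deliberately tiny example with invented numbers, not the Cyprus network model.

```python
# One junction: inflow_pipe = demand + outflow_to_tank (mass balance).
# The demand is uncertain within an interval; the tank outflow is measured
# with a known sensor tolerance. Interval arithmetic brackets the pipe flow.

demand = (8.0, 11.0)          # L/s, interval bound on nodal demand (placeholder)
tank_meas, tol = 4.5, 0.2     # L/s measured tank outflow +/- sensor tolerance

flow_lo = demand[0] + (tank_meas - tol)
flow_hi = demand[1] + (tank_meas + tol)
print(f"pipe flow bounded in [{flow_lo:.1f}, {flow_hi:.1f}] L/s")

# Unaccounted-for-water test: a flow measurement outside these bounds cannot
# be explained by the modeled uncertainty alone.
measured_flow = 16.2
print("unaccounted-for water suspected:",
      not flow_lo <= measured_flow <= flow_hi)
```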
Laurinavicius, Arvydas; Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Dasevicius, Darius; Elie, Nicolas; Iqbal, Yasir; Bor, Catherine
2014-01-01
Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists' VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90) exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2 with overestimation at low, and underestimation at high ends of the scale was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. Misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Our experiments provide methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers.
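The measurement error correction step can be sketched as inverse regression: regress the criterion-standard counts on the DIA readout and use the fit to map new DIA values to corrected predictions. The synthetic data below build in a bidirectional bias like the one reported (overestimation at the low end, underestimation at the high end); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "truth" (stereology count) and a biased DIA readout that
# overestimates low values and underestimates high values.
ki67_count = rng.uniform(0, 40, 200)
dia = 5.0 + 0.8 * ki67_count + rng.normal(0, 2.0, 200)

# Inverse regression: fit truth as a function of the DIA readout.
slope, intercept = np.polyfit(dia, ki67_count, 1)
correct = lambda d: slope * d + intercept

# Effect on dichotomisation at a 15% cut-off.
cut = 15.0
raw_misclass = np.mean((dia >= cut) != (ki67_count >= cut))
cor_misclass = np.mean((correct(dia) >= cut) != (ki67_count >= cut))
print(f"misclassification raw {raw_misclass:.2%} -> corrected {cor_misclass:.2%}")
```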
Cook, Troy A.
2013-01-01
Estimated ultimate recoveries (EURs) are a key component in determining productivity of wells in continuous-type oil and gas reservoirs. EURs form the foundation of a well-performance-based assessment methodology initially developed by the U.S. Geological Survey (USGS; Schmoker, 1999). This methodology was formally reviewed by the American Association of Petroleum Geologists Committee on Resource Evaluation (Curtis and others, 2001). The EUR estimation methodology described in this paper was used in the 2013 USGS assessment of continuous oil resources in the Bakken and Three Forks Formations and incorporates uncertainties that would not normally be included in a basic decline-curve calculation. These uncertainties relate to (1) the mean time before failure of the entire well-production system (excluding economics), (2) the uncertainty of when (and if) a stable hyperbolic-decline profile is revealed in the production data, (3) the particular formation involved, (4) relations between initial production rates and a stable hyperbolic-decline profile, and (5) the final behavior of the decline extrapolation as production becomes more dependent on matrix storage.
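The decline-curve core of an EUR calculation: once a stable hyperbolic profile is identified, the rate follows q(t) = qi (1 + b Di t)^(-1/b) and EUR is cumulative production to an economic limit. The parameters below are placeholders, not Bakken assessment values; the USGS methodology layers the five uncertainty sources listed above on top of this basic calculation.

```python
import numpy as np

qi = 450.0     # initial rate, bbl/day (placeholder)
Di = 0.006     # initial decline rate, 1/day (placeholder)
b = 0.9        # hyperbolic exponent (placeholder)
q_limit = 5.0  # economic limit rate, bbl/day (placeholder)

t = np.arange(0.0, 40 * 365.0)                  # forecast horizon, days
q = qi * (1.0 + b * Di * t) ** (-1.0 / b)       # hyperbolic decline

# Truncate at the economic limit, then integrate rate to get EUR.
alive = q >= q_limit
eur_bbl = np.sum(q[alive])                       # daily rates -> bbl (dt = 1 day)
print(f"economic life ~ {alive.sum()/365:.1f} yr, "
      f"EUR ~ {eur_bbl/1e3:.0f} thousand bbl")
```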
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190
NASA Astrophysics Data System (ADS)
Lugo, J. M.; Oliva, A. I.
2017-02-01
The thermal effusivity of gold, aluminum, and copper thin films of nanometric thickness (20 nm to 200 nm) was investigated in terms of the films' thickness. The metallic thin films were deposited onto glass substrates by thermal evaporation, and the thermal effusivity was estimated from experimental values of the specific heat, thermal conductivity, and thermal diffusivity obtained at room conditions. These values are determined with a methodology based on the behavior of the thermal profiles of the films when electrical pulses of a few microseconds are applied at room conditions. For all the investigated materials, the thermal effusivity decreases with decreased thickness. The thermal effusivity values estimated by the presented methodology are consistent with other reported values obtained under vacuum conditions and with more elaborate methodologies.
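The estimation rests on a textbook identity: with conductivity k, density ρ, and specific heat c, the diffusivity is α = k/(ρc) and the effusivity is e = k/√α = √(kρc). A quick check with bulk gold handbook values (assumptions; the paper's thin films show lower, thickness-dependent values):

```python
import math

# Bulk gold handbook values (assumptions; thin films measure lower).
k = 317.0        # thermal conductivity, W/(m K)
rho = 19300.0    # density, kg/m^3
c = 129.0        # specific heat, J/(kg K)

alpha = k / (rho * c)                 # thermal diffusivity, m^2/s
e_from_alpha = k / math.sqrt(alpha)   # e = k / sqrt(alpha)
e_direct = math.sqrt(k * rho * c)     # e = sqrt(k * rho * c), same number

print(f"alpha = {alpha:.2e} m^2/s, effusivity = {e_direct:.0f} J/(m^2 K s^0.5)")
assert abs(e_from_alpha - e_direct) < 1e-6 * e_direct
```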
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
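The bound-narrowing loop can be sketched generically: sample parameter sets within the current bounds, score them against calibration data, and shrink the bounds around the fittest samples before resampling. This schematic uses a toy two-parameter model, not the SVM or SAC-SMA applications of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model and synthetic calibration data; true parameters to recover.
true_p = np.array([2.5, -1.2])
x = np.linspace(0, 1, 50)
obs = true_p[0] * x + true_p[1] * x**2 + rng.normal(0, 0.01, x.size)

def rmse(p):
    return np.sqrt(np.mean((p[0] * x + p[1] * x**2 - obs) ** 2))

lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])   # initial parameter bounds
for it in range(6):
    samples = rng.uniform(lo, hi, size=(200, 2))
    scores = np.array([rmse(p) for p in samples])
    best = samples[np.argsort(scores)[:20]]      # keep the 10% fittest samples
    lo, hi = best.min(axis=0), best.max(axis=0)  # shrink bounds around them
    print(f"iter {it}: bounds {np.round(lo, 3)} .. {np.round(hi, 3)}")

print("best estimate:", np.round((lo + hi) / 2, 3), "true:", true_p)
```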
Estimating economic losses from earthquakes using an empirical approach
Jaiswal, Kishor; Wald, David J.
2013-01-01
We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
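The loss computation can be outlined as a sum over shaking-intensity bins of exposed population times scaled per-capita GDP times an intensity-dependent loss ratio. The exposure counts, loss-ratio curve, and country factor below are invented placeholders, not PAGER calibration values.

```python
# Population exposed per ShakeMap intensity bin (placeholder counts).
exposure = {6: 800_000, 7: 250_000, 8: 60_000, 9: 8_000}

# Intensity-dependent mean loss ratio (placeholder curve, rises with shaking).
loss_ratio = {6: 0.001, 7: 0.01, 8: 0.06, 9: 0.2}

gdp_per_capita = 4_000.0   # USD, placeholder
alpha = 2.5                # country-specific factor scaling GDP to economic exposure

loss_usd = sum(
    pop * gdp_per_capita * alpha * loss_ratio[mmi] for mmi, pop in exposure.items()
)
print(f"estimated economic loss ~ ${loss_usd/1e6:.0f} M")
```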
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane
2003-09-01
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the two interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is used to make critical decisions.
A flight-test methodology for identification of an aerodynamic model for a V/STOL aircraft
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Mcnally, B. David
1988-01-01
Described is a flight-test methodology for developing a data base to be used to identify an aerodynamic model of a vertical and short takeoff and landing (V/STOL) fighter aircraft. The aircraft serves as a test bed at Ames for ongoing research in advanced V/STOL control and display concepts. The flight envelope to be modeled includes hover, transition to conventional flight and back to hover, STOL operation, and normal cruise. Although the aerodynamic model is highly nonlinear, it has been formulated to be linear in the parameters to be identified. Motivation for the flight-test methodology advocated in this paper is based on the choice of a linear least-squares method for model identification. The paper covers elements of the methodology from maneuver design to the completed data base. Major emphasis is placed on the use of state estimation with tracking data to ensure consistency among maneuver variables prior to their entry into the data base. The design and processing of a typical maneuver is illustrated.
NASA Astrophysics Data System (ADS)
Habib, Gazala; Venkataraman, Chandra; Shrivastava, Manish; Banerjee, Rangan; Stehr, J. W.; Dickerson, Russell R.
2004-09-01
The dominance of biofuel combustion emissions in the Indian region, and the inherently large uncertainty in biofuel use estimates based on cooking energy surveys, prompted the current work, which develops a new methodology for estimating biofuel consumption for cooking. This is based on food consumption statistics, and the specific energy for food cooking. Estimated biofuel consumption in India was 379 (247-584) Tg yr-1. New information on the user population of different biofuels was compiled at a state level, to derive the biofuel mix, which varied regionally and was 74:16:10%, respectively, of fuelwood, dung cake and crop waste, at a national level. Importantly, the uncertainty in biofuel use from quantitative error assessment using the new methodology is around 50%, giving a narrower bound than in previous works. From this new activity data and currently used black carbon emission factors, the black carbon (BC) emissions from biofuel combustion were estimated as 220 (65-760) Gg yr-1. The largest BC emissions were from fuelwood (75%), with lower contributions from dung cake (16%) and crop waste (9%). The uncertainty of 245% in the BC emissions estimate is now governed by the large spread in BC emission factors from biofuel combustion (122%), implying the need for reducing this uncertainty through measurements. Emission factors of SO2 from combustion of biofuels widely used in India were measured, and ranged from 0.03 to 0.08 g kg-1 for combustion of two wood species, 0.05 to 0.20 g kg-1 for 10 crop waste types, and 0.88 g kg-1 for dung cake, significantly lower than currently used emission factors for wood and crop waste. Estimated SO2 emissions from biofuels of 75 (36-160) Gg yr-1 were about a factor of 3 lower than those in recent studies, with a large contribution from dung cake (73%), followed by fuelwood (21%) and crop waste (6%).
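At its core, the bottom-up estimate above is a sum of fuel use times an emission factor for each fuel. The fragment below illustrates only that arithmetic: the fuel split follows the 74:16:10 mix of 379 Tg yr-1 reported above, but the black carbon emission factors are round placeholder values chosen for demonstration, not the study's inputs.

```python
# Bottom-up emissions: sum over fuels of (use x emission factor).
# Note that 1 Tg of fuel x 1 g/kg = 1 Gg of emissions, so no unit factors appear.
total_use_tg = 379.0                                        # reported biofuel use
shares = {"fuelwood": 0.74, "dung_cake": 0.16, "crop_waste": 0.10}
bc_ef_g_per_kg = {"fuelwood": 0.60, "dung_cake": 0.55, "crop_waste": 0.50}  # placeholders

bc_gg = sum(total_use_tg * shares[f] * bc_ef_g_per_kg[f] for f in shares)
print(round(bc_gg))  # about 220 Gg/yr with these illustrative factors
```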
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steer, Ian; Madore, Barry F.; Mazzarella, Joseph M.
Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics. Researchers use them to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006 the NASA/IPAC Extragalactic Database (NED) began making available a comprehensive compilation of redshift-independent extragalactic distance estimates. A decade later, this compendium of distances (NED-D) now contains more than 100,000 individual estimates based on primary and secondary indicators, available for more than 28,000 galaxies, and compiled from over 2000 references in the refereed astronomical literature. This paper describes the methodology, content, and use of NED-D, and addresses challenges to be overcome in compiling such distances. Currently, 75 different distance indicators are in use. We include a figure that facilitates comparison of the indicators with significant numbers of estimates in terms of the minimum, 25th percentile, median, 75th percentile, and maximum distances spanned. Brief descriptions of the indicators, including examples of their use in the database, are given in an appendix.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preferences-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalyzing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
NASA Astrophysics Data System (ADS)
Luque, Pablo; Mántaras, Daniel A.; Fidalgo, Eloy; Álvarez, Javier; Riva, Paolo; Girón, Pablo; Compadre, Diego; Ferran, Jordi
2013-12-01
The main objective of this work is to determine the limit of safe driving conditions by identifying the maximal friction coefficient in a real vehicle. The study focuses on finding a method to determine this limit before the skid is reached, which is valuable information in the context of traffic safety. Since it is not possible to measure the friction coefficient directly, it is estimated using appropriate tools in order to obtain the most accurate information. A real vehicle is instrumented to collect information on general kinematics and steering tie-rod forces. A real-time algorithm is developed to estimate forces and aligning torque in the tyres using an extended Kalman filter and neural network techniques. The methodology is based on determining the aligning torque; this variable allows evaluation of the behaviour of the tyre. It conveys useful information from the tyre-road contact and can be used to predict the maximal tyre grip and safety margin. The maximal grip coefficient is estimated according to a knowledge base, extracted from computer simulation of a highly detailed three-dimensional model, using Adams® software. The proposed methodology is validated and applied to real driving conditions, in which maximal grip and safety margin are properly estimated.
Vision based techniques for rotorcraft low altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Suorsa, Ray; Smith, Philip
1991-01-01
An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases since the magnitude of the disparity gets smaller, resulting in a low signal-to-noise ratio (SNR).
Lago, M. A.; Rúperez, M. J.; Martínez-Martínez, F.; Martínez-Sanchis, S.; Bakic, P. R.; Monserrat, C.
2015-01-01
This paper presents a novel methodology to estimate in vivo the elastic constants of a constitutive model proposed to characterize the mechanical behavior of the breast tissues. An iterative search algorithm based on genetic heuristics was constructed to estimate these parameters in vivo using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients was used to evaluate the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was around 95%, showing the good performance of the proposed methodology. This methodology can be easily extended to characterize the real biomechanical behavior of the breast tissues, which represents a significant novelty in the field of breast behavior simulation for applications such as surgical planning, surgical guidance, or cancer diagnosis. This reveals the impact and relevance of the presented work.
Assessment of undiscovered conventional oil and gas resources of Thailand
Schenk, Chris
2011-01-01
The U.S. Geological Survey estimated mean volumes of 1.6 billion barrels of undiscovered conventional oil and 17 trillion cubic feet of undiscovered conventional natural gas in three geologic provinces of Thailand using a geology-based methodology. Most of the undiscovered conventional oil and gas resource is estimated to be in the area known as offshore Thai Basin province.
Validation of vision-based range estimation algorithms using helicopter flight data
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1993-01-01
The objective of this research was to demonstrate the effectiveness of an optic flow method for passive range estimation using a Kalman-filter implementation with helicopter flight data. This paper is divided into the following areas: (1) ranging algorithm; (2) flight experiment; (3) analysis methodology; (4) results; and (5) concluding remarks. The discussion is presented in viewgraph format.
Estimating erosion risks associated with logging and forest roads in northwestern California
Raymond M. Rice; Jack Lewis
1991-01-01
Erosion resulting from logging and road building has long been a concern to forest managers and the general public. An objective methodology was developed to estimate erosion risk on forest roads and in harvest areas on private land in northwestern California. It was based on 260 plots sampled from the area harvested under 415 Timber Harvest Plans...
Capello, Manuela; Robert, Marianne; Soria, Marc; Potin, Gael; Itano, David; Holland, Kim; Deneubourg, Jean-Louis; Dagorn, Laurent
2015-01-01
The rapid expansion of the use of passive acoustic telemetry technologies has facilitated unprecedented opportunities for studying the behavior of marine organisms in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for the variety of timescales involved in the remote detection of tagged animals related to instrumental, environmental and behavioral events. In this paper we propose a methodological framework for estimating the site fidelity ("residence times") of acoustic tagged animals at different timescales, based on the survival analysis of continuous residence times recorded at multiple receivers. Our approach is validated through modeling and applied on two distinct datasets obtained from a small coastal pelagic species (bigeye scad, Selar crumenophthalmus) and a large, offshore pelagic species (yellowfin tuna, Thunnus albacares), which show very distinct spatial scales of behavior. The methodological framework proposed herein allows estimating the most appropriate temporal scale for processing passive acoustic telemetry data depending on the scientific question of interest. Our method provides residence times free of the bias inherent to environmental and instrumental noise that can be used to study the small scale behavior of acoustic tagged animals. At larger timescales, it can effectively identify residence times that encompass the diel behavioral excursions of fish out of the acoustic detection range. This study provides a systematic framework for the analysis of passive acoustic telemetry data that can be employed for the comparative study of different species and study sites. The same methodology can be used each time discrete records of animal detections of any nature are employed for estimating the site fidelity of an animal at different timescales.
NASA Technical Reports Server (NTRS)
York, P.; Labell, R. W.
1980-01-01
An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
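As a schematic of the component-buildup idea described above, the fragment below composes a regression-derived wing box weight with material and construction factors, discrete penalties, and secondary items. It is a toy illustration under stated assumptions; the function, factor names, and all numbers are hypothetical, not the report's equations or its 50-airplane data base.

```python
def wing_weight_sketch(box_weight_base, material_factor, construction_factor,
                       penalties, secondary_items):
    """Hypothetical component buildup: a regression-derived box weight,
    scaled by material/construction factors, plus discrete penalties
    (fuel, engines, gear, stores, fold/pivot) and secondary structure
    estimated from available design detail or defaults."""
    box = box_weight_base * material_factor * construction_factor
    return box + sum(penalties.values()) + sum(secondary_items.values())

# Toy usage (all values in lb, purely illustrative).
w = wing_weight_sketch(
    box_weight_base=4200.0, material_factor=0.92, construction_factor=1.05,
    penalties={"engines": 310.0, "landing_gear": 180.0, "fold": 95.0},
    secondary_items={"control_surfaces": 260.0, "secondary_structure": 340.0})
print(w)
```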
Cost benefits of advanced software: A review of methodology used at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Joglekar, Prafulla N.
1993-01-01
To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.
Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis
Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel
2013-01-01
This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow their outputs to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results.
Le Quere, C. [University of East Anglia, Norwich UK]; Moriarty, R. [University of East Anglia, Norwich UK]; Andrew, R. M. [Univ. of Oslo (Norway)]; Canadell, J. G. [Commonwealth Scientific and Industrial Research Organization (CSIRO) Oceans and Atmosphere, Canberra ACT (Australia)]; Sitch, S. [University of Exeter, Exeter UK]; Boden, T. A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States), Carbon Dioxide Information Analysis Center (CDIAC)]; et al.
2015-01-01
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates as well as consistency within and among components, alongside methodology and data limitations.
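The budget that these data sets and methods quantify balances anthropogenic sources against the reservoirs that absorb them. In the notation standard to these annual budget papers, the identity being closed (with any residual treated as a budget imbalance) can be written as:

```latex
% Global carbon budget identity: fossil-fuel and land-use-change emissions
% are partitioned into atmospheric growth and the ocean and land sinks.
\[
  E_{\mathrm{FF}} + E_{\mathrm{LUC}} = G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}}
\]
```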
Evaluating Payments for Environmental Services: Methodological Challenges
2016-01-01
Over the last fifteen years, Payments for Environmental Services (PES) schemes have become very popular environmental policy instruments, but the academic literature has begun to question their additionality. The literature attempts to estimate the causal effect of these programs by applying impact evaluation (IE) techniques. However, PES programs are complex instruments and IE methods cannot be directly applied without adjustments. Based on a systematic review of the literature, this article proposes a framework for the methodological process of designing an IE for PES schemes. It revises and discusses the methodological choices at each step of the process and proposes guidelines for practitioners.
Social psychology. Response to Comment on "Morality in everyday life".
Hofmann, Wilhelm; Wisneski, Daniel C; Brandt, Mark J; Skitka, Linda J
2015-05-15
Voelkle challenges our conclusions regarding the relationship between morality and momentary happiness/sense of purpose based on methodological concerns. We show that our main conclusions are not affected by this methodological critique and clarify that the discrepancies between our and Voelkle's effect size estimates can be reconciled by the realization that two different (but compatible) research questions are being asked.
Ortensi, Livia Elisa; Farina, Patrizia; Leye, Els
2018-01-12
Migration flows of women from Female Genital Mutilation/Cutting practicing countries have generated a need for data on women potentially affected by Female Genital Mutilation/Cutting. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making, and advancing the methodological debate on estimation methods. The estimates build on the most recent methodological developments in Female Genital Mutilation/Cutting direct and indirect estimation for non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the FGM-Prev survey, held in Italy in 2016. Prevalence for communities not involved in the FGM-Prev survey was estimated using the 'extrapolation-of-FGM/C-countries-prevalence-data' method with corrections according to the selection hypothesis. It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with Female Genital Mutilation/Cutting were present in Italy in 2016. We also estimated the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014-2016. Due to the long-established presence of female migrants from some practicing communities, Female Genital Mutilation/Cutting is emerging as an issue also among women aged 60 and over from selected communities. Female Genital Mutilation/Cutting is an additional source of concern for slightly more than 60% of women seeking asylum. Reliable estimates of Female Genital Mutilation/Cutting at the country level are important for evidence-based policy making and service planning. This study suggests that indirect estimation cannot fully replace direct estimation, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
On methods of estimating cosmological bulk flows
NASA Astrophysics Data System (ADS)
Nusser, Adi
2016-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for either of these definitions, which coincide only for the case of a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
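For a prior with vanishing correlations, the problem reduces to the standard maximum-likelihood fit of a dipole to the radial velocities, which is compact enough to sketch. The fragment below is a generic illustration of that textbook estimator (minimize the sum of (u_i - B . rhat_i)^2 / sigma_i^2 over galaxies), not the WF or CMV machinery discussed in the paper; the data are synthetic.

```python
import numpy as np

def ml_bulk_flow(rhat, u, sigma):
    """Maximum-likelihood bulk flow from radial peculiar velocities:
    the chi-square minimization reduces to a 3x3 linear solve."""
    w = 1.0 / sigma**2
    A = np.einsum("i,ij,ik->jk", w, rhat, rhat)   # 3x3 normal matrix
    c = np.einsum("i,i,ij->j", w, u, rhat)
    return np.linalg.solve(A, c)

# Toy usage: 500 galaxies with a true bulk flow of 300 km/s along x.
rng = np.random.default_rng(0)
n = 500
rhat = rng.normal(size=(n, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)
B_true = np.array([300.0, 0.0, 0.0])
sigma = np.full(n, 250.0)
u = rhat @ B_true + rng.normal(0.0, sigma)
print(ml_bulk_flow(rhat, u, sigma))  # approximately [300, 0, 0]
```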
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and leaves scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates.
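To make the "minimize an objective function" step concrete, the sketch below solves the far simpler isotropic, straight-ray special case: source coordinates, origin time, and a single wave speed are fit to sensor arrival times by least squares. It is a toy under those simplifying assumptions, not the paper's anisotropic wave-front-shape formulation; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def locate_source(sensors, t_arrival, x0):
    """Least-squares localization for the isotropic case: solve for
    source (x, y), origin time t0, and wave speed v from arrival times."""
    def objective(p):
        x, y, t0, v = p
        dist = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return np.sum((t_arrival - (t0 + dist / v)) ** 2)
    return minimize(objective, x0, method="Nelder-Mead").x

# Toy usage: four sensors on a plate, true source at (0.3, 0.2) m, v = 5 km/s.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true = np.array([0.3, 0.2])
t = np.hypot(*(sensors - true).T) / 5000.0
print(locate_source(sensors, t, x0=[0.5, 0.5, 0.0, 4000.0]))
```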
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Molinos-Senante, María; Hernández-Sancho, Francesc; Sala-Garrido, Ramón
2012-01-01
The concept of sustainability involves the integration of economic, environmental, and social aspects and this also applies in the field of wastewater treatment. Economic feasibility studies are a key tool for selecting the most appropriate option from a set of technological proposals. Moreover, these studies are needed to assess the viability of transferring new technologies from pilot-scale to full-scale. In traditional economic feasibility studies, the benefits that have no market price, such as environmental benefits, are not considered and are therefore underestimated. To overcome this limitation, we propose a new methodology to assess the economic viability of wastewater treatment technologies that considers internal and external impacts. The estimation of the costs is based on the use of cost functions. To quantify the environmental benefits from wastewater treatment, the distance function methodology is proposed to estimate the shadow price of each pollutant removed in the wastewater treatment. The application of this methodological approach by decision makers enables the calculation of the true costs and benefits associated with each alternative technology. The proposed methodology is presented as a useful tool to support decision making.
Fuzzy Set Methods for Object Recognition in Space Applications
NASA Technical Reports Server (NTRS)
Keller, James M. (Editor)
1992-01-01
Progress on the following four tasks is described: (1) fuzzy set based decision methodologies; (2) membership calculation; (3) clustering methods (including derivation of pose estimation parameters), and (4) acquisition of images and testing of algorithms.
How EIA Estimates Natural Gas Production
2004-01-01
The Energy Information Administration (EIA) publishes monthly and annual estimates of natural gas production in the United States. The estimates are based on data EIA collects from gas producing states and data collected by the U.S. Minerals Management Service (MMS) in the Department of the Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies to generate monthly production estimates, which are described in this document.
NASA Astrophysics Data System (ADS)
Gilani, H., Sr.; Ganguly, S.; Zhang, G.; Koju, U. A.; Murthy, M. S. R.; Nemani, R. R.; Manandhar, U.; Thapa, G. J.
2015-12-01
Nepal is a landlocked country with 39% forest cover of its total land area (147,181 km2). Under the Forest Carbon Partnership Facility (FCPF), implemented by the World Bank (WB), Nepal was chosen as one of four countries best suited to a results-based payment system for the Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+) scheme. At the national level, Landsat-based analysis shows that the forest area declined by 2% (1467 km2) from 1990 to 2000, but by only 0.12% (176 km2) from 2000 to 2010. A cost-effective monitoring and evaluation system for REDD+ requires a balanced approach of remote sensing and ground measurements. This paper provides, for Nepal, a cost-effective and operational 30 m Above Ground Biomass (AGB) estimation and mapping methodology using freely available satellite data integrated with field inventory. Leaf Area Index (LAI) was generated following the methodology proposed by Ganguly et al. (2012) using cloud-free Landsat-8 OLI images. To generate a tree canopy height map, a density scatter graph between the maximum heights estimated by the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud, and Land Elevation Satellite (ICESat) and the Landsat LAI nearest to the center coordinates of the GLAS shots shows a moderate but significant exponential correlation (Height = 31.211*LAI^0.4593, R2 = 0.33, RMSE = 13.25 m). In the field, 1124 well-distributed circular plots (750 m2 and 500 m2; a 0.001% sample of the forest cover) were measured and used to estimate AGB (ton/ha) with the equations proposed by Sharma et al. (1990) for all tree species of Nepal. A satisfactory linear relationship (AGB = 8.7018*Hmax - 101.24, R2 = 0.67, RMSE = 7.2 ton/ha) was achieved between maximum canopy height (Hmax) and AGB (ton/ha). This cost-effective and operational methodology is replicable over 5-10 years with minimal ground sampling through the integration of satellite images. The developed AGB map was used to produce optimum fuel wood scenarios using population and road accessibility datasets.
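Chaining the two reported regressions gives the per-pixel AGB estimate. The sketch below simply composes those published relations; the LAI values are illustrative, not data from the study.

```python
import numpy as np

def canopy_height_from_lai(lai):
    """Reported Landsat-LAI to GLAS maximum height relation (R^2 = 0.33)."""
    return 31.211 * np.power(lai, 0.4593)

def agb_from_height(h_max):
    """Reported height to AGB relation in ton/ha (R^2 = 0.67)."""
    return 8.7018 * h_max - 101.24

lai = np.array([1.5, 2.5, 3.5])   # illustrative Landsat-8 OLI LAI values
print(agb_from_height(canopy_height_from_lai(lai)))
```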
Mangen, Marie-Josée J.; Plass, Dietrich; Havelaar, Arie H.; Gibbons, Cheryl L.; Cassini, Alessandro; Mühlberger, Nikolai; van Lier, Alies; Haagsma, Juanita A.; Brooke, R. John; Lai, Taavi; de Waure, Chiara; Kramarz, Piotr; Kretzschmar, Mirjam E. E.
2013-01-01
In 2009, the European Centre for Disease Prevention and Control initiated the ‘Burden of Communicable Diseases in Europe (BCoDE)’ project to generate evidence-based and comparable burden-of-disease estimates of infectious diseases in Europe. The burden-of-disease metric used was the Disability-Adjusted Life Year (DALY), composed of years of life lost due to premature death (YLL) and due to disability (YLD). To better represent infectious diseases, a pathogen-based approach was used linking incident cases to sequelae through outcome trees. Health outcomes were included if an evidence-based causal relationship between infection and outcome was established. Life expectancy and disability weights were taken from the Global Burden of Disease Study and alternative studies. Disease progression parameters were based on literature. Country-specific incidence was based on surveillance data corrected for underestimation. Non-typhoidal Salmonella spp. and Campylobacter spp. were used for illustration. Using the incidence- and pathogen-based DALY approach the total burden for Salmonella spp. and Campylobacter spp. was estimated at 730 DALYs and at 1,780 DALYs per year in the Netherlands (average of 2005–2007). Sequelae accounted for 56% and 82% of the total burden of Salmonella spp. and Campylobacter spp., respectively. The incidence- and pathogen-based DALY methodology allows in the case of infectious diseases a more comprehensive calculation of the disease burden as subsequent sequelae are fully taken into account. Not considering subsequent sequelae would strongly underestimate the burden of infectious diseases. Estimates can be used to support prioritisation and comparison of infectious diseases and other health conditions, both within a country and between countries.
Counting the cost: estimating the economic benefit of pedophile treatment programs.
Shanahan, M; Donato, R
2001-04-01
The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs, incorporating both the tangible and intangible costs of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that impact the cost-benefit analysis of pedophile treatment programs, it is found that the potential economic costs of child sexual abuse are substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.
Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Tao, Y.
2016-12-01
Satellite-based precipitation estimation products have the advantage of high resolution and global coverage. However, they still suffer from insufficient accuracy. Accurate estimation of precipitation from satellite data hinges on two aspects: sufficient precipitation information in the satellite observations and proper methodologies to extract that information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for rain/no-rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. We then incorporate water vapor data (Model DL-IR + WV) to further improve prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS), is used as a reference to compare the performance of both models in winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) are 21.60% and 43.66% over the verification periods for the DL-IR only and DL-IR + WV models compared to PERSIANN-CCS, respectively. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
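The Critical Success Index used for the comparison has a simple contingency-table form. A minimal implementation for binary rain/no-rain fields is sketched below.

```python
import numpy as np

def critical_success_index(pred, obs):
    """CSI = hits / (hits + misses + false alarms) for binary
    rain/no-rain fields, where 1 = rain."""
    pred, obs = np.asarray(pred, bool), np.asarray(obs, bool)
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false_alarms = np.sum(pred & ~obs)
    return hits / (hits + misses + false_alarms)

print(critical_success_index([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # 0.5
```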
Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; Liu, Jianhong; Mou, Minjie
2013-01-01
Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this study, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. This methodology has potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Van Til, Harrison J; Wood, Eric W
A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
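The rates-lookup idea is easy to sketch: bin historical driving segments by category, average the measured energy rate per bin, then score a proposed trip with the same binning. The fragment below is a bare-bones illustration using only speed and gradient (the paper's method also uses road network geometry); all variable names and bin edges are hypothetical.

```python
import numpy as np

SPEED_EDGES = np.array([25.0, 45.0, 65.0])   # mph -> 4 speed classes
GRADE_EDGES = np.array([-2.0, 2.0])          # percent -> 3 grade classes

def bin_ids(speed, grade):
    return np.digitize(speed, SPEED_EDGES), np.digitize(grade, GRADE_EDGES)

def build_rate_table(speed, grade, kwh_per_mi):
    """Mean energy rate (kWh/mi) for each (speed, grade) category."""
    s, g = bin_ids(speed, grade)
    table, counts = np.zeros((4, 3)), np.zeros((4, 3))
    np.add.at(table, (s, g), kwh_per_mi)   # accumulate rates per bin
    np.add.at(counts, (s, g), 1.0)
    return table / np.maximum(counts, 1.0)  # empty bins stay at 0

def estimate_trip_energy(rates, speed, grade, miles):
    s, g = bin_ids(speed, grade)
    return float(np.sum(rates[s, g] * miles))

# Toy usage: four historical segments, then a two-leg proposed trip.
sp, gr = np.array([30.0, 30.0, 60.0, 70.0]), np.array([0.0, 1.0, 0.0, -3.0])
rate = np.array([0.28, 0.30, 0.33, 0.25])   # measured kWh/mi
table = build_rate_table(sp, gr, rate)
print(estimate_trip_energy(table, np.array([30.0, 70.0]),
                           np.array([0.5, -3.0]), np.array([10.0, 5.0])))
```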
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
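The analog ensemble idea is easy to state: for each target forecast, find the k most similar historical forecasts from the coarse model and use the corresponding observations as an ensemble, whose mean gives a deterministic estimate and whose spread gives probabilistic information. The sketch below is a bare-bones generic version on synthetic data, not the configuration used in the paper.

```python
import numpy as np

def analog_ensemble(hist_pred, hist_obs, target_pred, k=20):
    """For each target forecast, return observations at the k historical
    times whose predictors are closest in Euclidean distance."""
    d = np.linalg.norm(hist_pred[None, :, :] - target_pred[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]   # k closest analogs per target
    return hist_obs[idx]                 # shape (n_targets, k)

# Toy usage with 2 predictors (e.g., coarse wind speed and direction).
rng = np.random.default_rng(1)
hist_pred = rng.normal(size=(5000, 2))
hist_obs = hist_pred[:, 0] * 2.0 + rng.normal(0.0, 0.3, 5000)  # synthetic truth
target_pred = rng.normal(size=(10, 2))
ens = analog_ensemble(hist_pred, hist_obs, target_pred)
print(ens.mean(axis=1), ens.std(axis=1))  # deterministic mean and spread
```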
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
A Novel Methodology to Estimate the Treatment Effect in Presence of Highly Variable Placebo Response
Gomeni, Roberto; Goyal, Navin; Bressolle, Françoise; Fava, Maurizio
2015-01-01
One of the main reasons for the inefficiency of multicenter randomized clinical trials (RCTs) in depression is the excessively high level of placebo response. The aim of this work was to propose a novel methodology to analyze RCTs based on the assumption that centers with high placebo response are less informative than the other centers for estimating the 'true' treatment effect (TE). A linear mixed-effect modeling approach for repeated measures (MMRM) was used as a reference approach. The new method for estimating TE was based on a nonlinear longitudinal modeling of clinical scores (NLMMRM). NLMMRM estimates TE by associating a weighting factor to the data collected in each center. The weight was defined by the posterior probability of detecting a clinically relevant difference between active treatment and placebo at that center. Data from five RCTs in depression were used to compare the performance of MMRM with NLMMRM. The results of the analyses showed an average improvement of ~15% in the TE estimated with NLMMRM when the center effect was included in the analyses. Opposite results were observed with MMRM: the TE estimate was reduced by ~4% when the center effect was considered as a covariate in the analysis. The novel NLMMRM approach provides a tool for controlling the confounding effect of high placebo response, to increase signal detection and to provide a more reliable estimate of the 'true' TE by controlling false negative results associated with excessively high placebo response.
Risk assessment of groundwater level variability using variable Kriging methods
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in a Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems in groundwater resource availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal, and in combination with the Kriging methodologies it provides the most accurate cross-validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the risk that certain locations exhibit with respect to a predefined minimum value set for the sustainability of the basin's groundwater resources. Acknowledgement: the work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone' (2013-2015).
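For reference, the first of the three methodologies reduces to the textbook Ordinary Kriging system. The sketch below is a generic single-point implementation with an exponential covariance (the nu = 1/2 Matérn); the coordinates and heads are illustrative, not the basin's data.

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x0, cov):
    """Ordinary Kriging at one location x0: solve the system
    [[C, 1], [1', 0]] [w; mu] = [c0; 1] from a covariance model,
    then predict z0 = w . z_obs (weights sum to 1 by construction)."""
    n = len(z_obs)
    d = np.linalg.norm(x_obs[:, None, :] - x_obs[None, :, :], axis=2)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[-1, -1] = 0.0
    rhs = np.append(cov(np.linalg.norm(x_obs - x0, axis=1)), 1.0)
    w = np.linalg.solve(K, rhs)[:n]
    return float(w @ z_obs)

# Toy usage: exponential covariance (sill 1, range 500 m), heads in meters.
cov = lambda h: np.exp(-h / 500.0)
x_obs = np.array([[0.0, 0.0], [400.0, 100.0], [150.0, 600.0], [700.0, 700.0]])
z_obs = np.array([12.3, 11.8, 13.1, 10.9])
print(ordinary_kriging(x_obs, z_obs, np.array([300.0, 300.0]), cov))
```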
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
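The two-source idea can be illustrated with the Lincoln-Petersen step the authors describe: hair-snag captures as the first session and rub-tree captures as the recapture session. The fragment below uses Chapman's bias-corrected form of that estimator with made-up counts; it is an illustration of the general estimator, not the study's Huggins-Pledger models.

```python
def lincoln_petersen_chapman(n1, n2, m2):
    """Chapman's small-sample form of the Lincoln-Petersen abundance
    estimator: n1 marked in session 1, n2 caught in session 2, m2 caught
    in both."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Toy usage: 45 bears at hair snags, 30 at rub trees, 12 detected by both.
print(lincoln_petersen_chapman(45, 30, 12))  # ~108.7 bears
```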
McAughey, John; Shepperd, Christopher J.
2013-01-01
Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10−5 Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10−7 Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker. PMID:23742081
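The core exposure-to-dose conversion implied by the abstract is multiplicative. The sketch below assumes dose = mouth exposure × (1 − MS) × RR, with hypothetical numbers, and omits the vapor-pressure-based estimation of RR described in the paper.

def constituent_dose(mouth_exposure_ug, mouth_spill_frac, respiratory_retention):
    # Convert a per-cigarette mouth exposure to an inhaled-dose estimate:
    # reduce exposure by the fraction spilled from the mouth (MS), then
    # scale by the fraction retained during the inhalation cycle (RR)
    return mouth_exposure_ug * (1.0 - mouth_spill_frac) * respiratory_retention

# Hypothetical values: 1200 ug mouth exposure, 30% mouth spill, 95% retention
print(constituent_dose(1200.0, 0.30, 0.95), "ug")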
Development of a Probabilistic Assessment Methodology for Evaluation of Carbon Dioxide Storage
Burruss, Robert A.; Brennan, Sean T.; Freeman, P.A.; Merrill, Matthew D.; Ruppert, Leslie F.; Becker, Mark F.; Herkelrath, William N.; Kharaka, Yousif K.; Neuzil, Christopher E.; Swanson, Sharon M.; Cook, Troy A.; Klett, Timothy R.; Nelson, Philip H.; Schenk, Christopher J.
2009-01-01
This report describes a probabilistic assessment methodology developed by the U.S. Geological Survey (USGS) for evaluation of the resource potential for storage of carbon dioxide (CO2) in the subsurface of the United States as authorized by the Energy Independence and Security Act (Public Law 110-140, 2007). The methodology is based on USGS assessment methodologies for oil and gas resources created and refined over the last 30 years. The resource that is evaluated is the volume of pore space in the subsurface in the depth range of 3,000 to 13,000 feet that can be described within a geologically defined storage assessment unit consisting of a storage formation and an enclosing seal formation. Storage assessment units are divided into physical traps (PTs), which in most cases are oil and gas reservoirs, and the surrounding saline formation (SF), which encompasses the remainder of the storage formation. The storage resource is determined separately for these two types of storage. Monte Carlo simulation methods are used to calculate a distribution of the potential storage size for individual PTs and the SF. To estimate the aggregate storage resource of all PTs, a second Monte Carlo simulation step is used to sample the size and number of PTs. The probability of successful storage for individual PTs or the entire SF, defined in this methodology by the likelihood that the amount of CO2 stored will be greater than a prescribed minimum, is based on an estimate of the probability of containment using present-day geologic knowledge. The report concludes with a brief discussion of needed research data that could be used to refine assessment methodologies for CO2 sequestration.
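A toy version of the two-stage Monte Carlo aggregation described above might look as follows; the distributions, their parameters, and the range for the number of physical traps are invented for illustration and are not the USGS inputs.

import numpy as np

rng = np.random.default_rng(1)
n_iter = 10_000
totals = np.empty(n_iter)

for i in range(n_iter):
    # Second simulation step: sample how many physical traps (PTs) exist
    n_pt = rng.integers(20, 60)
    # First step: sample a storage size (Mt CO2) for each PT; lognormal for skew
    pt_sizes = rng.lognormal(mean=1.0, sigma=0.8, size=n_pt)
    # Saline formation (SF) storage sampled once per iteration
    sf_size = rng.lognormal(mean=4.0, sigma=0.5)
    totals[i] = pt_sizes.sum() + sf_size

print(np.percentile(totals, [5, 50, 95]))  # P5/P50/P95 of aggregate storage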
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true values of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements: the mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw-measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE logistic regression, we are able to classify patients and instances as experiencing significant OAG progression more accurately. A Kalman filter approach to estimating the true values of OAG biomarkers yielded data input which improved the accuracy of a logistic regression classification model compared with a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
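A minimal local-level Kalman filter illustrates the denoising step; it is not the CIGTS-parameterized model, and the process/measurement noise values and biomarker readings below are placeholders.

import numpy as np

def kalman_smooth_series(y, q=0.05, r=1.0):
    # Local-level Kalman filter for a noisy biomarker series:
    # state x_t = x_{t-1} + w (process noise q), observation y_t = x_t + v (noise r)
    x, p = y[0], 1.0          # initial state estimate and its variance
    out = [x]
    for obs in y[1:]:
        p = p + q             # predict: variance grows by the process noise
        k = p / (p + r)       # Kalman gain
        x = x + k * (obs - x) # update with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Hypothetical visual-field mean deviation readings over successive visits
y = np.array([-2.1, -2.9, -2.4, -3.5, -3.1, -4.0])
print(kalman_smooth_series(y).round(2))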
Verification of Agricultural Methane Emission Inventories
NASA Astrophysics Data System (ADS)
Desjardins, R. L.; Pattey, E.; Worth, D. E.; VanderZaag, A.; Mauder, M.; Srinivasan, R.; Worthy, D.; Sweeney, C.; Metzger, S.
2017-12-01
It is estimated that agriculture contributes more than 40% of anthropogenic methane (CH4) emissions in North America. However, these estimates, which are either based on the Intergovernmental Panel on Climate Change (IPCC) methodology or inverse modeling techniques, are poorly validated due to the challenges of separating interspersed CH4 sources within agroecosystems. A flux aircraft, instrumented with a fast-response Picarro CH4 analyzer for the eddy covariance (EC) technique and a sampling system for the relaxed eddy accumulation (REA) technique, was flown at an altitude of about 150 m along several 20-km transects over an agricultural region in Eastern Canada. For all flight days, the top-down CH4 flux density measurements were compared to the footprint-adjusted bottom-up estimates based on an IPCC Tier II methodology. Information on the animal population, land use type and atmospheric and surface variables were available for each transect. Top-down and bottom-up estimates of CH4 emissions were found to be poorly correlated, and wetlands were the most frequent confounding source of CH4; however, there were other sources such as waste treatment plants and biodigesters. Spatially resolved wavelet covariance estimates of CH4 emissions helped identify the contribution of wetlands to the overall CH4 flux, and the dependence of these emissions on temperature. When wetland contribution in the flux footprint was minimized, top-down and bottom-up estimates agreed to within measurement error. This research demonstrates that although existing aircraft-based technology can be used to verify regional (~100 km²) agricultural CH4 emissions, it remains challenging due to diverse sources of CH4 present in many regions. The use of wavelet covariance to generate spatially resolved flux estimates was found to be the best way to separate interspersed sources of CH4.
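The eddy-covariance principle behind the aircraft measurements is the mean product of the fluctuations of vertical wind and concentration. The sketch below shows only that core step on synthetic data; it omits detrending, density (WPL) corrections, and unit conversion that real airborne flux processing requires.

import numpy as np

def eddy_covariance_flux(w, c):
    # Eddy-covariance flux: mean product of the fluctuations of vertical
    # wind speed w (m/s) and CH4 mixing ratio c, after removing the means
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)

# Hypothetical 10 Hz data over one transect segment
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, 6000)                  # vertical wind (m/s)
c = 1.9 + 0.02 * w + rng.normal(0, 0.01, 6000)  # CH4 (ppm), correlated with w
print(eddy_covariance_flux(w, c))               # positive value => upward CH4 flux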
Wave-Based Algorithms and Bounds for Target Support Estimation
2015-05-15
vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative... further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of
Methodology for National Water Savings Model and Spreadsheet Tool—Outdoor Water Use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Alison, A; Chen, Yuting; Dunham, Camilla
This report describes the method Lawrence Berkeley National Laboratory (LBNL) developed to estimate national impacts of the U.S. Environmental Protection Agency’s (EPA’s) WaterSense labeling program for weather-based irrigation controllers (WBIC). Estimated impacts include the national water savings attributable to the program and the net present value of the lifetime water savings for consumers of irrigation controllers.
Assessment of undiscovered oil and gas resources of the North Sakhalin Basin Province, Russia, 2011
Klett, T.R.; Schenk, Christopher J.; Wandrey, Craig J.; Charpentier, Ronald R.; Brownfield, Michael E.; Pitman, Janet K.; Pollastro, Richard M.; Cook, Troy A.; Tennyson, Marilyn E.
2011-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated volumes of undiscovered, technically recoverable, conventional petroleum resources for the North Sakhalin Basin Province of Russia. The mean volumes were estimated at 5.3 billion barrels of crude oil, 43.8 trillion cubic feet of natural gas, and 0.8 billion barrels of natural gas liquids.
Low-Temperature Hydrothermal Resource Potential Estimate
Katherine Young
2016-06-30
Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potential of soft computing techniques is evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation of three methodologies is performed: adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly). Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the higher accuracies are achieved by models (5), which use Tmax − Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m², 2.0716 MJ/m², and 0.9380, respectively. The results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
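A sketch of the SVR-rbf variant using scikit-learn is given below. The synthetic data loosely follow a Hargreaves-type dependence on Tmax − Tmin; all values and hyperparameters are placeholders rather than the paper's Iranian data or tuned settings.

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical daily data: Tmax, Tmin (deg C); target DHGSR (MJ/m2)
rng = np.random.default_rng(42)
tmax = rng.uniform(15, 40, 300)
tmin = tmax - rng.uniform(5, 15, 300)
dhgsr = 0.16 * np.sqrt(tmax - tmin) * 25 + rng.normal(0, 1.5, 300)

X = np.column_stack([tmax - tmin, tmax])   # input combination (5): Tmax-Tmin, Tmax
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:250], dhgsr[:250])            # train on the first 250 days
pred = model.predict(X[250:])              # predict the held-out 50 days
rmse = np.sqrt(np.mean((pred - dhgsr[250:]) ** 2))
print(f"RMSE = {rmse:.2f} MJ/m2")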
2009-12-01
Balanced Scorecard (BSC); CAPM, Capital Asset Pricing Model; DIS, Defense Information System; DoD, Department of Defense. ... The Performance Measurement Tool (PMT) is the Balanced Scorecard (BSC), based on critical success factors and key performance indicators. The MND has referred to Jung's... authors can replicate the methodology for multiple projects to generate a portfolio of projects. Similar to the Capital Asset Pricing Model (CAPM) or
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) models…
Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient
NASA Astrophysics Data System (ADS)
Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.
2018-04-01
A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory studies were carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.
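The combined and expanded uncertainty calculation presumably follows the standard GUM propagation. A minimal sketch, assuming uncorrelated input quantities and illustrative sensitivity coefficients, follows; none of the numbers are from the paper.

import numpy as np

def combined_standard_uncertainty(sensitivities, uncertainties):
    # GUM-style combination for uncorrelated inputs:
    # u_c = sqrt(sum((c_i * u_i)^2)), with sensitivity coefficients c_i
    c = np.asarray(sensitivities)
    u = np.asarray(uncertainties)
    return float(np.sqrt(np.sum((c * u) ** 2)))

# Hypothetical inputs for a constant-head filtration test: head, time, length
u_c = combined_standard_uncertainty([0.8, -0.5, 1.2], [0.02, 0.01, 0.015])
U = 2.0 * u_c   # expanded uncertainty with coverage factor k = 2 (approx. 95%)
print(u_c, U)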
An activity-based methodology for operations cost analysis
NASA Technical Reports Server (NTRS)
Korsmeyer, David; Bilby, Curt; Frizzell, R. A.
1991-01-01
This report describes an activity-based cost estimation method, proposed for the Space Exploration Initiative (SEI), as an alternative to NASA's traditional mass-based cost estimation method. A case study demonstrates how the activity-based cost estimation technique can be used to identify the operations that have a significant impact on costs over the life cycle of the SEI. The case study yielded an operations cost of $101 billion for the 20-year span of the lunar surface operations for the Option 5a program architecture. In addition, the results indicated that the support and training costs for the missions were the greatest contributors to the annual cost estimates. A cost-sensitivity analysis of the cultural and architectural drivers determined that the length of training and the amount of support associated with the ground support personnel for mission activities are the most significant cost contributors.
The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.
ERIC Educational Resources Information Center
Samers, Bernard N.; And Others
The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…
NASA Astrophysics Data System (ADS)
Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.
2009-07-01
Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be automatically obtained at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application on a case study using six rainfall events. This methodology is applied to three different alternatives to incorporate the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of radar fields is analyzed. Results indicate that kriging with external drift (KED), in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields that agree well with the rain gauge observations and that reproduce the spatial tendencies observed in the radar rainfall fields most accurately among the alternatives analyzed.
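The non-parametric correlogram step can be reproduced with the Wiener-Khinchin theorem. The sketch below computes a circular 2-D autocorrelation by FFT on a synthetic field; it ignores the zero-padding and anisotropy handling an operational implementation would need.

import numpy as np

def correlogram_2d(field):
    # Non-parametric 2-D correlogram of a rainfall field via the
    # Wiener-Khinchin theorem: inverse FFT of the power spectrum
    f = field - field.mean()
    spec = np.abs(np.fft.fft2(f)) ** 2          # power spectrum
    corr = np.real(np.fft.ifft2(spec))          # autocovariance (circular)
    corr /= corr[0, 0]                          # normalize to correlation
    return np.fft.fftshift(corr)                # put zero lag at the center

# Hypothetical rain-rate field (mm/h) on a 64x64 radar grid
rng = np.random.default_rng(3)
field = np.abs(rng.normal(2.0, 1.0, (64, 64)))
rho = correlogram_2d(field)
print(rho.shape, rho[32, 32])                   # center (zero-lag) value = 1.0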
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
Campioli, M; Malhi, Y; Vicca, S; Luyssaert, S; Papale, D; Peñuelas, J; Reichstein, M; Migliavacca, M; Arain, M A; Janssens, I A
2016-12-14
The eddy-covariance (EC) micro-meteorological technique and the ecology-based biometric methods (BM) are the primary methodologies to quantify CO2 exchange between terrestrial ecosystems and the atmosphere (net ecosystem production, NEP) and its two components, ecosystem respiration and gross primary production. Here we show that EC and BM provide different estimates of NEP, but comparable ecosystem respiration and gross primary production for forest ecosystems globally. Discrepancies between methods are not related to environmental or stand variables, but are consistently more pronounced for boreal forests where carbon fluxes are smaller. BM estimates are prone to underestimation of net primary production and overestimation of leaf respiration. EC biases are not apparent across sites, suggesting the effectiveness of standard post-processing procedures. Our results increase confidence in EC, show in which conditions EC and BM estimates can be integrated, and which methodological aspects can improve the convergence between EC and BM.
NASA Astrophysics Data System (ADS)
Klepikova, Maria V.; Le Borgne, Tanguy; Bour, Olivier; Davy, Philippe
2011-09-01
Temperature profiles in the subsurface are known to be sensitive to groundwater flow. Here we show that they are also strongly related to vertical flow in the boreholes themselves. Based on a numerical model of flow and heat transfer at the borehole scale, we propose a method to invert temperature measurements to derive borehole flow velocities. This method is applied to an experimental site in fractured crystalline rocks. Vertical flow velocities deduced from the inversion of temperature measurements are compared with direct heat-pulse flowmeter measurements, showing good agreement over two orders of magnitude. Applying this methodology under ambient, single and cross-borehole pumping conditions allows us to estimate fracture hydraulic head and local transmissivity, as well as inter-borehole fracture connectivity. Thus, these results provide new insights on how to include temperature profiles in inverse problems for estimating hydraulic fracture properties.
Ye, Liu; Ni, Bing-Jie; Law, Yingyu; Byers, Craig; Yuan, Zhiguo
2014-01-01
The quantification of nitrous oxide (N2O) emissions from open-surface wastewater treatment systems with surface aerators is difficult, as emissions from the surface aerator zone cannot be easily captured by floating hoods. In this study, we propose and demonstrate a novel methodology to estimate N2O emissions from such systems through determination of the N2O transfer coefficient (kLa) induced by surface aerators, based on an oxygen balance for the entire system. The methodology is demonstrated through its application to a full-scale open oxidation ditch wastewater treatment plant with surface aerators. The kLa profile estimated from a month-long oxygen-balance measurement campaign, intensive monitoring of dissolved N2O profiles along the oxidation ditch over a period of four days, and mathematical modelling together enabled determination of the N2O emission factor for this treatment plant (0.52 ± 0.16%). The majority of the N2O emissions were found to occur in the surface aerator zone and would be missed if the gas hood method were applied alone. Copyright © 2013 Elsevier Ltd. All rights reserved.
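The transfer relation at the heart of the method is a simple product. The sketch below assumes an emission rate E = kLa (C_liq − C_eq) V with invented aerator-zone values; the paper itself infers kLa from a system-wide oxygen balance and uses dynamic modelling, which is not reproduced here.

def n2o_emission_rate(k_la, c_liquid, c_eq, volume):
    # Gas transfer rate (g/h): E = kLa * (C_liq - C_eq) * V, where kLa (1/h)
    # is the volumetric transfer coefficient inferred from the oxygen balance
    return k_la * (c_liquid - c_eq) * volume

# Hypothetical values: kLa = 6 1/h, dissolved N2O 0.04 g/m3,
# equilibrium concentration ~0 g/m3, aerator-zone volume 800 m3
print(n2o_emission_rate(6.0, 0.04, 0.0, 800.0), "g N2O per hour")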
Mansutti, Irene; Saiani, Luisa; Grassetti, Luca; Palese, Alvisa
2017-03-01
The clinical learning environment is fundamental to nursing education paths, capable of affecting learning processes and outcomes. Several instruments have been developed in nursing education to evaluate the quality of clinical learning environments; however, no systematic review of the psychometric properties and methodological quality of these studies has been performed to date. The aims of the study were: 1) to identify validated instruments evaluating the clinical learning environments in nursing education; 2) to evaluate critically the methodological quality of the psychometric property estimation used; and 3) to compare psychometric properties across the instruments available. A systematic review of the literature (using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines) and an evaluation of the methodological quality of psychometric properties (using the COnsensus-based Standards for the selection of health Measurement INstruments guidelines) were performed. The Medline and CINAHL databases were searched. Eligible studies were those that satisfied the following criteria: a) validation studies of instruments evaluating the quality of clinical learning environments; b) in nursing education; c) published in English or Italian; d) before April 2016. The included studies were evaluated for the methodological quality of the psychometric properties measured and then compared in terms of both the psychometric properties and the methodological quality of the processes used. The search strategy yielded a total of 26 studies and eight clinical learning environment evaluation instruments. A variety of psychometric properties have been estimated for each instrument, with differing qualities in the methodology used. Concept and construct validity were poorly assessed in terms of their significance and rarely judged by the target population (nursing students). Some properties were rarely considered (e.g., reliability, measurement error, criterion validity), whereas others were frequently estimated but using different coefficients and statistical analyses (e.g., internal consistency, structural validity), thus rendering comparison across instruments difficult. Moreover, the methodological quality adopted in the property assessments was poor or fair in most studies, compromising the goodness of the psychometric values estimated. Clinical learning placements represent a key strategy in educating the future nursing workforce: instruments evaluating the quality of the settings, as well as their capacity to promote significant learning, are strongly recommended. Studies estimating psychometric properties with higher-quality research methodologies are needed in order to support nursing educators in the process of clinical placement accreditation and quality improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.
HIV, HCV, HBV, and syphilis among transgender women from Brazil
Bastos, Francisco I.; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; dos Santos, Camila Mattos; Malta, Monica Siqueira
2018-01-01
Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand-new strategy to adjust data generated by RDS, partially based on Bayesian statistics, called, for the sake of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what we have called a naïve estimator, or rough estimates. PMID:29794601
Methodology to Estimate the Quantity, Composition, and ...
This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estimates have been limited to building-related CDD, a goal in the development of this methodology was to use data originating from CDD facilities and contractors to better capture the current picture of total CDD management, including materials from roads, bridges and infrastructure.
Using scan statistics for congenital anomalies surveillance: the EUROCAT methodology.
Teljeur, Conor; Kelly, Alan; Loane, Maria; Densem, James; Dolk, Helen
2015-11-01
Scan statistics have been used extensively to identify temporal clusters of health events. We describe the temporal cluster detection methodology adopted by the EUROCAT (European Surveillance of Congenital Anomalies) monitoring system. Since 2001, EUROCAT has implemented a variable window width scan statistic for detecting unusual temporal aggregations of congenital anomaly cases. The scan windows are based on numbers of cases rather than being defined by time. The methodology is embedded in the EUROCAT Central Database for annual application to centrally held registry data, and it was incrementally adapted to improve its utility and to address statistical issues. Simulation exercises were used to determine the power of the methodology to identify periods of raised risk (of 1-18 months). In order to operationalize the scan methodology, a number of adaptations were needed, including: using estimated date of conception as the unit of time; deciding the maximum length (in time) and recency of clusters of interest; reporting multiple and overlapping significant clusters; replacing the Monte Carlo simulation with a lookup table to reduce computation time; and placing a threshold on underlying population change and estimating the false positive rate by simulation. Exploration of power found that raised-risk periods lasting 1 month are unlikely to be detected except when the relative risk and case counts are high. The variable window width scan statistic is a useful tool for the surveillance of congenital anomalies. Numerous adaptations have improved the utility of the original methodology in the context of temporal cluster detection in congenital anomalies.
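A deliberately naive temporal scan conveys the idea. Unlike EUROCAT's case-count-defined windows with lookup-table adjustment, the sketch below scans fixed time windows over monthly counts and reports an unadjusted Poisson p-value; counts and baseline are invented.

import numpy as np
from scipy.stats import poisson

def temporal_scan(counts, baseline, max_len=18):
    # For every window up to max_len months, compute the Poisson tail
    # probability of the observed count given the expected baseline,
    # and return the most unusual window
    best = (1.0, None)
    n = len(counts)
    for length in range(1, max_len + 1):
        for start in range(n - length + 1):
            obs = counts[start:start + length].sum()
            exp = baseline[start:start + length].sum()
            p = poisson.sf(obs - 1, exp)      # P(X >= obs)
            if p < best[0]:
                best = (p, (start, start + length))
    return best   # unadjusted p-value; real use needs multiplicity correction

counts = np.array([2, 3, 1, 2, 9, 8, 2, 1, 3, 2, 2, 1])  # hypothetical monthly cases
baseline = np.full(12, 2.5)                               # expected cases per month
print(temporal_scan(counts, baseline))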
On a Chirplet Transform Based Method for Co-channel Voice Separation
NASA Astrophysics Data System (ADS)
Dugnol, B.; Fernández, C.; Galiano, G.; Velasco, J.
We use algorithms based on signal and image theory to estimate the number of wolves emitting howls or barks in a given field recording, as an alternative to traditional trace-collection methodologies for counting individuals. We proceed in two steps. First, we clean and enhance the signal by applying PDE-based image processing algorithms to the signal spectrogram. Second, assuming that the wolves' chorus may be modelled as a superposition of nonlinear chirps, we use the quadratic energy distribution corresponding to the Chirplet Transform of the signal to estimate the corresponding instantaneous frequencies, chirp rates and amplitudes at each instant of the recording. We finally establish suitable criteria to decide how such estimates are connected in time.
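Instantaneous-frequency ridge extraction from a spectrogram, the simplest cousin of the chirplet analysis described, can be sketched as follows on a synthetic howl-like chirp; the PDE enhancement and the chirplet energy distribution themselves are not reproduced.

import numpy as np
from scipy.signal import spectrogram

# Synthetic "howl": a nonlinear chirp sweeping from 400 Hz to 900 Hz over 2 s
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
f_inst = 400 + 250 * (1 - np.cos(np.pi * t / 2.0))   # instantaneous frequency
howl = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)    # integrate frequency to phase

f, ts, Sxx = spectrogram(howl, fs=fs, nperseg=512, noverlap=384)
ridge = f[np.argmax(Sxx, axis=0)]   # dominant frequency in each time slice
print(ridge[:5], ridge[-5:])        # the ridge tracks the chirp's frequency sweep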
Delgado, J; Liao, J C
1992-01-01
The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632
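The regression step can be illustrated in a few lines. The linear relation, the noise level, and the data below are invented, and the subsequent algebra mapping the fitted coefficients to Concentration Control Coefficients follows the paper and is not shown.

import numpy as np

# Hypothetical transient metabolite concentrations (rows = time points):
# suppose a mass-balance relation implies c3 ~ a1*c1 + a2*c2; estimate
# a1, a2 by linear regression from the transient data, as described above
rng = np.random.default_rng(7)
c1 = rng.uniform(0.5, 2.0, 50)
c2 = rng.uniform(0.1, 1.0, 50)
c3 = 0.6 * c1 + 1.4 * c2 + rng.normal(0, 0.02, 50)   # noisy linear relation

A = np.column_stack([c1, c2])
coef, residuals, rank, _ = np.linalg.lstsq(A, c3, rcond=None)
print(coef)   # estimated coefficients, inputs to the Control Coefficient algebra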
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process them) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output-based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
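REAcct's internals are not spelled out in the abstract, but the generic Leontief calculation underlying input-output impact tools is easy to sketch; the technical-coefficient matrix and demand shock below are invented for illustration.

import numpy as np

def output_loss(A, demand_drop):
    # Leontief input-output model: total output x = (I - A)^-1 d, so a
    # final-demand shock propagates to a total (direct + indirect) output loss
    n = A.shape[0]
    leontief_inv = np.linalg.inv(np.eye(n) - A)
    return leontief_inv @ demand_drop

# Hypothetical 3-sector technical-coefficient matrix and demand shock ($M)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
drop = np.array([10.0, 0.0, 5.0])        # lost final demand by sector
print(output_loss(A, drop))              # total output loss by sector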
Calibration methodology for proportional counters applied to yield measurements of a neutron burst.
Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo
2014-01-01
This paper introduces a methodology for measuring the yield of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. It is based on calibrating the counter in pulse mode and on using a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. The implementation of the methodology is discussed for the measurement of fast neutron yields generated in plasma focus experiments using a moderated proportional counter. An experimental verification of the accuracy of the methodology is presented; using this methodology improves the accuracy of the detection system by more than one order of magnitude with respect to previous calibration methods.
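Only the central charge-to-counts relation is sketched here, with invented numbers; the paper's statistical model, which treats the charge distribution of single events in detail, is more elaborate.

def neutrons_from_charge(total_charge_c, mean_pulse_charge_c, detection_eff):
    # Estimate the number of neutrons in a burst from the accumulated charge:
    # n_detected ~ Q_total / q_mean (q_mean from pulse-mode calibration),
    # then scale by the overall detection efficiency
    n_detected = total_charge_c / mean_pulse_charge_c
    return n_detected / detection_eff

# Hypothetical: 2.4e-7 C accumulated, 4e-11 C per single-neutron pulse,
# 0.5% overall detection efficiency for the moderated counter
print(f"{neutrons_from_charge(2.4e-7, 4.0e-11, 0.005):.3e} neutrons")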
Garg, Harish
2013-03-01
The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs using the available resources and uncertain data. For this purpose, an availability-cost optimization model has been constructed for determining the optimal design parameters for improving system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters that affect system performance are obtained in the form of fuzzy membership functions by the proposed confidence-interval-based fuzzy Lambda-Tau (CIBFLT) methodology. The results computed by CIBFLT are compared with those of the existing fuzzy Lambda-Tau methodology. Sensitivity analysis of the system MTBF has also been addressed. The methodology is illustrated through a case study of a washing unit, the main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
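One building block of such fuzzy methodologies is interval propagation through alpha-cuts of triangular fuzzy numbers. The sketch below applies it to steady-state availability with invented MTBF/MTTR numbers; it is an illustration of the alpha-cut idea, not the full Lambda-Tau table set of the paper.

import numpy as np

def tfn_alpha_cut(a, m, b, alpha):
    # Alpha-cut interval of a triangular fuzzy number (a, m, b)
    return a + alpha * (m - a), b - alpha * (b - m)

def availability_interval(mtbf_tfn, mttr_tfn, alpha):
    # Interval of steady-state availability A = MTBF / (MTBF + MTTR) at a
    # given alpha-cut; A increases with MTBF and decreases with MTTR, so the
    # interval endpoints come from opposite endpoints of the input intervals
    lo_f, hi_f = tfn_alpha_cut(*mtbf_tfn, alpha)
    lo_r, hi_r = tfn_alpha_cut(*mttr_tfn, alpha)
    return lo_f / (lo_f + hi_r), hi_f / (hi_f + lo_r)

# Hypothetical washing-unit data: MTBF ~ (90, 100, 115) h, MTTR ~ (2, 3, 5) h
print(availability_interval((90, 100, 115), (2, 3, 5), alpha=0.6))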
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.
A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographical data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
Martelli, C M; de Andrade, A L; das Dores, D; Cardoso, P; Almeida e Silva, S; Zicker, F
1991-02-01
Between October 1988 and February 1989, 1,033 voluntary first-time blood donors were screened for hepatitis B infection in five blood banks in Goiânia, Central Brazil. The survey was part of a major study designed to estimate the seroprevalence of HBsAg and anti-HBs and to discuss methodological issues related to prevalence estimation based on data from blood banks. Donors were interviewed, and blood samples were collected and tested for HBsAg and anti-HBs by ELISA. Prevalences of 1.9% and 10.9% were obtained for HBsAg and anti-HBs, respectively, and no statistical difference was found between the sexes. Prevalence of anti-HBs increased with age (χ² for trend = 7.9, p = 0.004). The positive predictive value and sensitivity of a history of jaundice or hepatitis reported in the interview for detecting seropositives were 13.6% and 2.2%, respectively. Methodological issues, including the internal and external validity of HBV prevalence estimated among blood donors, are discussed. The potential usefulness of blood banks as a source of morbidity information for hepatitis B virus surveillance is stressed.
NASA Astrophysics Data System (ADS)
Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.
2015-12-01
Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.
Estimating lifetime and age-conditional probabilities of developing cancer.
Wun, L M; Merrill, R M; Feuer, E J
1998-01-01
Lifetime and age-conditional risk estimates of developing cancer provide a useful summary to the public of the current cancer risk and how this risk compares with earlier periods and among select subgroups of society. These reported estimates, commonly quoted in the popular press, have the potential to promote early detection efforts, to increase cancer awareness, and to serve as an aid in study planning. However, they can also be easily misunderstood and frightening to the general public. The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute and the American Cancer Society have recently begun including in annual reports lifetime and age-conditional risk estimates of developing cancer. These risk estimates are based on incidence rates that reflect new cases of the cancer in a population free of the cancer. Computing these estimates involves a cancer prevalence adjustment that is computed cross-sectionally from current incidence and mortality data derived within a multiple decrement life table. This paper presents a detailed description of the methodology for deriving lifetime and age-conditional risk estimates of developing cancer. In addition, an extension is made which, using a triple decrement life table, adjusts for a surgical procedure that removes individuals from the risk of developing a given cancer. Two important results which provide insights into the basic methodology are included in the discussion. First, the lifetime risk estimate does not depend on the cancer prevalence adjustment, although this is not the case for age-conditional risk estimates. Second, the lifetime risk estimate is always smaller when it is corrected for a surgical procedure that takes people out of the risk pool to develop the cancer. The methodology is applied to corpus and uterus NOS cancers, with a correction made for hysterectomy prevalence. The interpretation and limitations of risk estimates are also discussed.
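For intuition only, the crudest version of a lifetime risk calculation converts age-specific incidence rates into a cumulative probability. The sketch below ignores the competing-mortality and prevalence adjustments that the multiple decrement life table handles, so it overstates risk relative to the paper's method; the rates are invented.

import numpy as np

def cumulative_risk(age_specific_rates, width=5):
    # Crude lifetime risk from age-specific incidence rates (per person-year)
    # over age bands of the given width, ignoring competing mortality:
    # risk = 1 - prod((1 - rate)^width). A full multiple-decrement life table
    # would also remove people dying of other causes, lowering the estimate.
    rates = np.asarray(age_specific_rates)
    return 1.0 - np.prod((1.0 - rates) ** width)

# Hypothetical incidence per person-year in 5-year age bands (ages 0-84)
rates = np.array([1e-5] * 6 + [1e-4] * 5 + [1e-3] * 6)
print(f"lifetime risk ~ {cumulative_risk(rates):.1%}")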
Development of estimation methodology for bicycle and pedestrian volumes based on existing counts.
DOT National Transportation Integrated Search
2013-10-01
The Colorado Department of Transportation (CDOT) adopted the Bicycle and Pedestrian Policy directive in 2009 : stating that "...the needs of bicyclists and pedestrians shall be included in the planning, design, and operation of : transportation facil...
Assessment of continuous oil and gas resources of the Cooper Basin, Australia, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Klett, Timothy R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.
2016-07-15
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean continuous resources of 482 million barrels of oil and 29.8 trillion cubic feet of gas in the Cooper Basin of Australia.
Assessment of Paleozoic shale gas resources in the Sichuan Basin of China, 2015
Potter, Christopher J.; Schenk, Christopher J.; Charpentier, Ronald R.; Gaswirth, Stephanie B.; Klett, Timothy R.; Leathers, Heidi M.; Brownfield, Michael E.; Mercier, Tracey J.; Tennyson, Marilyn E.; Pitman, Janet K.
2015-10-14
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean of 23.9 trillion cubic feet of technically recoverable shale gas resources in Paleozoic formations in the Sichuan Basin of China.
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Klett, Timothy R.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-02-21
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 5.27 trillion cubic feet of coalbed gas in the Karoo Basin Province.
Uncertainties in Emissions In Emissions Inputs for Near-Road Assessments
Emissions, travel demand, and dispersion models are all needed to obtain temporally and spatially resolved pollutant concentrations. Current methodology combines these three models in a bottom-up approach based on hourly traffic and emissions estimates, and hourly dispersion conc...
Regional Earthquake Shaking and Loss Estimation
NASA Astrophysics Data System (ADS)
Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (shake mapping). 3. Incorporating strong ground motion and other empirical macroseismic data to improve the shake map. 4. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (loss mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). For given basic source parameters, the intensity distributions can be computed using: (a) regional intensity attenuation relationships; (b) intensity correlations with attenuation-relationship-based PGV, PGA and spectral amplitudes; and (c) intensity correlations with a synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented and compared with the observed losses. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of post-earthquake emergency response. The same software can also be used for scenario earthquake loss estimation and related Monte Carlo-type simulations.
Predicting boundary shear stress and sediment transport over bed forms
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
To estimate bed-load sediment transport rates in flows over bed forms such as ripples and dunes, spatially averaged velocity profiles are frequently used to predict mean boundary shear stress. However, such averaging obscures the complex, nonlinear interaction of wake decay, boundary-layer development, and topographically induced acceleration downstream of flow separation and often leads to inaccurate estimates of boundary stress, particularly skin friction, which is critically important in predicting bed-load transport rates. This paper presents an alternative methodology for predicting skin friction over 2D bed forms. The approach is based on combining the equations describing the mechanics of the internal boundary layer with semiempirical structure functions to predict the velocity at the crest of a bedform, where the flow is most similar to a uniform boundary layer. Significantly, the methodology is directed toward making specific predictions only at the bed-form crest, and as a result it avoids the difficulty and questionable validity of spatial averaging. The model provides an accurate estimate of the skin friction at the crest where transport rates are highest. Simple geometric constraints can be used to derive the mean transport rates as long as bed load is dominant.
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume.
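The linear-regression PVC step can be illustrated with a minimal sketch in which each voxel's perfusion-weighted signal is regressed on the GM/WM volume fractions over a small neighborhood. This shows the general idea only, not the authors' QUASAR-specific implementation; array names, kernel size and the synthetic demo values are hypothetical:

```python
import numpy as np

def lr_pvc(perf, f_gm, f_wm, kernel=2):
    """Linear-regression partial volume correction (sketch): for each voxel,
    regress the measured signal in a (2*kernel+1)^2 neighborhood on the
    GM/WM volume fractions to separate compartment-specific perfusion."""
    ny, nx = perf.shape
    gm_perf = np.zeros_like(perf)
    for j in range(ny):
        for i in range(nx):
            ys = slice(max(j - kernel, 0), min(j + kernel + 1, ny))
            xs = slice(max(i - kernel, 0), min(i + kernel + 1, nx))
            A = np.column_stack([f_gm[ys, xs].ravel(), f_wm[ys, xs].ravel()])
            b = perf[ys, xs].ravel()
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            gm_perf[j, i] = coef[0]   # GM-specific perfusion estimate
    return gm_perf

# Tiny demo on synthetic data: true GM perfusion 60, WM perfusion 20 (a.u.)
rng = np.random.default_rng(0)
f_gm = rng.uniform(0.2, 0.8, (16, 16))
f_wm = 1.0 - f_gm
perf = 60.0 * f_gm + 20.0 * f_wm + rng.normal(0.0, 1.0, (16, 16))
print(lr_pvc(perf, f_gm, f_wm).mean())   # ~60, independent of f_gm
```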
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, J.
Based on a compilation of three estimation approaches, the total nationwide population of wild pigs in the United States numbers approximately 6.3 million animals, with that total estimate ranging from 4.4 up to 11.3 million animals. The majority of these numbers (99 percent), which were encompassed by ten states (i.e., Alabama, Arkansas, California, Florida, Georgia, Louisiana, Mississippi, Oklahoma, South Carolina and Texas), were based on defined estimation methodologies (e.g., density estimates correlated to the total potential suitable wild pig habitat statewide, statewide harvest percentages, statewide agency surveys regarding wild pig distribution and numbers). In contrast to the pre-1990 estimates, none of these more recent efforts, collectively encompassing 99 percent of the total, were based solely on anecdotal information or speculation. To that end, one can defensibly state that the wild pigs found in the United States number in the millions of animals, with the nationwide population estimated to arguably vary from about four million up to about eleven million individuals.
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
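The reported factor of 5.3 is consistent with the classical errors-in-variables attenuation result, under the assumption that the claims-based measure equals true adherence plus independent noise: the estimated coefficient shrinks by the squared correlation between the measures, so the correction factor is 1/rho^2. A minimal worked sketch (the correlation is chosen to match the abstract's preferred estimate; the cost figure is hypothetical):

```python
rho = 0.435                      # assumed gold-standard vs claims correlation
reliability = rho ** 2           # attenuation (reliability) ratio
correction_factor = 1.0 / reliability
beta_claims = -1200.0            # hypothetical estimated $ effect on inpatient cost
beta_corrected = beta_claims * correction_factor
print(f"correction factor ~ {correction_factor:.1f}")    # ~5.3
print(f"bias-corrected effect ~ ${beta_corrected:,.0f}")
```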
1975-06-01
the Air Force Flight Dynamics Laboratory for use in conceptual and preliminary design phases of weapon system development. The methods are a... trade study method provides an iterative capability stemming from a direct interface with design synthesis programs. A detailed cost data base and... system for data expansion is provided. The methods are designed for ease in changing cost estimating relationships and estimating coefficients
Satellite-based detection of global urban heat-island temperature influence
Gallo, K.P.; Adegoke, Jimmy O.; Owen, T.W.; Elvidge, C.D.
2002-01-01
This study utilizes a satellite-based methodology to assess the urban heat-island influence during warm season months for over 4400 stations included in the Global Historical Climatology Network of climate stations. The methodology includes local and regional satellite retrievals of an indicator of the presence of green photosynthetically active vegetation at and around the stations. The difference in local and regional samples of the normalized difference vegetation index (NDVI) is used to estimate differences in mean air temperature. Stations classified as urban averaged 0.90 °C (N. Hemisphere) and 0.92 °C (S. Hemisphere) warmer than the surrounding environment on the basis of the NDVI-derived temperature estimates. Additionally, stations classified as rural averaged 0.19 °C (N. Hemisphere) and 0.16 °C (S. Hemisphere) warmer than the surrounding environment. The NDVI-derived temperature estimates were found to be in reasonable agreement with temperature differences observed between climate stations. The results suggest that satellite-derived data sets can be used to estimate the urban heat-island temperature influence on a global basis and that a more detailed analysis of rural stations and their surrounding environment may be necessary to assure that temperature trends derived from assumed rural environments are not influenced by changes in land use/land cover.
A Methodology for the Estimation of the Wind Generator Economic Efficiency
NASA Astrophysics Data System (ADS)
Zaleskis, G.
2017-12-01
Integration of renewable energy sources and improvement of the technological base may not only reduce fossil fuel consumption and environmental load, but also ensure power supply in regions with difficult fuel delivery or frequent power failures. The main goal of the research is to develop a methodology for evaluating the economic efficiency of wind turbines. The research has demonstrated that electricity produced from renewable sources may be much more expensive than electricity purchased from the conventional grid.
Top Level Space Cost Methodology (TLSCM)
1997-12-02
Contents include software tools such as ACEIT, ground rules and assumptions, a typical life cycle cost distribution, and estimating methodologies (cost/budget threshold, analogy, ...), which is based on real-time Air Force and space programs. Ref. (25:2-8, 2-9). ACEIT: Automated Cost Estimating Integrated Tools (ACEIT), Tecolote Research, Inc. There is a way to use the ACEIT cost program to get a print-out of an expanded WBS. Therefore, find someone that has ACEIT experience and
Bennett, R; Christiansen, K; Clifton-Hadley, R
1999-04-09
Many 'economic' studies of livestock diseases in Great Britain have been carried out over time. Most studies have considered just one or two diseases and used a different methodology and valuation base from other studies, hampering any comparative assessment of the economic impact of diseases. A standardized methodology was applied to the estimation of the direct costs to livestock production of some 30 endemic diseases/conditions of farm animals in Great Britain. This involved identification of the livestock populations at risk, estimation of the annual incidence of each disease in these populations, identification of the range and incidence of physical effects of each disease on production, valuation of the physical effects of each disease and estimation of the financial value of output losses/resource wastage due to a disease and the costs of specific treatment and prevention measures. The wider economic impacts of disease (such as the implications for human health, animal welfare and markets) were not included in the assessments. Using this standardized methodology with common financial values, a simple spreadsheet model was constructed for each disease. Given the paucity of appropriate disease data for economic assessment, 'low' and 'high' values were used to reflect uncertainties surrounding key disease parameters. Preliminary estimates of the value of disease output losses/resource wastage, treatment and prevention costs are presented for each disease. Despite the limitations of the spreadsheet models and of the estimates derived from them, we conclude that the models represent a useful start in developing a system for the comparative economic assessment of livestock diseases in Great Britain.
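A minimal sketch of the spreadsheet-style cost model described above, combining the value of output losses with treatment and prevention costs, and using 'low' and 'high' incidence values to reflect parameter uncertainty (all figures hypothetical):

```python
def disease_cost(pop_at_risk, incidence, loss_per_case, treat_cost_per_case,
                 prevention_cost):
    """Direct annual cost of an endemic disease (sketch of the standardized
    methodology: output losses + treatment + prevention)."""
    cases = pop_at_risk * incidence
    return cases * (loss_per_case + treat_cost_per_case) + prevention_cost

# 'Low' and 'high' incidence values reflect uncertainty in disease parameters
for label, inc in [("low", 0.02), ("high", 0.08)]:
    total = disease_cost(pop_at_risk=1_000_000, incidence=inc,
                         loss_per_case=150.0, treat_cost_per_case=40.0,
                         prevention_cost=2_000_000.0)
    print(f"{label}: GBP {total:,.0f} per year")
```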
A multifractal approach to space-filling recovery for PET quantification.
Willaime, Julien M Y; Aboagye, Eric O; Tsoumpas, Charalampos; Turkheimer, Federico E
2014-11-01
A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUV mean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic (18)F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical (18)F-fluorothymidine PET test-retest dataset. TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUV mean or TV measurements across imaging protocols. The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
ADVISORY ON UPDATED METHODOLOGY FOR ...
The National Academy of Sciences (NAS) published the Biological Effects of Ionizing Radiation (BEIR) committee's report (BEIR VII) on risks from ionizing radiation exposures in 2006. The Committee analyzed the most recent epidemiology from the important exposed cohorts and factored in changes resulting from the updated analysis of dosimetry for the Japanese atomic bomb survivors. To the extent practical, the Committee also considered relevant radiobiological data, including that from the Department of Energy's low dose effects research program. Based on the review of this information, the Committee proposed a set of models for estimating risks from low-dose ionizing radiation. ORIA then prepared a white paper revising the Agency's methodology for estimating cancer risks from exposure to ionizing radiation in light of this report and other relevant information. This is the first product to be developed as a result of the BEIR VII report. We requested that the SAB conduct an advisory during the development of this methodology. The second product to be prepared will be a revised version of the document,
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
USDA-ARS?s Scientific Manuscript database
A retrieval of soil moisture is proposed using surface flux estimates from satellite-based thermal infrared (TIR) imagery and the Atmosphere-Land-Exchange-Inversion (ALEXI) model. The ability of ALEXI to provide valuable information about the partitioning of the surface energy budget, which can be l...
PMP Estimations at Sparsely Controlled Andinian Basins and Climate Change Projections
NASA Astrophysics Data System (ADS)
Lagos Zúñiga, M. A.; Vargas, X.
2012-12-01
Probable Maximum Precipitation (PMP) estimation implies an extensive review of hydrometeorological data and an understanding of precipitation formation processes. Different estimation methodologies exist, and all of them require a good spatial and temporal representation of storms. Estimating hydrometeorological PMP on sparsely controlled basins is a difficult task, especially if the study area is subject to an important orographic effect due to mountains and to mixed precipitation during the most severe storms. The main task of this study is to propose and estimate PMP in a sparsely controlled basin with abrupt topography and mixed hydrology, also analyzing statistical uncertainties in the estimates and possible climate change effects. In this study, PMP estimation under statistical and hydrometeorological approaches (watershed-based and traditional depth-area-duration analysis) was carried out in a semi-arid zone at Puclaro dam in northern Chile. Due to the lack of good spatial meteorological representation in the study zone, we propose a methodology to consider the orographic effects of the Andes based on patterns from the RCM PRECIS-DGF and annual isohyetal maps. Estimations were validated against precipitation patterns for given winters, considering the snow route and rain gauges along the prevailing wind direction, with good results. The estimations are also compared with the largest areal storms in the USA, Australia, India and China, and with frequency analyses at local rain gauge stations, in order to decide on the most adequate approach for the study zone. Climate change projections were evaluated with the ECHAM5 GCM, owing to its good representation of the seasonality and magnitude of meteorological variables. Temperature projections for the 2040-2065 period show that the catchment contributing area would rise, leading to an increase of the average liquid precipitation over the basin. Temperature projections would also affect the maximization factors in the calculation of the PMP, increasing it by up to 126.6% and 62.5% in scenarios A2 and B1, respectively. These projections are important to study because of the implications of PMP for the hydrologic design of major hydraulic works via the Probable Maximum Flood (PMF). We propose that the methodology presented in this study could also be used in other basins with similar characteristics.
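The moisture maximization step at the core of hydrometeorological PMP estimation can be sketched as scaling an observed storm depth by the ratio of maximum to observed precipitable water; real studies add storm transposition and envelopment steps, and the numbers below are hypothetical:

```python
def moisture_maximized_precip(storm_depth_mm, pw_storm_mm, pw_max_mm):
    """Hydrometeorological PMP building block: scale an observed storm by
    the ratio of climatological-maximum to observed precipitable water."""
    return storm_depth_mm * (pw_max_mm / pw_storm_mm)

# Hypothetical storm: 180 mm observed with 22 mm precipitable water,
# against a climatological maximum precipitable water of 35 mm
print(moisture_maximized_precip(180.0, 22.0, 35.0))  # ~286 mm
```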
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
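The calibration logic can be sketched from the numbers in the abstract: dividing the corrected harvest age ratio by the juveniles' differential vulnerability to harvest yields the population age ratio. The differential-vulnerability value below is backed out for illustration, not reported directly in the abstract:

```python
# Figures taken from the abstract; differential vulnerability is implied.
harvest_ratio_corrected = 1.91          # juveniles : adult in the harvest
population_ratio = 1.45                 # reported population age ratio
differential_vulnerability = harvest_ratio_corrected / population_ratio
print(f"implied differential vulnerability ~ {differential_vulnerability:.2f}")

def population_age_ratio(harvest_ratio, diff_vulnerability):
    """Convert a corrected harvest age ratio to a population age ratio by
    removing juveniles' higher vulnerability to harvest."""
    return harvest_ratio / diff_vulnerability

# Applying the same adjustment to the uncorrected average ratio
print(population_age_ratio(2.25, differential_vulnerability))
```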
A model for the sustainable selection of building envelope assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huedo, Patricia, E-mail: huedo@uji.es; Mulet, Elena, E-mail: emulet@uji.es; López-Mesa, Belinda, E-mail: belinda@unizar.es
2016-02-15
The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as an implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for assigned current position and speed. Since assessed methodologies can cause some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in case of nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
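The Distance at Closest Point of Approach itself follows from constant-relative-velocity geometry; a minimal sketch of that standard formula (this is not the paper's particle filter, and the numbers are illustrative):

```python
import numpy as np

def dcpa(rel_pos, rel_vel):
    """Distance at Closest Point of Approach for constant relative velocity:
    t* = -(r.v)/|v|^2 (clamped to the future), DCPA = |r + v t*|."""
    v2 = float(np.dot(rel_vel, rel_vel))
    if v2 == 0.0:
        return float(np.linalg.norm(rel_pos)), 0.0
    t_star = max(-float(np.dot(rel_pos, rel_vel)) / v2, 0.0)
    closest = rel_pos + rel_vel * t_star
    return float(np.linalg.norm(closest)), t_star

# Intruder 2 km ahead and 300 m to the side, closing at ~60 m/s
d, t = dcpa(np.array([2000.0, 300.0]), np.array([-60.0, -5.0]))
print(f"DCPA ~ {d:.0f} m at t ~ {t:.0f} s")
```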
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
Use of a land-use-based emissions inventory in delineating clean-air zones
Victor S. Fahrer; Howard A. Peters
1977-01-01
Use of a land-use-based emissions inventory from which air-pollution estimates can be projected was studied. First the methodology used to establish a land-use-based emission inventory is described. Then this inventory is used as input in a simple model that delineates clean air and buffer zones. The model is applied to the town of Burlington, Massachusetts....
Methodologies for Estimating Cumulative Human Exposures to Current-Use Pyrethroid Pesticides
We estimated cumulative residential pesticide exposures for a group of nine young children (4–6 years) using three different methodologies developed by the US Environmental Protection Agency and compared the results with estimates derived from measured urinary metabolite concentr...
CONCEPTUAL DESIGNS FOR A NEW HIGHWAY VEHICLE EMISSIONS ESTIMATION METHODOLOGY
The report discusses six conceptual designs for a new highway vehicle emissions estimation methodology and summarizes the recommendations of each design for improving the emissions and activity factors in the emissions estimation process. The complete design reports are included a...
Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo
2016-01-01
On urban arterials, travel time estimation is challenging, especially when it draws on various data sources. Typically, fusing loop detector data and probe vehicle data to estimate travel time is a troublesome issue, given that the data are uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, based on which Bayesian fusion is then applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimation from one sensor with the current fused travel time; and 2) specially designed conditions for convergence, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is practically higher than in other periods.
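A minimal sketch of precision-weighted (Gaussian/Bayesian) fusion of two link travel-time estimates, with an assumed form of the substitution strategy layered on top; variances, values and the convergence rule are hypothetical, not the paper's exact scheme:

```python
def bayesian_fuse(t_loop, var_loop, t_probe, var_probe):
    """Precision-weighted fusion of two travel-time estimates (seconds)."""
    w_loop, w_probe = 1.0 / var_loop, 1.0 / var_probe
    fused = (w_loop * t_loop + w_probe * t_probe) / (w_loop + w_probe)
    return fused, 1.0 / (w_loop + w_probe)

def iterative_fusion(t_loop, var_loop, t_probe, var_probe, n_iter=10, tol=1e-3):
    """Assumed substitution strategy: replace the less accurate input with
    the current fused value and re-fuse until the estimate stabilizes."""
    fused, fvar = bayesian_fuse(t_loop, var_loop, t_probe, var_probe)
    for _ in range(n_iter):
        if var_loop > var_probe:
            t_loop, var_loop = fused, fvar
        else:
            t_probe, var_probe = fused, fvar
        new_fused, fvar = bayesian_fuse(t_loop, var_loop, t_probe, var_probe)
        if abs(new_fused - fused) < tol:
            break
        fused = new_fused
    return fused

print(iterative_fusion(95.0, 64.0, 82.0, 36.0))   # fused link travel time (s)
```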
Ares I-X Best Estimated Trajectory Analysis and Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.
Assessment of shale-gas resources of the Karoo Province, South Africa and Lesotho, Africa, 2016
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Pitman, Janet K.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Mercier, Tracey J.; Finn, Thomas M.
2016-07-08
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resource of 44.5 trillion cubic feet of shale gas in the Karoo Province of South Africa and Lesotho, Africa.
Assessment of potential shale-oil and shale-gas resources in Silurian shales of Jordan, 2014
Schenk, Christopher J.; Pitman, Janet K.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Nelson, Philip H.; Brownfield, Michael E.; Pawlewicz, Mark J.; Wandrey, Craig J.
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 11 million barrels of potential shale-oil and 320 billion cubic feet of shale-gas resources in Silurian shales of Jordan.
Assessment of undiscovered, technically recoverable oil and gas resources of Armenia, 2014
Klett, Timothy R.; Schenk, Christopher J.; Wandrey, Craig J.; Brownfield, Michael E.; Charpentier, Ronald R.; Tennyson, Marilyn E.; Gautier, Donald L.
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean volumes of 1 million barrels of undiscovered, technically recoverable conventional oil and 6 billion cubic feet of undiscovered, technically recoverable conventional natural gas in Armenia.
Assessment of Undiscovered Oil and Gas Resources of the Red Sea Basin Province
,
2010-01-01
The U.S. Geological Survey estimated mean volumes of 5 billion barrels of undiscovered technically recoverable oil and 112 trillion cubic feet of recoverable gas in the Red Sea Basin Province using a geology-based assessment methodology.
Assessment of undiscovered oil and gas resources in the Cuyo Basin Province, Argentina, 2017
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Le, Phuong A.; Mercier, Tracey J.; Finn, Thomas M.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-07-18
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 236 million barrels of oil and 112 billion cubic feet of associated gas in the Cuyo Basin Province, Argentina.
Assessment of undiscovered oil and gas resources in the Lower Indus Basin, Pakistan, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.
2017-09-19
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 164 million barrels of oil and 24.6 trillion cubic feet of gas in the Lower Indus Basin, Pakistan.
Assessment of undiscovered oil and gas resources in the North-Central Montana Province, 2017
Schenk, Christopher J.; Mercier, Tracey J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Finn, Thomas M.; Pitman, Janet K.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-02-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 55 million barrels of oil and 846 billion cubic feet of gas in the North-Central Montana Province.
Assessment of undiscovered continuous oil and gas resources in the Hanoi Trough, Vietnam, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Le, Phuong A.; Klett, Timothy R.; Finn, Thomas M.; Leathers-Miller, Heidi M.; Gaswirth, Stephanie B.; Marra, Kristen R.
2018-02-13
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 52 million barrels of oil and 591 billion cubic feet of gas in the Hanoi Trough of Vietnam.
The activity-based methodology to assess ship emissions - A review.
Nunes, R A O; Alvim-Ferraz, M C M; Martins, F G; Sousa, S I V
2017-12-01
Several studies have tried to estimate atmospheric emissions originating in the maritime sector, concluding that it contributes to global anthropogenic emissions through pollutants that have a strong impact on human health and on climate change. Thus, this paper aimed to review studies published since 2010 that used an activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was extracted and registered, namely technical information about ships, ship activity and movement information, engines, fuels, load and emission factors. Most studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from the Lloyd's Register of Shipping database with other sources such as port authorities' databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report the activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factor improvement, new on-board measurement campaigns or studies should be undertaken. Regardless of the effort that has been made in recent years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology and develop global and universally accepted emission inventories for an effective environmental policy plan.
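The activity-based calculation underlying these inventories is typically of the form emissions = installed power x load factor x activity time x emission factor, summed over activity phases. A minimal sketch with illustrative factors (not drawn from any specific inventory):

```python
def phase_emissions_kg(mcr_kw, load_factor, hours, ef_g_per_kwh):
    """Activity-based ship emission estimate for one activity phase:
    E = installed power x load factor x activity time x emission factor."""
    return mcr_kw * load_factor * hours * ef_g_per_kwh / 1000.0

# Hypothetical containership call: (power kW, load factor, hours) for
# cruising, manoeuvring and hotelling, with an illustrative NOx factor
phases = [(25_000, 0.80, 12.0), (25_000, 0.20, 1.5), (3_000, 0.40, 20.0)]
total = sum(phase_emissions_kg(p, lf, h, ef_g_per_kwh=14.0)
            for p, lf, h in phases)
print(f"NOx ~ {total:,.0f} kg")
```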
NASA Technical Reports Server (NTRS)
Baker, T. C. (Principal Investigator)
1982-01-01
A general methodology is presented for estimating a stratum's at-harvest crop acreage proportion for a given crop year (target year) from the crop's estimated acreage proportion for sample segments from within the stratum. Sample segments from crop years other than the target year are (usually) required for use in conjunction with those from the target year. In addition, the stratum's (identifiable) crop acreage proportion may be estimated for times other than at-harvest in some situations. A by-product of the procedure is a methodology for estimating the change in the stratum's at-harvest crop acreage proportion from crop year to crop year. An implementation of the proposed procedure as a Statistical Analysis System (SAS) routine using the system's matrix language module, PROC MATRIX, is described and documented. Three examples illustrating use of the methodology and algorithm are provided.
Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions
NASA Astrophysics Data System (ADS)
De Risi, Raffaele; Goda, Katsuichiro
2017-08-01
Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
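The empirical hazard-curve step can be sketched as the mean annual rate of exceedance, lambda(im) = event rate x P(IM > im), estimated from simulated intensity measures; the simulated depths and rate below are synthetic placeholders:

```python
import numpy as np

def empirical_hazard_curve(sim_intensity, event_rate, im_grid):
    """Empirical tsunami hazard curve from simulated events: mean annual
    rate of exceedance lambda(im) = event_rate * P(IM > im), with IM here
    the inundation depth at a site."""
    exceed_prob = np.array([(sim_intensity > im).mean() for im in im_grid])
    return event_rate * exceed_prob

rng = np.random.default_rng(1)
depths = rng.lognormal(mean=0.0, sigma=0.8, size=5000)   # synthetic depths (m)
ims = np.array([0.5, 1.0, 2.0, 4.0])
for im, lam in zip(ims, empirical_hazard_curve(depths, 0.02, ims)):
    print(f"depth > {im} m: {lam:.2e} /yr (return period ~ {1/lam:,.0f} yr)")
```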
Estimating Children's Soil/Dust Ingestion Rates through ...
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du
ELER software - a new tool for urban earthquake loss assessment
NASA Astrophysics Data System (ADS)
Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.
2010-12-01
Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and ATC-55 (Yang, 2005). An urban loss assessment exercise for a scenario earthquake for the city of Istanbul is conducted and physical and social losses are presented. Damage to the urban environment is compared to the results obtained from similar software, i.e. KOERILoss (KOERI, 2002) and DBELA (Crowley et al., 2004). The European rapid loss estimation tool is expected to help enable effective emergency response, on both local and global level, as well as public information.
Accounting for the drug life cycle and future drug prices in cost-effectiveness analysis.
Hoyle, Martin
2011-01-01
Economic evaluations of health technologies typically assume constant real drug prices and model only the cohort of patients currently eligible for treatment. It has recently been suggested that, in the UK, we should assume that real drug prices decrease at 4% per annum and, in New Zealand, that real drug prices decrease at 2% per annum and at patent expiry the drug price falls. It has also recently been suggested that we should model multiple future incident cohorts. In this article, the cost effectiveness of drugs is modelled based on these ideas. Algebraic expressions are developed to capture all costs and benefits over the entire life cycle of a new drug. The lifetime of a new drug in the UK, a key model parameter, is estimated as 33 years, based on the historical lifetime of drugs in England over the last 27 years. Under the proposed methodology, cost effectiveness is calculated for seven new drugs recently appraised in the UK. Cost effectiveness as assessed in the future is also estimated. Whilst the article is framed in mathematics, the findings and recommendations are also explained in non-mathematical language. The 'life-cycle correction factor' is introduced, which is used to convert estimates of cost effectiveness as traditionally calculated into estimates under the proposed methodology. Under the proposed methodology, all seven drugs appear far more cost effective in the UK than published. For example, the incremental cost-effectiveness ratio decreases by 46%, from £61,900 to £33,500 per QALY, for cinacalcet versus best supportive care for end-stage renal disease, and by 45%, from £31,100 to £17,000 per QALY, for imatinib versus interferon-α for chronic myeloid leukaemia. Assuming real drug prices decrease over time, the chance that a drug is publicly funded increases over time, and is greater when modelling multiple cohorts than with a single cohort. Using the methodology (compared with traditional methodology) all drugs in the UK and New Zealand are predicted to be more cost effective. It is suggested that the willingness-to-pay threshold should be reduced in the UK and New Zealand. The ranking of cost effectiveness will change with drugs assessed as relatively more cost effective and medical devices and surgical procedures relatively less cost effective than previously thought. The methodology is very simple to implement. It is suggested that the model should be parameterized for other countries.
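The direction of the effect can be reproduced with a minimal sketch: discounting an annual cost stream whose real price falls each year and comparing it with the constant-price stream. The rates and horizon are illustrative, and the paper's full model (multiple cohorts, patent-expiry price falls) is richer than this:

```python
def discounted_cost_multiplier(years, price_decline=0.04, discount=0.035):
    """Present value of a drug's annual cost stream when the real price
    falls by `price_decline` per year, relative to constant real prices."""
    declining = sum((1 - price_decline) ** t / (1 + discount) ** t
                    for t in range(years))
    constant = sum(1 / (1 + discount) ** t for t in range(years))
    return declining / constant

m = discounted_cost_multiplier(33)   # 33-year estimated drug lifetime
print(f"cost multiplier ~ {m:.2f}")  # <1, so ICERs fall vs. constant prices
```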
Crude and intrinsic birth rates for Asian countries.
Rele, J R
1978-01-01
An attempt to estimate birth rates for Asian countries. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy, and different methods give widely varying results. The methodology presented here is based on the use of the conventional child-woman ratio from the census age-sex distribution, together with a rough estimate of the expectation of life at birth. From the established relationship between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx^2 at each level of life expectancy, the intrinsic birth rate is first computed using previously derived coefficients. The crude birth rate is then obtained using an adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a by-product of the crude birth rate. Calculations are given for each of 33 Asian countries, in some cases for several time periods.
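A minimal sketch of the quadratic relation given above, y = a + bx + cx^2; the coefficient set, which in the methodology depends on the level of life expectancy, is a made-up placeholder purely to show the mechanics:

```python
def intrinsic_birth_rate(child_woman_ratio, a, b, c):
    """Intrinsic birth rate as a quadratic in the child-woman ratio,
    y = a + b*x + c*x**2, one coefficient set per life-expectancy level.
    Coefficients here are hypothetical placeholders."""
    x = child_woman_ratio
    return a + b * x + c * x * x

# Hypothetical: census child-woman ratio of 0.55 at e0 ~ 50 years
print(intrinsic_birth_rate(0.55, a=0.5, b=55.0, c=12.0), "per 1000")
```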
Nonlinear data assimilation: towards a prediction of the solar cycle
NASA Astrophysics Data System (ADS)
Svedin, Andreas
The solar cycle is the cyclic variation of solar activity, with a span of 9-14 years. The prediction of the solar cycle is an important and unsolved problem with implications for communications, aviation and other aspects of our high-tech society. Our interest is model-based prediction, and we present a self-consistent procedure for parameter estimation and model state estimation, even when only one of several model variables can be observed. Data assimilation is the art of comparing, combining and transferring observed data into a mathematical model or computer simulation. We use the 3DVAR methodology, based on the notion of least squares, to present an implementation of a traditional data assimilation. Using the Shadowing Filter — a recently developed method for nonlinear data assimilation — we outline a path towards model based prediction of the solar cycle. To achieve this end we solve a number of methodological challenges related to unobserved variables. We also provide a new framework for interpretation that can guide future predictions of the Sun and other astrophysical objects.
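The 3DVAR cost function referred to above is J(x) = (x - xb)' B^-1 (x - xb) + (y - Hx)' R^-1 (y - Hx); for a linear observation operator the minimizer has a closed form. A toy sketch with illustrative numbers, unrelated to the solar model itself:

```python
import numpy as np

def var3d(xb, B, y, H, R):
    """3DVAR analysis for linear H: minimize
    J(x) = (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx),
    whose minimizer is xa = xb + K (y - H xb), K = B H' (H B H' + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy example: 3-variable state, only the first variable observed
xb = np.array([1.0, 0.0, -1.0])        # background (model) state
B = 0.5 * np.eye(3)                    # background error covariance
H = np.array([[1.0, 0.0, 0.0]])        # observation operator
R = np.array([[0.1]])                  # observation error covariance
y = np.array([1.8])                    # observation
print(var3d(xb, B, y, H, R))           # analysis pulled toward the obs
```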
Tidal current energy potential of Nalón river estuary assessment using a high precision flow model
NASA Astrophysics Data System (ADS)
Badano, Nicolás; Valdés, Rodolfo Espina; Álvarez, Eduardo Álvarez
2018-05-01
Obtaining energy from tide currents in onshore locations is of great interest due to the proximity to the points of consumption. This opens the door to the feasibility of new installations based on hydrokinetic microturbines even in zones of moderate speed. In this context, the accuracy of energy predictions based on hydrodynamic models is of paramount importance. This research presents a high precision methodology based on a multidimensional hydrodynamic model that is used to study the energetic potential in estuaries. Moreover, it is able to estimate the flow variations caused by microturbine installations. The paper also shows the results obtained from the application of the methodology in a study of the Nalón river mouth (Asturias, Spain).
Selected Tether Applications Cost Model
NASA Technical Reports Server (NTRS)
Keeley, Michael G.
1988-01-01
Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.
Space Tug Docking Study. Volume 5: Cost Analysis
NASA Technical Reports Server (NTRS)
1976-01-01
The cost methodology, summary cost data, resulting cost estimates by Work Breakdown Structure (WBS), technical characteristics data, program funding schedules and the WBS for the costing are discussed. Cost estimates for two tasks of the study are reported. The first task developed cost estimates for design, development, test and evaluation (DDT&E) and theoretical first unit (TFU) at the component level (Level 7) for all items reported in the data base. Task B developed total subsystem DDT&E costs and funding schedules for the three candidate Rendezvous and Docking Systems: manual, autonomous, and hybrid.
Iraq War mortality estimates: a systematic review.
Tapp, Christine; Burkle, Frederick M; Wilson, Kumanan; Takaro, Tim; Guyatt, Gordon H; Amad, Hani; Mills, Edward J
2008-03-07
In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review the studies, reports and counts on Iraqi deaths since the start of the war and assessed their methodological quality and results. We performed a systematic search of 15 electronic databases from inception to January 2008. In addition, we conducted a non-structured search of 3 other databases, reviewed study reference lists and contacted subject matter experts. We included studies that provided estimates of Iraqi deaths based on primary research over a reported period of time since the invasion. We excluded studies that summarized mortality estimates and combined non-fatal injuries and also studies of specific sub-populations, e.g. under-5 mortality. We calculated crude and cause-specific mortality rates attributable to violence and average deaths per day for each study, where not already provided. Thirteen studies met the eligibility criteria. The studies used a wide range of methodologies, varying from sentinel-data collection to population-based surveys. Studies assessed as the highest quality, those using population-based methods, yielded the highest estimates. Average deaths per day ranged from 48 to 759. The cause-specific mortality rates attributable to violence ranged from 0.64 to 10.25 per 1,000 per year. Our review indicates that, despite varying estimates, the mortality burden of the war and its sequelae on Iraq is large. The use of established epidemiological methods is rare. This review illustrates the pressing need to promote sound epidemiologic approaches to determining mortality estimates and to establish guidelines for policy-makers, the media and the public on how to interpret these estimates.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
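The variance adjustment can be sketched directly: for a compound Poisson process the variance of the death count scales with E[X^2]/E[X] (X = deaths per incident), which reduces to the simple Poisson value when every incident causes a single death. A minimal sketch with hypothetical data (not the paper's exact interval estimators):

```python
import numpy as np

def rate_ci_compound_poisson(deaths, cases_per_incident, person_years, z=1.96):
    """Approximate CI for a mortality rate when fatalities arrive in
    incident clusters. Var(total deaths) ~ deaths * E[X^2]/E[X], which
    equals the simple Poisson variance when every incident kills one."""
    x = np.asarray(cases_per_incident, dtype=float)
    inflation = (x ** 2).mean() / x.mean()
    rate = deaths / person_years
    se = np.sqrt(inflation * deaths) / person_years
    return rate, (rate - z * se, rate + z * se)

# Hypothetical: 500 deaths over 2,000,000 person-years, in clusters
sizes = np.array([1] * 400 + [2] * 30 + [5] * 8)   # deaths per incident
print(rate_ci_compound_poisson(500, sizes, 2_000_000))
```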
Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene
2010-01-01
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
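A software stand-in for the finite-difference and averaging-FIR chain described above (the FPGA implementation obviously differs; the kernel length and test signal are illustrative):

```python
import numpy as np

def motion_from_encoder(theta, fs, fir_len=8):
    """Estimate angular velocity and acceleration from encoder angles by
    finite differences, smoothed with a moving-average FIR filter."""
    h = np.ones(fir_len) / fir_len            # averaging FIR kernel
    omega = np.convolve(np.gradient(theta, 1.0 / fs), h, mode="same")
    alpha = np.convolve(np.gradient(omega, 1.0 / fs), h, mode="same")
    return omega, alpha

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
theta = 0.5 * t ** 2                          # constant 1 rad/s^2 ramp
omega, alpha = motion_from_encoder(theta, fs)
print(omega[500], alpha[500])                 # ~0.5 rad/s, ~1 rad/s^2 mid-run
```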
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator
NASA Astrophysics Data System (ADS)
Oucheriah, Said
2017-11-01
In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is developed and implemented for output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped into a single signal that is accurately estimated using a low-pass filter, and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet given specifications. Using simulation, the effectiveness of the proposed controller is compared against sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties and to large input and load perturbations.
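A minimal sketch of the UDE principle on a generic first-order plant (deliberately not a boost converter model): the lumped uncertainty is recovered by low-pass filtering the difference between the measured state derivative and the known input channel, then cancelled in the control law. All gains, the filter time constant, and the plant dynamics below are invented for illustration.

```python
import numpy as np

# Plant: x' = f(x, t) + b0*u, with f (uncertainty + disturbance) unknown.
# UDE idea: lump w = x' - b0*u, estimate it with a first-order low-pass
# filter, and cancel the estimate in the control law.
dt, a_m, b0, tau, r = 1e-4, 200.0, 50.0, 2e-3, 1.0
x, u, w_hat = 0.0, 0.0, 0.0
xs = []
for k in range(2000):
    ti = k * dt
    f = -30.0 * x + 5.0 * np.sin(200.0 * ti)      # "true" unknown dynamics
    x_new = x + dt * (f + b0 * u)                 # integrate the plant
    xdot_meas = (x_new - x) / dt                  # controller's derivative estimate
    w_hat += (dt / tau) * ((xdot_meas - b0 * u) - w_hat)  # low-pass estimate of w
    u = (-a_m * (x_new - r) - w_hat) / b0         # cancel w, impose x' = -a_m (x - r)
    x = x_new
    xs.append(x)
# xs settles near the setpoint r even though f was never modeled
```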
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DROs) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and a digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam consisting of normal brain tissues and a heterogeneously enhancing brain tumor. Spatial smoothing was used as an example application to illustrate how such DROs can support the assessment of quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small-radius Gaussian kernel). In this work, we report an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
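The enhancement model here is the GKM; in its standard Tofts form, one voxel's concentration-time curve is the convolution of the vascular input with an exponential kernel. The sketch below synthesizes a single voxel, with a toy bi-exponential input and invented Ktrans/kep values standing in for the publicly available VIF and clinically estimated parameters.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, kep):
    """Ct(t) = Ktrans * integral cp(tau) exp(-kep (t - tau)) dtau
    (standard Tofts form of the general kinetic model)."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0.0, 300.0, 1.0)                         # seconds
cp = 5.0 * (np.exp(-t / 50.0) - np.exp(-t / 8.0))      # toy vascular input function
ct = tofts_concentration(t, cp, ktrans=0.25 / 60, kep=0.8 / 60)  # per-second rates
```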
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values from the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
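A compact sketch of the MEVD computation under a common Weibull choice for ordinary events: fit each year's wet-day amounts, average the yearly CDFs raised to that year's event count, and invert for a return level. The toy data, the Weibull assumption, and the solver bounds are illustrative, not the authors' TRMM configuration.

```python
import numpy as np
from scipy import stats, optimize

def mev_cdf(x, C, w, n):
    """Metastatistical EV distribution: average over years of the yearly
    Weibull CDF raised to that year's number of ordinary (wet-day) events."""
    Fw = 1.0 - np.exp(-np.power(np.maximum(x, 0) / C[:, None], w[:, None]))
    return np.mean(Fw ** n[:, None], axis=0)

# Fit a Weibull to each year's wet-day totals (toy data in place of TRMM)
rng = np.random.default_rng(1)
years = [rng.weibull(0.7, size=rng.integers(80, 140)) * 12 for _ in range(15)]
C, w, n = [], [], []
for y in years:
    shape, _, scale = stats.weibull_min.fit(y, floc=0)
    w.append(shape); C.append(scale); n.append(len(y))
C, w, n = map(np.asarray, (C, w, n))

# 50-year daily-rainfall return level: solve F(x) = 1 - 1/50 for x (mm)
ret_level = optimize.brentq(
    lambda x: mev_cdf(np.array([x]), C, w, n)[0] - (1 - 1 / 50), 1.0, 1000.0)
```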
Walker, Kate; Seaman, Shaun R; De Angelis, Daniela; Presanis, Anne M; Dodds, Julie P; Johnson, Anne M; Mercey, Danielle; Gill, O Noel; Copas, Andrew J
2011-10-01
Hard-to-reach population subgroups are typically investigated using convenience sampling, which may give biased estimates. Combining information from such surveys, a probability survey and clinic surveillance can potentially minimize the bias. We developed a methodology to estimate the prevalence of undiagnosed HIV infection among men who have sex with men (MSM) in England and Wales aged 16-44 years in 2003, making fuller use of the available data than earlier work. We performed a synthesis of three data sources: genitourinary medicine clinic surveillance (11 380 tests), a venue-based convenience survey including anonymous HIV testing (3702 MSM) and a general population sexual behaviour survey (134 MSM). A logistic regression model to predict undiagnosed infection was fitted to the convenience survey data and then applied to the MSM in the population survey to estimate the prevalence of undiagnosed infection in the general MSM population. This estimate was corrected for selection biases in the convenience survey using clinic surveillance data. A sensitivity analysis addressed uncertainty in our assumptions. The estimated prevalence of undiagnosed HIV in MSM was 2.4% [95% confidence interval (CI) 1.7-3.0%], and between 1.6% (95% CI 1.1-2.0%) and 3.3% (95% CI 2.4-4.1%) depending on assumptions; this corresponds to 5500 (3390-7180), 3610 (2180-4740) and 7570 (4790-9840) men, and undiagnosed fractions of 33%, 24% and 40%, respectively. Our estimates are consistent with earlier work that did not make full use of the data sources. Reconciling data from multiple sources, including probability-, clinic- and venue-based convenience samples, can reduce bias in estimates. This methodology could be applied in other settings to take full advantage of multiple imperfect data sources.
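A bare-bones sketch of the projection step only, with synthetic stand-ins for the two surveys and invented covariates; the paper's correction for convenience-sample selection bias via clinic surveillance, and its sensitivity analysis, are omitted here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy stand-ins for the two surveys (the real work had 3702 and 134 MSM)
def make_survey(n):
    age = rng.uniform(16, 44, n)
    partners = rng.poisson(3, n).astype(float)
    logit = -4.0 + 0.03 * age + 0.15 * partners       # invented "truth"
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return np.column_stack([age, partners]), y.astype(int)

X_conv, y_conv = make_survey(3702)   # convenience survey with HIV test results
X_pop, _ = make_survey(134)          # probability survey, covariates only

# 1) Model undiagnosed infection in the convenience sample
fit = sm.Logit(y_conv, sm.add_constant(X_conv)).fit(disp=0)

# 2) Project onto the representative population sample and average:
#    an estimate of prevalence in the general MSM population
prevalence = fit.predict(sm.add_constant(X_pop)).mean()
```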
Akkaya-Hocagil, Tugba; Hsu, Wan-Hsiang; Sommerhalter, Kristin; McGarry, Claire; Van Zutphen, Alissa
2017-11-01
Congenital heart defects (CHDs) are the most common birth defects in the United States, and the population of individuals living with CHDs is growing. Though CHD prevalence in infancy has been well characterized, better prevalence estimates among children and adolescents in the United States are still needed. We used capture-recapture methods to estimate CHD prevalence among adolescents residing in 11 New York counties. The three data sources used for analysis included Statewide Planning and Research Cooperative System (SPARCS) hospital inpatient records, SPARCS outpatient records, and medical records provided by seven pediatric congenital cardiac clinics from 2008 to 2010. Bayesian log-linear models were fit using the R package Conting to account for dataset dependencies and heterogeneous catchability. A total of 2537 adolescent CHD cases were captured in our three data sources. Forty-four cases were identified in all data sources, 283 cases were identified in two of three data sources, and 2210 cases were identified in a single data source. The final model yielded an estimated total adolescent CHD population of 3845, indicating that 66% of the cases in the catchment area were identified in the case-identifying data sources. Based on 2010 Census estimates, we estimated adolescent CHD prevalence as 6.4 CHD cases per 1000 adolescents (95% confidence interval: 6.2-6.6). We used capture-recapture methodology with a population-based surveillance system in New York to estimate CHD prevalence among adolescents. Future research incorporating additional data sources may improve prevalence estimates in this population. Birth Defects Research 109:1423-1429, 2017. © 2017 Wiley Periodicals, Inc.
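For intuition, a frequentist stand-in for the capture-recapture step: a main-effects (independence) Poisson log-linear model over the seven observed presence patterns, predicting the unobserved cell. The cell counts below are an invented split consistent only with the reported totals (44 in all three sources, 283 in two, 2210 in one); the paper itself fits Bayesian log-linear models with dependence terms via the R package Conting.

```python
import pandas as pd
import statsmodels.api as sm

# Presence pattern (inpatient, outpatient, clinic) -> observed case counts
cells = pd.DataFrame({
    "s1":    [1,   1,   1,   1,   0,  0,   0],
    "s2":    [1,   1,   0,   0,   1,  1,   0],
    "s3":    [1,   0,   1,   0,   1,  0,   1],
    "count": [44, 100,  95, 800,  88, 700, 710],   # invented split, sums to 2537
})

# Independence log-linear model: log E[count] = const + b1*s1 + b2*s2 + b3*s3
X = sm.add_constant(cells[["s1", "s2", "s3"]])
fit = sm.GLM(cells["count"], X, family=sm.families.Poisson()).fit()

# Predict the unobservable (0,0,0) cell and add it to the observed total
x000 = pd.DataFrame({"const": [1.0], "s1": [0], "s2": [0], "s3": [0]})
n_missed = fit.predict(x000)[0]
n_total = cells["count"].sum() + n_missed
```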
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-04-01
System identification has been employed in numerous structural health monitoring (SHM) applications. Traditional system identification methods usually rely on centralized processing of structural response data to extract information on structural parameters. However, in wireless SHM systems the centralized processing of structural response data introduces a significant communication bottleneck. Exploiting the merits of decentralization and on-board processing power of wireless SHM systems, many system identification methods have been successfully implemented in wireless sensor networks. While several system identification approaches for wireless SHM systems have been proposed, little attention has been paid to obtaining information on the physical parameters (e.g. stiffness, damping) of the monitored structure. This paper presents a hybrid system identification methodology suitable for wireless sensor networks based on the principles of component mode synthesis (dynamic substructuring). A numerical model of the monitored structure is embedded into the wireless sensor nodes in a distributed manner, i.e. the entire model is segmented into sub-models, each embedded into one sensor node corresponding to the substructure the sensor node is assigned to. The parameters of each sub-model are estimated by extracting local mode shapes and by applying the equations of the Craig-Bampton method on dynamic substructuring. The proposed methodology is validated in a laboratory test conducted on a four-story frame structure to demonstrate the ability of the methodology to yield accurate estimates of stiffness parameters. Finally, the test results are discussed and an outlook on future research directions is provided.
Using the Delphi technique in economic evaluation: time to revisit the oracle?
Simoens, S
2006-12-01
Although the Delphi technique has been commonly used as a data source in medical and health services research, its application in economic evaluation of medicines has been more limited. The aim of this study was to describe the methodology of the Delphi technique, to present a case for using the technique in economic evaluation, and to provide recommendations to improve such use. The literature was accessed through MEDLINE focusing on studies discussing the methodology of the Delphi technique and economic evaluations of medicines using the Delphi technique. The Delphi technique can be used to provide estimates of health care resources required and to modify such estimates when making inter-country comparisons. The Delphi technique can also contribute to mapping the treatment process under investigation, to identifying the appropriate comparator to be used, and to ensuring that the economic evaluation estimates cost-effectiveness rather than cost-efficacy. Ideally, economic evaluations of medicines should be based on real-patient data. In the absence of such data, evaluations need to incorporate the best evidence available by employing approaches such as the Delphi technique. Evaluations based on this approach should state the limitations, and explore the impact of the associated uncertainty in the results.
Evaluation of arterial propagation velocity based on the automated analysis of the Pulse Wave Shape
NASA Astrophysics Data System (ADS)
Clara, F. M.; Scandurra, A. G.; Meschino, G. J.; Passoni, L. I.
2011-12-01
This paper proposes the automatic estimation of the arterial propagation velocity from raw pulse wave records measured in the region of the radial artery. A fully automatic process is proposed to select and analyze typical pulse cycles from the raw data. An adaptive neuro-fuzzy inference system, together with a heuristic search, is used to find a functional approximation of the pulse wave. The estimation of the propagation velocity is carried out via the analysis of the functional approximation obtained with the fuzzy model. Analysis of the pulse wave records with the proposed methodology showed small differences compared with the method used so far, which is based on strong interaction with the user. To evaluate the proposed methodology, we estimated the propagation velocity in a population of healthy men across a wide range of ages. These studies found that propagation velocity increases linearly with age and shows considerable dispersion among healthy individuals. We conclude that this process could be used to indirectly evaluate the propagation velocity of the aorta, which is related to physiological age in healthy individuals and to life expectancy in cardiovascular patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler
Battery life estimation is one of the key inputs required for hybrid applications across all GM Hybrid/EV/EREV/PHEV programs. For each hybrid vehicle program, GM has instituted multi-parameter Design of Experiments generating test data at the cell level and, on a reduced basis, at the pack level. Based on experience, generating test data at the pack level has been found to be very expensive, resource intensive and sometimes less reliable. The proposed collaborative project focuses on a methodology to estimate battery life based on cell degradation data combined with pack thermal modeling. NREL has previously developed cell-level battery aging models and pack-level thermal/electrical network models, though these models are currently not integrated. When coupled together, the models are expected to describe the pack-level thermal and aging response of individual cells. GM and NREL will use data collected for GM's Bas+ battery system to evaluate the proposed methodology and assess to what degree these models can replace pack-level aging experiments in the future.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as reliability prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
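A minimal parts-count sketch in the spirit of standards-based prediction (MIL-HDBK-217-style factors): part failure rates, scaled and summed, give a system failure rate and an exponential reliability estimate. All rates and factors below are placeholders, not values from any handbook.

```python
import math

# Parts-count reliability prediction: lambda_sys = sum(n_i * lambda_i * pi_i...),
# and R(t) = exp(-lambda_sys * t) under the exponential (useful-life) model.
# Base rates are placeholders, in failures per million hours.
parts = [
    # (quantity, base failure rate, pi_environment, pi_quality)
    (12, 0.052, 4.0, 1.0),   # ceramic capacitors
    (30, 0.010, 4.0, 1.0),   # film resistors
    (4,  0.340, 4.0, 2.0),   # power MOSFETs
]
lam_sys = sum(n * lam * pe * pq for n, lam, pe, pq in parts)  # per 1e6 h
mtbf_hours = 1e6 / lam_sys
r_mission = math.exp(-lam_sys * 1e-6 * 1000.0)  # reliability over a 1000-h mission
```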
Risk assessment in the 21st century: roadmap and matrix.
Embry, Michelle R; Bachman, Ammie N; Bell, David R; Boobis, Alan R; Cohen, Samuel M; Dellarco, Michael; Dewhurst, Ian C; Doerrer, Nancy G; Hines, Ronald N; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard D; Rowlands, J Craig; Tanir, Jennifer Y; Wolf, Douglas C; Doe, John E
2014-08-01
The RISK21 integrated evaluation strategy is a problem formulation-based, exposure-driven risk assessment roadmap that takes advantage of existing information to graphically represent the intersection of exposure and toxicity data on a highly visual matrix. This paper describes in detail the process for using the roadmap and matrix. The purpose of this methodology is to optimize the use of prior information and testing resources (animals, time, facilities, and personnel) to efficiently and transparently reach a risk and/or safety determination. Based on the particular problem, exposure and toxicity data should have sufficient precision to make such a decision. Estimates of exposure and toxicity, bounded by variability and/or uncertainty, are plotted on the X- and Y-axes of the RISK21 matrix, respectively. The resulting intersection is a highly visual representation of estimated risk. Decisions can then be made to increase precision in the exposure or toxicity estimates or declare that the available information is sufficient. RISK21 represents a step forward in the goal to introduce new methodologies into 21st century risk assessment. Indeed, because of its transparent and visual process, RISK21 has the potential to widen the scope of risk communication beyond those with technical expertise.
Cost Performance Estimating Relationships for Hybrid Electric Vehicle Components
2003-07-31
Permanent magnet motors are more likely to be used as generators, while AC induction motors are more efficiently used as motors. Inverters/controllers can... than permanent magnet motors. Switched reluctance motors are also used on hybrid electric vehicles, but are not used as widely as either AC... induction or permanent magnet motors, and are not analyzed here. Methodology: The motor estimates are based on power, with kilowatts being the unit of...
Strength Property Estimation for Dry, Cohesionless Soils Using the Military Cone Penetrometer
1992-05-01
by Meier and Baladi (1988). Their methodology is based on a theoretical formulation of the CI problem using cavity expansion theory to relate cone... Baladi (1981), incorporates three mechanical properties (cohesion, friction angle, and shear modulus) and the total unit weight. Obviously, these four... unknown soil properties cannot be back-calculated directly from a single CI measurement. To ameliorate this problem, Meier and Baladi estimate the total...
Klett, T.R.
2011-01-01
The U.S. Geological Survey, using a geology-based assessment methodology, estimated mean volumes of technically recoverable, conventional, undiscovered petroleum resources at 218 million barrels of crude oil, 4.1 trillion cubic feet of natural gas, and 94 million barrels of natural gas liquids for the Azov-Kuban Basin Province as part of a program to estimate petroleum resources for priority basins throughout the world.
[The reference pricing of pharmaceuticals in European countries].
Gildeyeva, G N; Starykh, D A
2013-01-01
The article analyzes various approaches to pharmaceutical price estimation within existing systems of pharmaceutical coverage. Pricing is considered in relation to existing coverage systems based on the principles of insurance and co-financing. A detailed analysis is presented of the methodology for setting reference prices of pharmaceuticals in different European countries. The experience of European countries in evaluating the interchangeability of pharmaceuticals is also discussed.
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of experimentally determined, mass-normalized mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA), combined with a reasonable Finite Element (FE) representation of the test structure, and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical and practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations involving mass change.
Resource management and nonmarket valuation research
Douglas, A.J.; Taylor, J.G.
1999-01-01
Survey-based nonmarket valuation research is often regarded as economics research. However, resource economists need to be aware of, and acknowledge, the manifold information sources they employ in order to enhance the policy credibility of their studies. Communication between resource economists and practitioners of allied disciplines, including chemistry, civil engineering, sociology, and anthropology, is often neglected. Recent resource allocation policy debates have given rise to an extensive discussion of methodological issues that narrow the scope of the subject. The present paper provides a format for the presentation of nonmarket valuation research results that emphasizes the manifold links between economics studies that employ different methodologies to estimate nonmarket resource values. A more robust emphasis on the interlocking features of the different approaches for estimating nonmarket benefits should foster appreciation of the transdisciplinary aspects of the subject.
Enomoto, Catherine B.; Trippi, Michael H.; Higley, Debra K.; Rouse, William A.; Dulong, Frank T.; Klett, Timothy R.; Mercier, Tracey J.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Marra, Kristen R.; Le, Phuong A.; Woodall, Cheryl A.; Schenk, Christopher J.
2018-04-19
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 10.7 trillion cubic feet of natural gas in Upper Devonian shales of the Appalachian Basin Province.
Development and Evaluation of an ADME-informed High Throughput Exposure Estimation Tool
EPA’s Chemical Safety for Sustainability (CSS) research program has been developing new ways to prioritize chemicals used in consumer products and articles. Using a risk-based methodology to account for both toxicity and exposure offers a comprehensive and systematic approach...
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Klett, Timothy R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Pitman, Janet K.
2016-05-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean conventional resources of 68 million barrels of oil and 964 billion cubic feet of gas in the Cooper and Eromanga Basins of Australia.
Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Finn, Thomas M.
2016-12-09
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean of 20 trillion cubic feet of undiscovered, technically recoverable coalbed gas resource in the Central and South Sumatra Basin Provinces of Indonesia.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.; Pollastro, Richard M.; Weaver, Jean N.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 19 billion barrels of oil and 83 trillion cubic feet of undiscovered natural gas resources in 10 geologic provinces of Mexico, Guatemala, and Belize.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Kirschbaum, Mark A.; Pitman, Janet K.; Pollastro, Richard M.; Tennyson, Marilyn E.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 126 billion barrels of oil and 679 trillion cubic feet of undiscovered natural gas in 31 geologic provinces of South America and the Caribbean.
Assessment of undiscovered conventional oil and gas resources of North Africa, 2012
Schenk, Christopher J.; Klett, Timothy R.; Whidden, Katherine J.; Kirschbaum, Mark A.; Charpentier, Ronald R.; Cook, Troy A.; Brownfield, Michael E.; Pitman, Janet K.
2013-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 19 billion barrels of technically recoverable undiscovered conventional oil and 370 trillion cubic feet of undiscovered conventional natural gas resources in 8 geologic provinces of North Africa.
Assessment of continuous oil and gas resources in the San Jorge Basin Province, Argentina, 2017
Schenk, Christopher J.; Mercier, Tracey J.; Hawkins, Sarah J.; Tennyson, Marilyn E.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-07-18
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 78 million barrels of oil and 8.9 trillion cubic feet of gas in the San Jorge Basin Province, Argentina.
Commercial Demand Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
Assessment of shale-oil resources of the Central Sumatra Basin, Indonesia, 2015
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Leathers-Miller, Heidi M.
2015-11-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 459 million barrels of shale oil, 275 billion cubic feet of associated gas, and 23 million barrels of natural gas liquids in the Central Sumatra Basin, Indonesia.
Borque, Paloma; Luke, Edward; Kollias, Pavlos
2016-05-27
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced. This also suggests that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectrum broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
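One common way to obtain ε from a Doppler velocity time series, consistent with the power-spectra analysis mentioned above, is to read the inertial-subrange level of the velocity spectrum. The hedged sketch below assumes a Kolmogorov constant, frequency band, and advection speed rather than the authors' settings, and uses a synthetic series in place of real cloud-base velocities.

```python
import numpy as np
from scipy import signal

def dissipation_rate(v, fs, wind_speed, f_band=(0.1, 1.0), ck=0.55):
    """Estimate eddy dissipation rate from a Doppler velocity time series,
    assuming an inertial subrange S(f) = ck * eps^(2/3) * (U/2pi)^(2/3) * f^(-5/3)
    under Taylor's frozen-turbulence hypothesis (ck ~ 0.55, longitudinal)."""
    f, S = signal.welch(v, fs=fs, nperseg=min(len(v), 1024))
    sel = (f >= f_band[0]) & (f <= f_band[1])
    level = np.mean(S[sel] * f[sel] ** (5 / 3))          # compensated spectrum
    return (level / (ck * (wind_speed / (2 * np.pi)) ** (2 / 3))) ** 1.5

# Synthetic red-noise-like series (illustration only, not a true -5/3 signal)
rng = np.random.default_rng(2)
v = np.cumsum(rng.normal(0.0, 0.05, 4096))
eps = dissipation_rate(v, fs=2.0, wind_speed=8.0)        # m^2 s^-3
```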
Pattern recognition of satellite cloud imagery for improved weather prediction
NASA Technical Reports Server (NTRS)
Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.
1986-01-01
The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.
Harmonised pesticide risk trend indicator for food (HAPERITIF): The methodological approach.
Calliera, Maura; Finizio, Antonio; Azimonti, Giovanna; Benfenati, Emilio; Trevisan, Marco
2006-12-01
To provide a harmonised European approach to pesticide risk indicators, the Sixth EU Framework Programme recently financed the HAIR (HArmonised environmental Indicators for pesticide Risk) project. This paper illustrates the methodology underlying a new indicator, HAPERITIF (HArmonised PEsticide RIsk Trend Indicator for Food), developed in HAIR for tracking acute and chronic pesticide risk trends for consumers. The acute indicator, HAPERITIF(ac), is based on the ratio between an estimated short-term intake (ESTI), calculated as recommended by the World Health Organisation (WHO), and the acute reference dose (ARfD); the chronic indicator, HAPERITIF(chr), is based on the ratio between an estimated daily intake (EDI) and the admissible daily intake (ADI). HAPERITIF can be applied at different levels of aggregation, each of which gives information for proper risk management of pesticides to reduce the risk associated with food consumption. An example application using realistic scenarios of pesticide treatments on a potato crop in central-northern Italy is reported to illustrate the different steps of HAPERITIF. Copyright 2006 Society of Chemical Industry.
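The two ratios on which the indicator rests reduce to a couple of lines; the numbers below are invented placeholders, not values from the paper.

```python
# Intakes and limits in mg per kg body weight per day (invented values)
esti = 0.012   # estimated short-term intake, per the WHO ESTI calculation
arfd = 0.050   # acute reference dose
edi  = 0.0031  # estimated daily intake
adi  = 0.0200  # admissible daily intake

haperitif_ac  = esti / arfd   # acute indicator: values near/above 1 flag concern
haperitif_chr = edi / adi     # chronic indicator, read the same way
```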
Evaluating the risk of industrial espionage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bott, T.F.
1998-12-31
A methodology for estimating the relative probabilities of different compromise paths for protected information by insider and visitor intelligence collectors has been developed, based on an event-tree analysis of the intelligence collection operation. The analyst identifies target information and the ultimate users who might attempt to gain that information. The analyst then uses an event tree to develop a set of compromise paths. Probability models are developed for each of the compromise paths that use parameters based on expert judgment or historical data on security violations. The resulting probability estimates indicate the relative likelihood of different compromise paths and provide an input for security resource allocation. Application of the methodology is demonstrated using a national security example. A set of compromise paths and probability models specifically addressing this example espionage problem are developed. The probability models for hard-copy information compromise paths are quantified as an illustration of the results, using parametric values representative of historical data available in secure facilities, supplemented where necessary by expert judgment.
MEGASTAR: The Meaning of Energy Growth: An Assessment of Systems, Technologies, and Requirements
NASA Technical Reports Server (NTRS)
1974-01-01
A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach that includes the methodology of technology assessment is used to examine three energy scenarios--the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case and a MEGASTAR-generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion and distribution systems for the postulated end uses for the three scenarios and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by the particular scenario. The total requirements and the energy subsystems for each scenario are assessed for their primary impacts in the areas of society, the environment, technology and the economy.
CO2 storage capacity estimation: Methodology and gaps
Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.
2007-01-01
Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales, in decreasing order of size and increasing order of resolution: country, basin, regional, local and site-specific. Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness and CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of lack of data, particularly for coal beds and deep saline aquifers, lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and lack of knowledge about the interplay between various trapping mechanisms at work in deep saline aquifers. © 2007 Elsevier Ltd. All rights reserved.
Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.
The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS, and also as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort, (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.
The economic burden of physical inactivity: a systematic review and critical appraisal.
Ding, Ding; Kolbe-Alexander, Tracy; Nguyen, Binh; Katzmarzyk, Peter T; Pratt, Michael; Lawson, Kenny D
2017-10-01
To summarise the literature on the economic burden of physical inactivity in populations, with emphases on appraising the methodologies and providing recommendations for future studies. Systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PROSPERO registration number CRD42016047705). Electronic databases for peer-reviewed and grey literature were systematically searched, followed by reference searching and consultation with experts. Studies that examined the economic consequences of physical inactivity in a population/population-based sample, with clearly stated methodologies and at least an abstract/summary written in English. Of the 40 eligible studies, 27 focused on direct healthcare costs only, 13 also estimated indirect costs and one study additionally estimated household costs. For direct costs, 23 studies used a population attributable fraction (PAF) approach with estimated healthcare costs attributable to physical inactivity ranging from 0.3% to 4.6% of national healthcare expenditure; 17 studies used an econometric approach, which tended to yield higher estimates than those using a PAF approach. For indirect costs, 10 studies used a human capital approach, two used a friction cost approach and one used a value of a statistical life approach. Overall, estimates varied substantially, even within the same country, depending on analytical approaches, time frame and other methodological considerations. Estimating the economic burden of physical inactivity is an area of increasing importance that requires further development. There is a marked lack of consistency in methodological approaches and transparency of reporting. Future studies could benefit from cross-disciplinary collaborations involving economists and physical activity experts, taking a societal perspective and following best practices in conducting and reporting analysis, including accounting for potential confounding, reverse causality and comorbidity, applying discounting and sensitivity analysis, and reporting assumptions, limitations and justifications for approaches taken. We have adapted the Consolidated Health Economic Evaluation Reporting Standards checklist as a guide for future estimates of the economic burden of physical inactivity and other risk factors. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
New Methodology for Natural Gas Production Estimates
2010-01-01
A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.
Revised estimates for direct-effect recreational jobs in the interior Columbia River basin.
Lisa K. Crone; Richard W. Haynes
1999-01-01
This paper reviews the methodology used to derive the original estimates for direct employment associated with recreation on Federal lands in the interior Columbia River basin (the basin), and details the changes in methodology and data used to derive new estimates. The new analysis resulted in an estimate of 77,655 direct-effect jobs associated with recreational...
Comparison of flavonoid intake assessment methods.
Ivey, Kerry L; Croft, Kevin; Prince, Richard L; Hodgson, Jonathan M
2016-09-14
Flavonoids are a diverse group of polyphenolic compounds found in high concentrations in many plant foods and beverages. High flavonoid intake has been associated with reduced risk of chronic disease. To date, population-based studies have used the United States Department of Agriculture (USDA) food content database to determine habitual flavonoid intake. More recently, a new flavonoid food content database, Phenol-Explorer (PE), has been developed. However, the level of agreement between the two databases is yet to be explored. The aim of this study was to compare the methods used to create each database and to explore the level of agreement between flavonoid intake estimates derived from USDA and PE data. The study population included 1063 randomly selected women aged over 75 years. Two separate intake estimates were determined using food composition data from the USDA and PE databases. There were many similarities in the methods used to create each database; however, several methodological differences manifest themselves in differences in flavonoid intake estimates between the two databases. Despite differences in net estimates, there was a strong level of agreement between total-flavonoid, flavanol, flavanone and anthocyanidin intake estimates derived from each database. Intake estimates for flavanol monomers showed greater agreement than those for flavanol polymers. The level of agreement between the two databases was weakest for the flavonol and flavone intake estimates. In this population, the application of USDA and PE source data yielded highly correlated intake estimates for total flavonoids, flavanols, flavanones and anthocyanidins. For these subclasses, the USDA and PE databases may be used interchangeably in epidemiological investigations. There was poorer correlation between intake estimates for flavonols and flavones, due to differences in USDA and PE methodologies. Individual flavonoid compound groups that comprise flavonoid subclasses had varying levels of agreement. As such, when determining the appropriate database for calculating flavonoid intake variables, it is important to consider the methodologies underpinning database creation and which foods are important contributors to dietary intake in the population of interest.
NASA Astrophysics Data System (ADS)
McJannet, D. L.; Cook, F. J.; McGloin, R. P.; McGowan, H. A.; Burn, S.
2011-05-01
The use of scintillometers to determine sensible and latent heat flux is becoming increasingly common because of their ability to quantify convective fluxes over distances of hundreds of meters to several kilometers. The majority of investigations using scintillometry have focused on processes above land surfaces, but here we propose a new methodology for obtaining sensible and latent heat fluxes from a scintillometer deployed over open water. This methodology has been tested by comparison with eddy covariance measurements and through comparison with alternative scintillometer calculation approaches that are commonly used in the literature. The methodology is based on linearization of the Bowen ratio, which is a common assumption in models such as Penman's model and its derivatives. Comparison of latent heat flux estimates from the eddy covariance system and the scintillometer showed excellent agreement across a range of weather conditions and flux rates, giving a high level of confidence in scintillometry-derived latent heat fluxes. The proposed approach produced better estimates than other scintillometry calculation methods because of the reliance of alternative methods on measurements of water temperature or water body heat storage, which are both notoriously hard to quantify. The proposed methodology requires less instrumentation than alternative scintillometer calculation approaches, and the spatial scales of required measurements are arguably more compatible. In addition to scintillometer measurements of the structure parameter of the refractive index of air, the only measurements required are atmospheric pressure, air temperature, humidity, and wind speed at one height over the water body.
Measurement-based auralization methodology for the assessment of noise mitigation measures
NASA Astrophysics Data System (ADS)
Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick
2016-09-01
The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditive preview of noise abatement measures for road traffic noise, based on the direction dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, indicating that listeners had great difficulty in differentiating between a posteriori recordings and auralized samples, which shows the validity of the followed approaches.
NASA Technical Reports Server (NTRS)
Ebeling, Charles; Beasley, Kenneth D.
1992-01-01
The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods from sampled surface flow velocity and existing low-flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology unfolds a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard aircraft or satellites.
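The entropic mean-to-maximum velocity relation referred to here is Chiu's Φ(M) = e^M/(e^M − 1) − 1/M. A small sketch calibrates the entropy parameter M from gauged (mean, maximum) velocity pairs, then turns a surface (maximum) velocity and a flow area into a discharge. All numbers are invented.

```python
import numpy as np
from scipy import optimize

def phi(M):
    """Chiu's entropy relation: u_mean / u_max = exp(M)/(exp(M)-1) - 1/M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

# Calibrate M from gauged (u_mean, u_max) pairs at a site (invented, in m/s)
pairs = [(1.10, 1.65), (0.82, 1.24), (1.47, 2.20)]
ratio = np.mean([um / ux for um, ux in pairs])
M = optimize.brentq(lambda m: phi(m) - ratio, 0.01, 20.0)

# Estimate discharge from a surface (maximum) velocity alone, given a
# flow area A, e.g. from the entropy-derived depth distribution
u_max, A = 1.9, 42.0          # m/s, m^2
Q = phi(M) * u_max * A        # discharge, m^3/s
```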
Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service
NASA Astrophysics Data System (ADS)
Rai, Sudhendu
This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems that exhibit high variety and variability, on a large distributed scale. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops, generating over $250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement the innovative concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, monitoring and deployment of financial and operational metrics for estimating return on investment and continually renewing the offering.
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory, in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
Martins, Rui; Oliveira, Paulo Eduardo; Schmitt, Aurore
2012-06-10
We discuss here the estimation of age at death from two indicators (the pubic symphysis and the sacro-pelvic surface of the ilium) based on four different osteological series from Portugal, Great Britain, South Africa and the USA (European origin). These samples and the scoring system of the two indicators were used by Schmitt et al. (2002), applying the methodology proposed by Lucy et al. (1996). In the present work, the same data were processed using a modification of the empirical method proposed by Lucy et al. (2002). The various probability distributions are estimated from training data using kernel density procedures and Jackknife methodology. Bayes's theorem is then used to produce the posterior distribution, from which point and interval estimates may be made. This statistical approach reduces the bias of the estimates to less than 70% of that obtained with the initial method, and to 52% when the sex of the individual is known, and it produces an age estimate for all individuals, improving age-at-death assessment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
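A hedged sketch of the kernel-density Bayesian step, for one indicator only and a uniform prior; the paper additionally combines two indicators, conditions on sex, and uses Jackknife resampling, all omitted here. Stages and training ages are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
ages = np.arange(18.0, 90.0, 0.5)
d_age = ages[1] - ages[0]

# Toy training ages observed within each indicator stage
train = {1: rng.normal(28, 6, 60), 2: rng.normal(45, 10, 60), 3: rng.normal(65, 11, 60)}
likelihood = {s: gaussian_kde(a)(ages) for s, a in train.items()}  # ~ p(age | stage)

prior = np.full_like(ages, 1.0 / len(ages))   # uniform over the grid; with an
                                              # informative prior this would reweight

def posterior(stage):
    post = prior * likelihood[stage]          # Bayes' theorem on the age grid
    return post / (post.sum() * d_age)        # normalize to a density

p = posterior(2)
point = (ages * p).sum() * d_age                                  # posterior mean
lo, hi = ages[np.searchsorted(np.cumsum(p) * d_age, [0.025, 0.975])]  # 95% interval
```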
Radiance Assimilation Shows Promise for Snowpack Characterization: A 1-D Case Study
NASA Technical Reports Server (NTRS)
Durand, Michael; Kim, Edward; Margulis, Steve
2008-01-01
We demonstrate an ensemble-based radiometric data assimilation (DA) methodology for estimating snow depth and snow grain size using ground-based passive microwave (PM) observations at 18.7 and 36.5 GHz collected during the NASA CLPX-1, March 2003, Colorado, USA. A land surface model was used to develop a prior estimate of the snowpack states, and a radiative transfer model was used to relate the modeled states to the observations. Snow depth bias was -53.3 cm prior to the assimilation, and -7.3 cm after the assimilation. Snow depth estimated by a non-DA-based retrieval algorithm using the same PM data had a bias of -18.3 cm. The sensitivity of the assimilation scheme to the grain size uncertainty was evaluated; over the range of grain size uncertainty tested, the posterior snow depth estimate bias ranges from -2.99 cm to -9.85 cm, which is uniformly better than both the prior and retrieval estimates. This study demonstrates the potential applicability of radiometric DA at larger scales.
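A toy ensemble update in the spirit of the scheme described: a two-state vector (depth, grain size) is mapped to two brightness temperatures by a linear stand-in for the radiative transfer model, and the stochastic EnKF formula moves the biased prior toward the observations. Every number, and the linear "RT model" itself, is invented.

```python
import numpy as np

rng = np.random.default_rng(4)
H = np.array([[-0.5, -20.0],     # dTb/d(depth), dTb/d(grain size) at 18.7 GHz
              [-1.2, -45.0]])    # same at 36.5 GHz (invented sensitivities)
def rt(x):                       # linear stand-in for the radiative transfer model
    return np.array([260.0, 255.0]) + H @ x

N = 100
prior = np.column_stack([rng.normal(60.0, 15.0, N),   # snow depth (cm), biased low
                         rng.normal(0.8, 0.2, N)])    # grain radius (mm)
truth = np.array([110.0, 1.0])
R = np.diag([2.0 ** 2, 2.0 ** 2])                     # Tb error covariance (K^2)
y = rt(truth) + rng.multivariate_normal(np.zeros(2), R)

# Stochastic EnKF: K = Cxy (Cyy + R)^-1, x_a = x_b + K (y + noise - H(x_b))
Y = np.array([rt(x) for x in prior])
A, B = prior - prior.mean(0), Y - Y.mean(0)
K = (A.T @ B / (N - 1)) @ np.linalg.inv(B.T @ B / (N - 1) + R)
posterior = prior + (y + rng.multivariate_normal(np.zeros(2), R, N) - Y) @ K.T
# posterior.mean(0) should sit much closer to `truth` than prior.mean(0)
```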
NASA Astrophysics Data System (ADS)
Sirirojvisuth, Apinut
In complex aerospace system design, making an effective design decision requires multidisciplinary knowledge from both product and process perspectives. Integrating manufacturing considerations into the design process is most valuable during the early design stages, since designers have more freedom to integrate new ideas when changes are relatively inexpensive in terms of time and effort. Several metrics related to manufacturability are cost, time, and manufacturing readiness level (MRL). Yet there is a lack of structured methodology that quantifies how changes in design decisions impact these metrics. As a result, a new set of integrated cost analysis tools is proposed in this study to quantify the impacts. Equally important is the capability to integrate this new cost tool into existing design methodologies without sacrificing the agility and flexibility required during the early design phases. To demonstrate the applicability of this concept, a ModelCenter environment is used to develop a software architecture that represents the Integrated Product and Process Development (IPPD) methodology used in several aerospace system designs. The environment seamlessly integrates product and process analysis tools and makes an effective transition from one design phase to the next while retaining knowledge gained a priori. Then, an advanced cost estimating tool called the Hybrid Lifecycle Cost Estimating Tool (HLCET), a hybrid combination of weight-, process-, and activity-based estimating techniques, is integrated with the design framework. A new weight-based lifecycle cost model is created based on Tailored Cost Model (TCM) equations [3]. This lifecycle cost tool estimates the program cost based on vehicle component weights and programmatic assumptions. Additional high-fidelity cost tools such as process-based and activity-based cost analysis methods can be used to modify the baseline TCM result as more knowledge is accumulated over design iterations. Therefore, with this concept, the additional manufacturing knowledge can be used to identify a more accurate lifecycle cost and facilitate higher-fidelity tradeoffs during conceptual and preliminary design. The Advanced Composite Cost Estimating Model (ACCEM) is employed as a process-based cost component to replace the original TCM estimate of composite part production cost. The reason for the replacement is that TCM estimates production costs from part weights as a result of subtractive manufacturing of metallic origin, such as casting, forging, and machining processes. A complexity factor can sometimes be adjusted to reflect different types of metal and machine settings. The TCM assumption, however, gives erroneous results when applied to additive processes like those of composite manufacturing. Another innovative aspect of this research is the introduction of a work measurement technique called the Maynard Operation Sequence Technique (MOST) to be used, similarly to the Activity-Based Costing (ABC) approach, to estimate the manufacturing time of a part by breaking down the operations that occur during its production. ABC allows a realistic determination of the cost incurred in each activity, as opposed to the traditional method of estimating time by analogy or using response surface equations from historical process data. The MOST concept provides a tailored study of an individual process typically required for a new, innovative design. Nevertheless, the MOST idea has some challenges, one of which is its requirement to build a new process from the ground up.
The process development requires a Subject Matter Expert (SME) in the manufacturing method of the particular design. The SME must also have a comprehensive understanding of the MOST system so that the correct parameters are chosen. In practice, these knowledge requirements may demand people from outside the design discipline and prior training in MOST. To relieve this constraint, the study includes an entirely new sub-system architecture that comprises 1) a knowledge-based system to provide the required knowledge during process selection; and 2) a new user interface to guide parameter selection when building the process using MOST. Also included in this study is a demonstration of how HLCET and its constituents can be integrated with the Georgia Tech Integrated Product and Process Development (IPPD) methodology. The applicability of this work is shown through a complex aerospace design example to gain insights into how manufacturing knowledge helps make better design decisions during the early stages. The setup process is explained, with an example of its utility demonstrated in a hypothetical fighter aircraft wing redesign. An evaluation of the system's effectiveness against existing methodologies concludes the thesis.
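As a loose illustration of the hybrid estimating idea described above, the sketch below computes a weight-based baseline for each component and substitutes a process-based estimate where manufacturing detail exists. All names, coefficients, and cost equations are invented placeholders, not the actual TCM or ACCEM relations.

```python
# Hybrid lifecycle cost sketch: weight-based baseline with process-based
# overrides, loosely following the HLCET concept. All coefficients and
# equations are illustrative placeholders, not the TCM/ACCEM values.

def weight_based_cost(weight_kg, a=1200.0, b=0.85):
    """Parametric (CER-style) estimate: cost = a * weight^b (assumed form)."""
    return a * weight_kg ** b

def process_based_cost(labor_hours, rate_per_hour=95.0, material_cost=0.0):
    """Process-based estimate built from measured or MOST-derived times."""
    return labor_hours * rate_per_hour + material_cost

components = {
    "fuselage (metallic)": {"weight_kg": 900.0},
    "wing skin (composite)": {"weight_kg": 250.0,
                              "labor_hours": 1800.0,
                              "material_cost": 60000.0},
}

total = 0.0
for name, c in components.items():
    if "labor_hours" in c:  # composite part: replace the weight-based result
        cost = process_based_cost(c["labor_hours"], material_cost=c["material_cost"])
    else:                   # metallic part: keep the weight-based baseline
        cost = weight_based_cost(c["weight_kg"])
    total += cost
    print(f"{name}: ${cost:,.0f}")
print(f"total: ${total:,.0f}")
```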
NASA Astrophysics Data System (ADS)
Pellarin, Thierry; Brocca, Luca; Crow, Wade; Kerr, Yann; Massari, Christian; Román-Cascón, Carlos; Fernández, Diego
2017-04-01
Recent studies have demonstrated the usefulness of soil moisture retrieved from satellite for improving rainfall estimations of satellite based precipitation products (SBPP). The real-time version of these products are known to be biased from the real precipitation observed at the ground. Therefore, the information contained in soil moisture can be used to correct the inaccuracy and uncertainty of these products, since the value and behavior of this soil variable preserve the information of a rain event even for several days. In this work, we take advantage of the soil moisture data from the Soil Moisture and Ocean Salinity (SMOS) satellite, which provides information with a quite appropriate temporal and spatial resolution for correcting rainfall events. Specifically, we test and compare the ability of three different methodologies for this aim: 1) SM2RAIN, which directly relate changes in soil moisture to rainfall quantities; 2) The LMAA methodology, which is based on the assimilation of soil moisture in two models of different complexity (see EGU2017-5324 in this same session); 3) The SMART method, based on the assimilation of soil moisture in a simple hydrological model with a different assimilation/modelling technique. The results are tested for 6 years over 10 sites around the world with different features (land surface, rainfall climatology, orography complexity, etc.). These preliminary and promising results are shown here for the first time to the scientific community, as also the observed limitations of the different methodologies. Specific remarks on the technical configurations, filtering/smoothing of SMOS soil moisture or re-scaling techniques are also provided from the results of different sensitivity experiments.
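For readers unfamiliar with the "bottom-up" idea behind SM2RAIN, the following is a minimal sketch of inverting a soil-water balance for rainfall; the parameter values and the drainage term are illustrative assumptions, not calibrated SMOS settings.

```python
import numpy as np

# Simplified inversion in the spirit of SM2RAIN: rainfall is inferred from
# increments of relative soil moisture s(t) via the water balance
# Z * ds/dt = p(t) - g(t), with an assumed drainage term g(t) = a * s**b.

def sm2rain_like(s, dt_hours, Z=80.0, a=2.0, b=3.0):
    """s: relative saturation series in [0, 1]; returns rainfall (mm/step)."""
    ds_dt = np.gradient(s, dt_hours)
    s_mid = 0.5 * (s + np.roll(s, 1)); s_mid[0] = s[0]
    p = Z * ds_dt + a * s_mid ** b            # water-balance inversion
    return np.clip(p, 0.0, None) * dt_hours   # rainfall cannot be negative

s = np.array([0.30, 0.31, 0.45, 0.52, 0.50, 0.47, 0.46])
print(sm2rain_like(s, dt_hours=24.0))
```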
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
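Element (1) builds on standard linear filtering. For orientation, a minimal Kalman filter on a toy constant-velocity system is sketched below; the paper's rotorcraft models and smoothing pass are considerably richer.

```python
import numpy as np

# Minimal linear Kalman filter: state estimation from error-corrupted data.

def kalman_filter(y, F, H, Q, R, x0, P0):
    x, P = x0, P0
    estimates = []
    for yk in y:
        x = F @ x                          # predict state
        P = F @ P @ F.T + Q                # predict covariance
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (yk - H @ x)           # update with measurement
        P = (np.eye(len(x0)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                 # position-only sensor
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])
rng = np.random.default_rng(0)
truth = np.array([[0.5 * dt * k, 0.5] for k in range(100)])
y = truth[:, :1] + rng.normal(0, 0.05 ** 0.5, (100, 1))
xhat = kalman_filter(y, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
print(xhat[-1])  # estimated [position, velocity]
```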
Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM
NASA Astrophysics Data System (ADS)
Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz
2018-03-01
The article presents cost of equity estimation for capital groups from the fuel and energy sector, listed at the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of that cost depending on the financing structure of the entity. The objective formulated in this manner determined the article's structure. It focuses on substantive analyses related to the nature of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, the cost estimation was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, a sensitivity analysis of that cost was performed depending on the structure of financing the company's operations.
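The core CAPM relation, together with the Hamada-type beta relevering commonly used for financing-structure sensitivity analyses of this kind, can be sketched as follows; the numbers are illustrative, not the article's Warsaw Stock Exchange inputs.

```python
# Cost of equity under CAPM: r_e = r_f + beta * (r_m - r_f).

def capm_cost_of_equity(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

def levered_beta(beta_unlevered, debt_to_equity, tax_rate):
    """Hamada relation: sensitivity of beta to the financing structure."""
    return beta_unlevered * (1.0 + (1.0 - tax_rate) * debt_to_equity)

beta = levered_beta(beta_unlevered=0.9, debt_to_equity=0.6, tax_rate=0.19)
r_e = capm_cost_of_equity(risk_free=0.03, beta=beta, market_return=0.08)
print(f"levered beta: {beta:.3f}, cost of equity: {r_e:.2%}")
```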
Holt, James B.; Zhang, Xingyou; Lu, Hua; Shah, Snehal N.; Dooley, Daniel P.; Matthews, Kevin A.; Croft, Janet B.
2017-01-01
Introduction Local health authorities need small-area estimates for prevalence of chronic diseases and health behaviors for multiple purposes. We generated city-level and census-tract–level prevalence estimates of 27 measures for the 500 largest US cities. Methods To validate the methodology, we constructed multilevel logistic regressions to predict 10 selected health indicators among adults aged 18 years or older by using 2013 Behavioral Risk Factor Surveillance System (BRFSS) data; we applied their predicted probabilities to census population data to generate city-level, neighborhood-level, and zip-code–level estimates for the city of Boston, Massachusetts. Results By comparing the predicted estimates with their corresponding direct estimates from a locally administered survey (Boston BRFSS 2010 and 2013), we found that our model-based estimates for most of the selected health indicators at the city level were close to the direct estimates from the local survey. We also found strong correlation between the model-based estimates and direct survey estimates at neighborhood and zip code levels for most indicators. Conclusion Findings suggest that our model-based estimates are reliable and valid at the city level for certain health outcomes. Local health authorities can use the neighborhood-level estimates if high quality local health survey data are not otherwise available. PMID:29049020
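A minimal sketch of the prediction step, assuming a plain (rather than multilevel) logistic model and invented coefficients: predicted probabilities are applied to census population counts and aggregated upward.

```python
import numpy as np

# Model-based small-area estimation sketch: per-tract predicted probabilities
# from a logistic model are weighted by census populations to give local and
# city-level prevalence. Covariates and coefficients are illustrative.

def predicted_prevalence(coef, intercept, tract_covariates, tract_population):
    logit = intercept + tract_covariates @ coef
    p = 1.0 / (1.0 + np.exp(-logit))          # predicted probability per tract
    expected_cases = p * tract_population     # apply to census population
    city_rate = expected_cases.sum() / tract_population.sum()
    return p, city_rate

covariates = np.array([[0.2, 1.0], [0.5, 0.0], [0.8, 1.0]])  # e.g. age, poverty
population = np.array([4000, 6500, 3200])
p, city = predicted_prevalence(np.array([0.9, 0.4]), -2.0, covariates, population)
print(p, f"city-level prevalence: {city:.1%}")
```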
Point estimation following two-stage adaptive threshold enrichment clinical trials.
Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel
2018-05-31
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on control of the type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited from the full population. Stage 1 outcome data are then used to perform an interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate the treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator in which the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Age diagnosis based on incremental lines in dental cementum: a critical reflection.
Grosskopf, Birgit; McGlynn, George
2011-01-01
Age estimation based on the counting of incremental lines in dental cementum is a method frequently used to estimate age at death for humans in bioarchaeology and, increasingly, in forensic anthropology. Assessments of applicability, precision, and method reproducibility continue to be the focus of research in this area, and are occasionally accompanied by significant controversy. Differences in methodological techniques for data collection (e.g., number of sections, magnification factor for counting, or interpretation of "outliers") are presented. Potential influences on method reliability are discussed, especially with regard to applicability in forensic contexts.
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the safety and efficacy of thermal therapies, which would lead to broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, with magnetic resonance imaging (MRI) identified as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, intrinsic characteristics of MRI (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU) instead. Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods), and its application to temperature estimation in a three-layer phantom is also not reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder, in the amount of 2% of the water weight, to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to search for appropriate multilayered temperature estimators. The methodology involves radial-basis-function neural networks (RBFNN) with structure optimized by a multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm-spaced spatial points and eight therapeutic intensities (I_SATA): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm². Models were trained and selected to estimate temperature at only four intensities; then, during the validation phase, the best-fitted models were analyzed on data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were obtained. The majority of them present an average maximum absolute error (MAE) below 0.5 °C. The best-fitted estimator presents an MAE of only 0.4 °C across all 40 operating conditions. This means that the gold-standard maximum error for hyperthermia (0.5 °C) was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one has 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also present reduced computational complexity and thus the possibility of being applied in real time. In summary, a non-invasive temperature estimation model, based on soft-computing techniques, was proposed for a three-layered phantom.
The best-achieved estimator models presented appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the need for a priori knowledge of the temperature behavior: data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
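For orientation, a forward pass of an RBF network of the size reported above (6 inputs, 11 neurons) might look as follows; centers, widths, and weights are random placeholders rather than MOGA-selected values learned from echo-shift and past-temperature data.

```python
import numpy as np

# Forward pass of a Gaussian radial-basis-function network (illustrative).

def rbf_predict(X, centers, widths, weights, bias):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian basis activations
    return phi @ weights + bias

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 6))                   # 5 samples, 6 inputs
centers = rng.normal(size=(11, 6))            # 11 hidden neurons
widths = np.full(11, 1.5)
weights = rng.normal(size=11)
print(rbf_predict(X, centers, widths, weights, bias=37.0))  # °C scale
```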
Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots
NASA Astrophysics Data System (ADS)
Chellal, R.; Cuvillon, L.; Laroche, E.
2017-04-01
This paper presents methodologies for the identification and control of 6-degrees-of-freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently of, and prior to, the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear, coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions; the paper gives an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated in simulation and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
Mechanical modulation method for ultrasensitive phase measurements in photonics biosensing.
Patskovsky, S; Maisonneuve, M; Meunier, M; Kabashin, A V
2008-12-22
A novel polarimetry methodology for phase-sensitive measurements in single-reflection geometry is proposed for applications in optical transduction-based biological sensing. The methodology uses alternating, step-like, chopper-based mechanical phase modulation of the orthogonal s- and p-polarizations of light reflected from the sensing interface, and the extraction of phase information at different harmonics of the modulation. We show that even with a relatively simple experimental arrangement, the methodology provides a phase-measurement resolution as low as 0.007 deg. We also examine the proposed approach in Total Internal Reflection (TIR) and Surface Plasmon Resonance (SPR) geometries. For TIR geometry, the response appears to be strongly dependent on the prism material, with the best values for high-refractive-index Si. The detection limit for Si-based TIR is estimated as 10⁻⁵ in terms of Refractive Index Unit (RIU) change. SPR geometry offers a much stronger phase response due to much sharper phase characteristics. With a detection limit of 3.2×10⁻⁷ RIU, the proposed methodology provides one of the best sensitivities among phase-sensitive SPR devices. Advantages of the proposed method include high sensitivity, simplicity of the experimental setup, and noise immunity as a result of high-stability modulation.
Observation-Corrected Precipitation Estimates in GEOS-5
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Liu, Qing
2014-01-01
Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
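A simplified sketch of the temporal disaggregation step, assuming hourly background fields and daily observations; the handling of days with zero background precipitation is a naive placeholder for the actual GEOS-5 rules.

```python
import numpy as np

# Scale hourly background precipitation so that each day's total matches the
# observed daily product, preserving the background's sub-daily timing.

def disaggregate_daily(obs_daily_mm, background_hourly_mm):
    hourly = background_hourly_mm.reshape(len(obs_daily_mm), 24)
    day_sums = hourly.sum(axis=1)
    corrected = np.empty_like(hourly)
    for d in range(len(obs_daily_mm)):
        if day_sums[d] > 0:
            corrected[d] = hourly[d] * (obs_daily_mm[d] / day_sums[d])
        else:  # no background rain: spread the observation uniformly (assumed)
            corrected[d] = obs_daily_mm[d] / 24.0
    return corrected.ravel()

rng = np.random.default_rng(2)
bg = np.clip(rng.normal(0.2, 0.4, 48), 0, None)   # two days of hourly mm
obs = np.array([5.0, 0.0])
out = disaggregate_daily(obs, bg)
print(out.reshape(2, 24).sum(axis=1))             # matches [5.0, 0.0]
```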
The cost of child health inequalities in Aotearoa New Zealand: a preliminary scoping study.
Mills, Clair; Reid, Papaarangi; Vaithianathan, Rhema
2012-05-28
Health inequalities have been extensively documented, internationally and in New Zealand. The cost of reducing health inequities is often perceived as high; however, recent international studies suggest the cost of "doing nothing" is itself significant. This study aimed to develop a preliminary estimate of the economic cost of health inequities between Māori (indigenous) and non-Māori children in New Zealand. Standard quantitative epidemiological methods and "cost of illness" methodology were employed, within a Kaupapa Māori theoretical framework. Data were obtained from national data collections held by the New Zealand Health Information Service and other health sector agencies. Preliminary estimates suggest child health inequities between Māori and non-Māori in New Zealand are cost-saving to the health sector. However the societal costs are significant. A conservative "base case" scenario estimate is over $NZ62 million per year, while alternative costing methods yield larger costs of nearly $NZ200 million per annum. The total cost estimate is highly sensitive to the costing method used and Value of Statistical Life applied, as the cost of potentially avoidable deaths of Māori children is the major contributor to this estimate. This preliminary study suggests that health sector spending is skewed towards non-Māori children despite evidence of greater Māori need. Persistent child health inequities result in significant societal economic costs. Eliminating child health inequities, particularly in primary care access, could result in significant economic benefits for New Zealand. However, there are conceptual, ethical and methodological challenges in estimating the economic cost of child health inequities. Re-thinking of traditional economic frameworks and development of more appropriate methodologies is required.
Assessment of undiscovered oil and gas resources in the Canning Basin Province, Australia, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-05-31
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 1.3 billion barrels of oil and 34.4 trillion cubic feet of gas in the Canning Basin Province of Australia.
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-02-24
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 4.5 trillion cubic feet of coalbed gas in the Kalahari Basin Province of Botswana, Zambia, and Zimbabwe, Africa.
Assessment of undiscovered continuous gas resources of the Ordos Basin Province, China, 2015
Charpentier, Ronald R.; Klett, Timothy R.; Schenk, Christopher J.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.; Mercier, Tracey J.
2016-01-11
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean resources of 28 trillion cubic feet of tight gas and 5.6 trillion cubic feet of coalbed gas in upper Paleozoic rocks in the Ordos Basin Province, China.
2012-01-01
Using a performance-based geologic assessment methodology, the U.S. Geological Survey estimated a technically recoverable mean volume of 6.1 trillion cubic feet of potential shale gas in the Bombay, Cauvery, and Krishna-Godavari Provinces of India.
Assessment of continuous oil and gas resources in the Pannonian Basin Province, Hungary, 2016
Schenk, Christopher J.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Leathers-Miller, Heidi M.
2017-06-29
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 119 million barrels of oil and 944 billion cubic feet of gas in the Hungarian part of the Pannonian Basin Province.
Community duplicate diet methodology: A new tool for estimating dietary exposure to pesticides
An observational field study was conducted to assess the feasibility of a community duplicate diet collection method; a dietary monitoring procedure that is population-based. The purpose was to establish an alternative procedure to duplicate diet sampling that would be more effi...
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Finn, Thomas M.; Brownfield, Michael E.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers-Miller, Heidi M.; Potter, Christopher J.
2018-02-07
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 2.0 billion barrels of oil and 20.3 trillion cubic feet of gas in the Bohaiwan Basin Province, China.
Use of geographic information systems in rabies vaccination campaigns.
Grisi-Filho, José Henrique de Hildebrand e; Amaku, Marcos; Dias, Ricardo Augusto; Montenegro Netto, Hildebrando; Paranhos, Noemia Tucunduva; Mendes, Maria Cristina Novo Campos; Ferreira Neto, José Soares; Ferreira, Fernando
2008-12-01
To develop a method to assist in the design and assessment of animal rabies control campaigns, a methodology was developed based on geographic information systems to estimate the animal (canine and feline) population and density per census tract and per subregion (known as "Subprefeituras") in the city of São Paulo (Southeastern Brazil) in 2002. The number of vaccination units in a given region was estimated so as to achieve a given proportion of vaccination coverage. A census database was used for the human population, along with estimated dog:inhabitant and cat:inhabitant ratios. Estimated figures were 1,490,500 dogs and 226,954 cats in the city, i.e. an animal population density of 1,138.14 owned animals per km². In the 2002 campaign, 926,462 animals were vaccinated, resulting in a vaccination coverage of 54%. The estimated number of vaccination units needed to reach 70% vaccination coverage, assuming an average of 700 animals vaccinated per unit, was 1,729. These estimates are presented as maps of animal density according to census tracts and "Subprefeituras". The methodology used in the study may be applied in a systematic way to the design and evaluation of rabies vaccination campaigns, enabling the identification of areas of critical vaccination coverage.
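The campaign-sizing arithmetic reduces to a few lines; the population figure and owner ratios below are illustrative placeholders rather than the study's São Paulo values.

```python
import math

# Animal population from human census counts via owner ratios, then the
# number of vaccination posts from target coverage and mean throughput.

humans = 10_435_000                 # city population (illustrative)
dogs = humans / 7.0                 # hypothetical dog:inhabitant ratio of 1:7
cats = humans / 46.0                # hypothetical cat:inhabitant ratio of 1:46
animals = dogs + cats

coverage = 0.70
per_unit = 700                      # animals vaccinated per unit, on average
units = math.ceil(coverage * animals / per_unit)
print(f"{animals:,.0f} animals -> {units:,} vaccination units")
```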
Quantile uncertainty and value-at-risk model risk.
Alexander, Carol; Sarabia, José María
2012-08-01
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
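As context, the quantile estimate whose model risk is at issue can be obtained by historical simulation as below; the article's benchmark-relative adjustment and regulatory capital add-on are not reproduced here.

```python
import numpy as np

# Historical-simulation Value-at-Risk: an empirical quantile of daily P&L.

def value_at_risk(pnl, alpha=0.01):
    """alpha-quantile of the P&L distribution, reported as a positive loss."""
    return -np.quantile(pnl, alpha)

rng = np.random.default_rng(3)
pnl = rng.standard_t(df=4, size=5000) * 1e5      # heavy-tailed daily P&L (toy)
print(f"99% one-day VaR: {value_at_risk(pnl):,.0f}")
```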
Kolokotroni, Eleni; Dionysiou, Dimitra; Veith, Christian; Kim, Yoo-Jin; Franz, Astrid; Grgic, Aleksandar; Bohle, Rainer M.; Stamatakos, Georgios
2016-01-01
The 5-year survival of non-small cell lung cancer patients can be as low as 1% in advanced stages. For patients with resectable disease, the successful choice of preoperative chemotherapy is critical to eliminate micrometastasis and improve operability. In silico experimentation can suggest the optimal treatment protocol for each patient based on their own multiscale data. A determinant of reliable predictions is the a priori estimation of the drugs' cytotoxic efficacy on cancer cells for a given treatment. In the present work a mechanistic model of cancer response to treatment is applied to estimate a plausible value range for the cell-killing efficacy of various cisplatin-based doublet regimens. Among others, the model incorporates the cancer-related mechanisms of uncontrolled proliferation, population heterogeneity, hypoxia and treatment resistance. The methodology is based on the provision of tumor volumetric data at two time points, before and after or during treatment, and takes into account the effect of the tumor microenvironment and cell repopulation on treatment outcome. A thorough sensitivity analysis based on one-factor-at-a-time and Latin hypercube sampling/partial rank correlation coefficient approaches has established the volume growth rate and the growth fraction at diagnosis as key features for more accurate estimates. The methodology is applied to the retrospective data of thirteen patients with non-small cell lung cancer who received cisplatin in combination with gemcitabine, vinorelbine or docetaxel in the neoadjuvant context. The selection of model input values has been guided by a comprehensive literature survey on cancer-specific proliferation kinetics. Latin hypercube sampling has been used to compensate for patient-specific uncertainties. In conclusion, the present work provides a quantitative framework for the estimation of the in vivo cell-killing ability of various chemotherapies. Correlation studies of such estimates with the molecular profiles of patients could serve as a basis for reliable personalized predictions. PMID:27657742
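A minimal sketch of the Latin hypercube sampling step with a rank-correlation readout, using a toy response surface in place of the tumor simulator; a full PRCC would additionally partial out the remaining inputs.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# LHS over two uncertain inputs (volume growth rate, growth fraction),
# followed by Spearman rank correlation against the model output.

sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=200)
lo = np.array([0.005, 0.1])                # assumed parameter ranges
hi = np.array([0.05, 0.6])
X = qmc.scale(unit, lo, hi)

output = X[:, 0] * np.exp(2.0 * X[:, 1])   # toy stand-in for the simulator
for j, name in enumerate(["growth rate", "growth fraction"]):
    rho, _ = spearmanr(X[:, j], output)
    print(f"{name}: rank correlation {rho:+.2f}")
```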
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but they are restricted to particular methodologies and limited numbers of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves particular attention, since the reliability of a system may increase or decrease depending on the parameters used. There is thus a need to identify the factors that most heavily affect system reliability. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to medicine-related problems: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently and preferably uses neural-network and genetic-algorithm approaches. Medical scientists have shown strong interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software in new products, providing quality while saving time, memory space and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the workings of these techniques and assesses their use in predicting reliability; the parameters considered when estimating and predicting reliability are also discussed. This study can be applied to the estimation and prediction of the reliability of various instruments used in medical systems, as well as in software, computer and mechanical engineering, and the concepts can be applied to both software and hardware to predict reliability using CBSE.
A multifractal approach to space-filling recovery for PET quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O.; Tsoumpas, Charalampos
2014-11-01
Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging, without need for a robust segmentation step, and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with the mean standardized uptake value (SUVmean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic ¹⁸F-fluorodeoxyglucose PET simulations, and tested for its robustness using a clinical ¹⁸F-fluorothymidine PET test-retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUVmean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
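The mono-fractal special case of the space-filling idea can be illustrated with plain box counting on a binary mask; the paper's multifractal treatment of graded PET intensities is more involved.

```python
import numpy as np

# Box-counting estimate of the fractal dimension of a binary 2D object:
# count occupied boxes at successively coarser scales and fit the log-log slope.

def box_counting_dimension(mask):
    n = mask.shape[0]                 # assume square, power-of-two size
    sizes, counts = [], []
    s = n
    while s >= 1:
        view = mask.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s); counts.append(max(occupied, 1))
        s //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]                  # slope = fractal dimension

mask = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]
mask[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = True  # a filled disk
print(box_counting_dimension(mask))   # ≈ 2 for a space-filling 2D object
```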
Samantra, Chitrasen; Datta, Saurav; Mahapatra, Siba Sankar
2017-03-01
In the underground coal mining industry, the rising economic burden of implementing additional safety measure systems, along with growing public awareness of the need to ensure a high level of worker safety, has put great pressure on managers to find alternatives that are both safe and economically viable. Risk-based decision support systems play an important role in finding such solutions amongst candidate alternatives with respect to multiple decision criteria. Therefore, in this paper, a unified risk-based decision-making methodology is proposed for selecting an appropriate safety measure system for an underground coal mining operation with respect to multiple risk criteria such as financial risk, operating risk, and maintenance risk. The proposed methodology uses interval-valued fuzzy set theory to model vagueness and subjectivity in the estimates of fuzzy risk ratings, and is based on aggregative fuzzy risk analysis and multi-criteria decision making. The selection decisions are made in the context of the total integrated risk likely to be incurred when adopting a particular safety system alternative. The effectiveness of the proposed methodology has been validated through a real-time case study, and the resulting final priority ranking appears fairly consistent.
Design Optimization of Gas Generator Hybrid Propulsion Boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight; Fink, Larry
1990-01-01
A methodology used in support of a study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on a different basis, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design meeting various specific optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated, along with preliminary cost and reliability estimates.
Application of Steinberg vibration fatigue model for structural verification of space instruments
NASA Astrophysics Data System (ADS)
García, Andrés; Sorribes-Palmer, Félix; Alonso, Gustavo
2018-01-01
Electronic components in spacecraft are subjected to vibration loads during the ascent phase of the launcher, and it is important to verify by test and analysis that all parts can survive the most severe load cases. The purpose of this paper is to present the methodology and results of the application of Steinberg's fatigue model to estimate the life of electronic components of the EPT-HET instrument for the Solar Orbiter space mission. A Nastran finite element model (FEM) of the EPT-HET instrument was created and used for the structural analysis. The methodology uses the FEM of the entire instrument to calculate the relative displacement spectral density (RDSD) and RMS values of the printed circuit boards (PCBs) from random vibration analysis. These values are used to estimate the fatigue life of the most susceptible electronic components with Steinberg's fatigue damage equation and Miner's cumulative damage index. The estimates are calculated for two different configurations of the instrument and three different inputs in order to support the redesign process. Finally, these analytical results are contrasted with the inspections and functional tests made after the vibration tests, concluding that this methodology can adequately predict the fatigue damage or survival of the electronic components.
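A compact sketch of the damage bookkeeping, combining Steinberg's three-band assumption with Miner's rule; the S-N constants below are placeholders that would in practice come from the PCB geometry and component type.

```python
# Miner's cumulative damage with Steinberg's three-band assumption: the random
# response dwells at the 1-, 2-, and 3-sigma levels for 68.3%, 27.1%, and
# 4.33% of the cycles. Allowable cycles follow a Basquin-type S-N curve
# N = n_ref * (z_ref / z)**b, with z_ref, n_ref, b as assumed values.

def miner_index(z_rms, total_cycles, z_ref, n_ref, b=6.4):
    damage = 0.0
    for k, frac in ((1, 0.683), (2, 0.271), (3, 0.0433)):
        z = k * z_rms                       # band amplitude
        allowed = n_ref * (z_ref / z) ** b  # allowable cycles at this level
        damage += frac * total_cycles / allowed
    return damage                           # failure expected when >= 1.0

cycles = 200.0 * 120.0                      # 200 Hz natural freq x 120 s test
print(f"Miner index: {miner_index(0.08, cycles, z_ref=0.25, n_ref=2e7):.2e}")
```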
The Distance Between Mars and Venus: Measuring Global Sex Differences in Personality
Del Giudice, Marco; Booth, Tom; Irwing, Paul
2012-01-01
Background Sex differences in personality are believed to be comparatively small. However, research in this area has suffered from significant methodological limitations. We advance a set of guidelines for overcoming those limitations: (a) measure personality with a higher resolution than that afforded by the Big Five; (b) estimate sex differences on latent factors; and (c) assess global sex differences with multivariate effect sizes. We then apply these guidelines to a large, representative adult sample, and obtain what is presently the best estimate of global sex differences in personality. Methodology/Principal Findings Personality measures were obtained from a large US sample (N = 10,261) with the 16PF Questionnaire. Multigroup latent variable modeling was used to estimate sex differences on individual personality dimensions, which were then aggregated to yield a multivariate effect size (Mahalanobis D). We found a global effect size D = 2.71, corresponding to an overlap of only 10% between the male and female distributions. Even excluding the factor showing the largest univariate ES, the global effect size was D = 1.71 (24% overlap). These are extremely large differences by psychological standards. Significance The idea that there are only minor differences between the personality profiles of males and females should be rejected as based on inadequate methodology. PMID:22238596
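Computing the multivariate effect size itself is straightforward once group means and a pooled covariance are available; the two-dimensional numbers below are invented, whereas the study works with latent 16PF factors.

```python
import numpy as np

# Mahalanobis D: multivariate distance between two group mean vectors,
# measured in the metric of the pooled within-group covariance.

def mahalanobis_d(mean_a, mean_b, pooled_cov):
    diff = mean_a - mean_b
    return float(np.sqrt(diff @ np.linalg.inv(pooled_cov) @ diff))

mean_m = np.array([0.30, -0.20])
mean_f = np.array([-0.30, 0.45])
pooled = np.array([[1.0, 0.3], [0.3, 1.0]])
print(f"D = {mahalanobis_d(mean_m, mean_f, pooled):.2f}")
```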
Coupling Computer-Aided Process Simulation and ...
A methodology is described for developing a gate-to-gate life cycle inventory (LCI) of a chemical manufacturing process to support the application of life cycle assessment in the design and regulation of sustainable chemicals. The inventories were derived by first applying process design and simulation to develop a process flow diagram describing the energy and basic material flows of the system. Additional techniques developed by the U.S. Environmental Protection Agency for estimating uncontrolled emissions from chemical processing equipment were then applied to obtain a detailed emission profile for the process. Finally, land use for the process was estimated using a simple sizing model. The methodology was applied to a case study of acetic acid production based on the Cativa™ process. The results reveal improvements in the qualitative LCI for acetic acid production compared to commonly used databases and top-down methodologies. The modeling techniques improve the quantitative LCI results for inputs and uncontrolled emissions. With provisions for applying appropriate emission controls, the proposed method can provide an estimate of the LCI that can be used for subsequent life cycle assessments. As part of its mission, the Agency is tasked with overseeing the use of chemicals in commerce. This can include consideration of a chemical's potential impact on health and safety, resource conservation, clean air and climate change, clean water, and sustainable
Sanchez, E Y; Represa, S; Mellado, D; Balbi, K B; Acquesta, A D; Colman Lerner, J E; Porta, A A
2018-06-15
The potential impact of a technological accident can be assessed by risk estimation; with such an estimate, a latent or potential hazard can be flagged and mitigated. In this work we propose a methodology to estimate the risk of technological hazards, focused on two components. The first is the processing of meteorological databases to define the most probable and most conservative study scenarios, and the second is the application of a local social vulnerability index to classify the population. In this case study, the risk was estimated for a hypothetical release of liquefied ammonia at a meat-packing plant in the city of La Plata, Argentina. The method consists of integrating the toxic threat zone simulated with the ALOHA software and the layer of sociodemographic classification of the affected population. The results show the areas associated with the highest risks of exposure to ammonia, which warrant attention for disaster prevention in the region. Advantageously, this systemic approach is methodologically flexible, as it can be applied to various scenarios based on the available information on both the exposed population and local meteorology. Furthermore, the methodology streamlines the processing of the input data and its calculation. Copyright © 2018 Elsevier B.V. All rights reserved.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques enable visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for clustering data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
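A simplified stand-in for the proposed functional mixture model: each trace is compressed to a few basis coefficients and clustered with a generic Gaussian mixture.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Cluster synthetic calcium-like traces by projecting each onto two basis
# functions and fitting a Gaussian mixture to the coefficients. This is a
# generic GMM sketch, not the paper's full functional-data formulation.

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
slow = np.sin(2 * np.pi * t); fast = np.sin(6 * np.pi * t)
traces = np.vstack([slow + 0.1 * rng.normal(size=(100, 50)),
                    fast + 0.1 * rng.normal(size=(100, 50))])

basis = np.vstack([slow, fast])          # compress traces to 2 coefficients
coeffs = traces @ basis.T / 50.0

gmm = GaussianMixture(n_components=2, random_state=0).fit(coeffs)
labels = gmm.predict(coeffs)
print(np.bincount(labels))               # two clusters of ~100 traces each
```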
Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach
ERIC Educational Resources Information Center
Stevenson, Glenn A.
2012-01-01
For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working on the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for the operators and public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial key safety issue to be addressed in the future reactors safety assessments, and the estimates available at the time are not sufficiently satisfactory. The lack of neutronic data along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with an adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for dust source term production prediction for future devices is presented.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach to determining the transition-band frequencies and the optimum filter order. The methodology starts with a discrete Fourier transform analysis to estimate the locations of the pass-band and stop-band frequencies; a short-time Fourier transform analysis then refines these estimates. The resulting pass-band and stop-band frequencies are used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed, based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR; the validation results showed that the filters designed using the proposed method were superior to those designed using existing methods for all analyzed cases. Highlights:
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders.
• Transition-band frequencies were determined with an innovative method based on the discrete Fourier transform and the short-time Fourier transform.
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order.
• A new method of determining the FIR filter order for processing pressure traces was proposed.
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces.
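In the same spirit, though much simplified, one can sweep equiripple designs and keep the smallest order whose stop band meets a target; the band edges, sampling rate, and ripple target below are placeholders for values the DFT/STFT analysis would supply.

```python
import numpy as np
from scipy import signal

# Equiripple FIR low-pass design with order chosen by sweeping candidates
# and keeping the first (smallest) order meeting a stop-band target.

fs = 90_000.0            # resampled cylinder-pressure rate, Hz (assumed)
f_pass, f_stop = 3_000.0, 5_000.0

def design(numtaps):
    return signal.remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

best = None
for numtaps in range(31, 402, 30):
    taps = design(numtaps)
    w, h = signal.freqz(taps, worN=2048, fs=fs)
    stop_ripple = np.abs(h[w >= f_stop]).max()   # residual stop-band gain
    best = (numtaps, stop_ripple)
    if stop_ripple < 1e-3:    # ~ -60 dB target met: stop at this order
        break
print(f"selected {best[0]} taps, stop-band ripple {best[1]:.2e}")

rng = np.random.default_rng(5)
t = np.arange(4096) / fs
trace = np.sin(2 * np.pi * 800 * t) + 0.2 * rng.normal(size=t.size)
filtered = signal.filtfilt(design(best[0]), [1.0], trace)  # zero-phase pass
```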
The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.
Fisher, Bonnie S
2009-02-01
The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.
Klett, T.R.; Schenk, Christopher J.; Wandrey, Craig J.; Charpentier, Ronald R.; Brownfield, Michael E.; Pitman, Janet K.; Pollastro, Richard M.; Cook, Troy A.; Tennyson, Marilyn E.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated volumes of undiscovered, technically recoverable, conventional petroleum resources for the southern Siberian craton provinces of Russia. The mean volumes were estimated at 3.0 billion barrels of crude oil, 63.3 trillion cubic feet of natural gas, and 1.2 billion barrels of natural gas liquids.
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Basu, J. P. (Principal Investigator)
1979-01-01
A natural-stratum-based sampling scheme and the aggregation procedures for estimating wheat area, yield, and production, together with their associated prediction error estimates, are described. The methodology utilizes LANDSAT imagery and agrophysical data to permit an improved stratification in foreign areas by ignoring political boundaries and restratifying along boundaries that are more homogeneous with respect to the distribution of agricultural density, soil characteristics, and average climatic conditions. A summary of test results is given, including a discussion of the various problems encountered.
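The aggregation rests on the standard stratified expansion estimator and its sampling variance, sketched below with invented segment counts and sample proportions.

```python
import numpy as np

# Stratified expansion estimator: total = sum_h N_h * ybar_h, with variance
# sum_h N_h^2 * (1 - n_h/N_h) * s_h^2 / n_h. Strata would come from the
# LANDSAT-based restratification; all numbers here are illustrative.

strata = [  # (segments in stratum N_h, sampled wheat proportions)
    (500, np.array([0.22, 0.18, 0.25, 0.20])),
    (300, np.array([0.55, 0.60, 0.49])),
    (200, np.array([0.05, 0.08])),
]

area_per_segment = 2_590.0  # hectares per sample segment (assumed)
est, var = 0.0, 0.0
for N_h, y in strata:
    n_h = len(y)
    est += N_h * y.mean()
    var += N_h ** 2 * (1 - n_h / N_h) * y.var(ddof=1) / n_h

print(f"wheat area: {est * area_per_segment:,.0f} ha "
      f"(s.e. {np.sqrt(var) * area_per_segment:,.0f} ha)")
```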
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building-block approach: the idea is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the ASEE summer program. One important aspect involved the calibration of the estimation procedure for estimating the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples; the design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures, and the results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used first to estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
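The statistically based estimation amounts to least-squares fitting of a thermal model to transient data; the sketch below uses a lumped exponential response as a stand-in for the full conduction model of the samples.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a simple transient-response model to noisy temperature measurements and
# report parameter estimates with 1-sigma uncertainties from the covariance.
# The lumped model and all numbers are illustrative assumptions.

def model(t, rate, T0, Tinf):
    # lumped-capacitance response; `rate` plays the role of diffusivity
    return Tinf + (T0 - Tinf) * np.exp(-rate * t)

rng = np.random.default_rng(6)
t = np.linspace(0, 200, 60)
data = model(t, 0.02, 25.0, 80.0) + 0.3 * rng.normal(size=t.size)

popt, pcov = curve_fit(model, t, data, p0=[0.01, 20.0, 70.0])
perr = np.sqrt(np.diag(pcov))          # 1-sigma confidence half-widths
print(popt, perr)
```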
Theory of Partitioning of Disease Prevalence and Mortality in Observational Data
Akushevich, I.; Yashkin, A.; Kravchenko, J.; Fang, F.; Arbeev, K.; Sloan, F.; Yashin, AI
2017-01-01
In this study, we present a new theory of partitioning of disease prevalence and incidence-based mortality and demonstrate how this theory works in practice for analyses of Medicare data. In the theory, the prevalence of a disease and incidence-based mortality are modeled in terms of disease incidence and survival after diagnosis, supplemented by information on disease prevalence at the initial age and year available in a dataset. Partitioning of the trends in prevalence and mortality is calculated with minimal assumptions, and the resulting expressions for the components of the trends are continuous functions of the data. The estimator is consistent and stable. The developed methodology is applied to data on type 2 diabetes using individual records from a nationally representative 5% sample of Medicare beneficiaries aged 65+. Numerical estimates show excellent concordance between empirical estimates and theoretical predictions. The evaluated partitioning model showed that both prevalence and mortality increase with time. The primary driving factors of the observed prevalence increase are improved survival and increased prevalence at age 65. The increase in diabetes-related mortality is driven by increased prevalence and by unobserved trends in time periods and age groups outside the range of the data used in the study. Finally, the properties of the new estimator, possible statistical and systematic uncertainties, and future practical applications of this methodology in epidemiology, demography, public health and health forecasting are discussed. PMID:28130147
Shah, Tayyab Ikram; Milosavljevic, Stephan; Bath, Brenna
2017-06-01
This research focuses on methodological challenges and considerations associated with estimating the geographical aspects of access to healthcare, with a focus on rural and remote areas. The working assumption is that GIS-based accessibility measures for rural healthcare services vary across geographic units of analysis and estimation techniques, which could influence the interpretation of spatial access to rural healthcare services. Estimates of geographical accessibility depend on variations in three parameters: 1) quality of input data; 2) accessibility method; and 3) geographical area. This research investigated the spatial distributions of physiotherapists (PTs) in comparison to family physicians (FPs) across Saskatchewan, Canada. The three-step floating catchment area (3SFCA) method was applied to calculate accessibility scores for both PT and FP services at two different geographical units, and the accessibility scores were also compared with simple healthcare provider-to-population ratios. The results vary considerably depending on the accessibility method used and the choice of geographical area unit for both FP and PT services. These findings raise intriguing questions regarding the nature and extent of the technical issues and methodological considerations that can affect GIS-based measures in health services research and planning. This study demonstrates how the selection of geographical areal units and different methods for measuring geographical accessibility can affect the apparent distribution of healthcare resources in rural areas. These methodological issues have implications for determining where there is reduced access, which will ultimately impact health human resource priorities and policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
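A two-step floating catchment sketch conveys the mechanics (the 3SFCA used in the study adds a third step that discounts demand among competing sites); distances, supply, and populations below are invented.

```python
import numpy as np

# Floating catchment accessibility, two-step flavor: step 1 computes a
# provider-to-population ratio within each provider's catchment; step 2 sums
# those ratios over the providers reachable from each population unit.

def fca_scores(dist, supply, pop, d0=60.0):
    within = dist <= d0                      # catchment membership (minutes)
    # step 1: ratio R_j for each provider location j
    R = supply / np.maximum((within * pop[:, None]).sum(axis=0), 1.0)
    # step 2: accessibility A_i for each population unit i
    return (within * R[None, :]).sum(axis=1)

dist = np.array([[10.0, 75.0], [40.0, 30.0], [90.0, 20.0]])  # units x providers
supply = np.array([3.0, 1.0])     # e.g. FTE physiotherapists per site
pop = np.array([5000.0, 8000.0, 2000.0])
print(fca_scores(dist, supply, pop))   # providers per capita, by unit
```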
Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.
2003-01-01
Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.
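For the first sampling situation (all colonies detected), the extinction and colonization parameters can be illustrated with a naive counting estimator over a presence-absence matrix; this sketch deliberately ignores detection probability, which is exactly what the robust-design estimators in the paper address:

```python
import numpy as np

def turnover_rates(presence):
    """Naive extinction/colonization estimates from a sites x years
    presence-absence matrix, assuming every colony is detected.
    When detection < 1, the robust-design estimators are needed instead.
    """
    x0, x1 = presence[:, :-1], presence[:, 1:]
    occupied, empty = x0 == 1, x0 == 0
    extinction = (occupied & (x1 == 0)).sum() / occupied.sum()
    colonization = (empty & (x1 == 1)).sum() / empty.sum()
    return extinction, colonization

# toy data: 4 colony sites tracked over 5 breeding seasons
sites = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 1, 1],
                  [1, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0]])
print(turnover_rates(sites))
```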
NASA Astrophysics Data System (ADS)
Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.; Braun, M.; Levine, Z. H.; Pintar, A. L.
2014-12-01
This work presents a methodology for constructing 2D estimates of CO2 field concentrations from integrated open-path measurements of CO2 concentrations. It provides a description of the methodology, an assessment based on simulated data, and results from preliminary field trials. The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) system, currently under development by Exelis and AER, consists of a set of laser-based transceivers and a number of retro-reflectors coupled with a cloud-based compute environment to enable real-time monitoring of integrated CO2 path concentrations, and provides 2D maps of estimated concentrations over an extended area of interest. The GreenLITE transceiver-reflector pairs provide laser absorption spectroscopy (LAS) measurements of differential absorption due to CO2 along intersecting horizontal chords within the field of interest. These differential absorption values for the intersecting chords are not only used to construct estimated values of integrated concentration, but are also employed in an optimal estimation technique to derive 2D maps of the underlying concentration fields. This optimal estimation technique combines these sparse data with in situ measurements of wind speed/direction and an analytic plume model to provide tomographic-like reconstruction of the field of interest. This work provides an assessment of this reconstruction method and preliminary results from the Fall 2014 testing at the Zero Emissions Research and Technology (ZERT) site in Bozeman, Montana. This work is funded in part under the GreenLITE program developed under a cooperative agreement between Exelis and the National Energy Technology Laboratory (NETL) of the Department of Energy (DOE), contract # DE-FE0012574. Atmospheric and Environmental Research, Inc. is a major partner in this development.
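As a simplified illustration of the tomographic step (without the wind and plume-model constraints of the actual optimal estimation), a regularized least-squares recovery of a gridded field from chord-integrated measurements might look like this; the geometry matrix and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_chords = 25, 12                        # 5x5 grid sampled by 12 chords
A = rng.uniform(0, 1, size=(n_chords, n_cells))   # path length of each chord per cell
truth = np.zeros(n_cells); truth[12] = 400.0      # ppm anomaly in the center cell
y = A @ truth + rng.normal(0, 1.0, n_chords)      # integrated-path measurements

lam = 0.1                                         # Tikhonov regularization weight
x = np.linalg.solve(A.T @ A + lam * np.eye(n_cells), A.T @ y)
print(x.reshape(5, 5).round(1))                   # recovered field (smeared: sparse sampling)
```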
Precipitation Estimation Using L-Band and C-Band Soil Moisture Retrievals
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Brocca, Luca; Crow, Wade T.; Burgin, Mariko S.; De Lannoy, Gabrielle J. M.
2016-01-01
An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterometer (ASCAT) mission. The precipitation estimates so obtained are evaluated against in situ (gauge-based) precipitation observations from across the globe. The precipitation estimation skill achieved using the L-band SMAP and SMOS data sets is higher than that obtained with the C-band product, as might be expected given that L-band is sensitive to a thicker layer of soil and thereby provides more information on the response of soil moisture to precipitation. The square of the correlation coefficient between the SMAP-based precipitation estimates and the observations (for aggregations to approximately 100 km and 5 days) is on average about 0.6 in areas of high rain gauge density. Satellite missions specifically designed to monitor soil moisture thus do provide significant information on precipitation variability, information that could contribute to efforts in global precipitation estimation.
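The established methodology referred to here inverts a soil-water balance so that upward soil moisture changes are read as rainfall. A hedged sketch of that idea; the parameter values are illustrative, not the calibrated ones used with SMAP, SMOS, or ASCAT:

```python
import numpy as np

def precip_from_sm(sm, dt_days, Z=75.0, a=10.0, b=2.5):
    """Invert a bucket-type soil-water balance to estimate rainfall from a
    relative soil moisture series sm (0-1):
        P(t) ~= Z * dS/dt + a * S(t)**b
    Z is the water capacity of the sensed layer (mm); a, b describe drainage
    losses. All values here are illustrative and would normally be calibrated
    against gauge data.
    """
    ds_dt = np.gradient(sm, dt_days)
    p = Z * ds_dt + a * sm ** b
    return np.clip(p, 0.0, None)   # negative estimates are not rainfall

sm = np.array([0.30, 0.32, 0.45, 0.42, 0.40, 0.39])
print(precip_from_sm(sm, dt_days=1.0))
```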
Inference on periodicity of circadian time series.
Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A
2013-09-01
Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior to a currently used routine based on Fourier approximations and is more robust to non-sinusoidal and noisy cycles. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
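A generic sketch of period estimation by periodogram analysis with spectrum resampling to quantify uncertainty; this illustrates the spirit of SR rather than the authors' exact implementation:

```python
import numpy as np

def period_estimate(y, dt, n_boot=200, rng=np.random.default_rng(0)):
    """Dominant period of a time series with a resampling-based interval.
    The spectrum is perturbed with exponential weights and the peak period
    recorded each time; this is a sketch, not the published SR routine.
    """
    y = y - y.mean()
    freqs = np.fft.rfftfreq(len(y), d=dt)[1:]          # drop zero frequency
    power = np.abs(np.fft.rfft(y))[1:] ** 2
    periods = []
    for _ in range(n_boot):
        w = rng.exponential(1.0, size=power.size)      # resample the spectrum
        periods.append(1.0 / freqs[np.argmax(power * w)])
    return np.median(periods), np.percentile(periods, [2.5, 97.5])

t = np.arange(0, 96, 0.5)                              # 4 days, 30-min sampling
y = np.sin(2 * np.pi * t / 24.4) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print(period_estimate(y, dt=0.5))                      # near the true 24.4 h
```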
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor]
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed in achieving an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology is developed that uses existing and readily available historical crop statistics to obtain initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. Comparison of the estimated stratum variances with those obtained using LANDSAT data shows that the proposed technique is viable and performs satisfactorily when a conservative value for the field size and crop statistics from the small political subdivision level are used.
Estimating neural response functions from fMRI
Kumar, Sukhbinder; Penny, William
2014-01-01
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246
Vertical distribution of aerosols in the vicinity of Mexico City during MILAGRO-2006 Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, P.A.; Kleinman, L.; Eichinger, W. E.
On 7 March 2006, a mobile, ground-based, vertically pointing, elastic lidar system made a North-South transect through the Mexico City basin. Column-averaged aerosol size distribution (ASD) measurements were made on the ground concurrently with the lidar measurements. The ASD ground measurements allowed calculation of the column-averaged mass extinction efficiency (MEE) for the lidar system (1064 nm). The value of column-averaged MEE was combined with spatially resolved lidar extinction coefficients to produce total aerosol mass concentration estimates with the resolution of the lidar (1.5 m vertical spatial and 1 s temporal). Airborne ASD measurements from the DOE G-1 aircraft made later in the day on 7 March 2006 allowed evaluation of the assumptions of constant ASD with height and time used for estimating the column-averaged MEE. The results showed that the aerosol loading within the basin is about twice what is observed outside of the basin. The total aerosol base concentrations observed in the basin are of the order of 200 µg/m³ and the base levels outside are of the order of 100 µg/m³. Local heavy traffic events can introduce aerosol levels near the ground as high as 900 µg/m³. The article presents the methodology for estimating aerosol mass concentration from mobile, ground-based lidar measurements in combination with aerosol size distribution measurements. An uncertainty analysis of the methodology is also presented.
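The core conversion is a one-liner: mass concentration is extinction divided by the mass extinction efficiency. A sketch with illustrative numbers, not campaign values:

```python
import numpy as np

def mass_concentration(extinction, mee):
    """Convert lidar extinction coefficients to aerosol mass concentration.

    extinction : spatially resolved extinction coefficients (1/m)
    mee        : column-averaged mass extinction efficiency (m^2/g)

    Mass follows from M = alpha / MEE; values below are illustrative.
    """
    return np.asarray(extinction) / mee           # g/m^3

alpha = np.array([1.2e-4, 2.0e-4, 0.7e-4])        # extinction profile, 1/m
print(mass_concentration(alpha, mee=0.6) * 1e6)   # g/m^3 -> µg/m^3
```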
Comparing NASA and ESA Cost Estimating Methods for Human Missions to Mars
NASA Technical Reports Server (NTRS)
Hunt, Charles D.; vanPelt, Michel O.
2004-01-01
To compare working methodologies between the cost engineering functions in NASA Marshall Space Flight Center (MSFC) and ESA European Space Research and Technology Centre (ESTEC), as well as to set up cost engineering capabilities for future manned Mars projects and other studies which involve similar subsystem technologies in MSFC and ESTEC, a demonstration cost estimate exercise was organized. This exercise was a direct way of enhancing not only cooperation between agencies but also both agencies' commitment to credible cost analyses. Cost engineers in MSFC and ESTEC independently prepared life-cycle cost estimates for a reference human Mars project and subsequently compared the results and estimate methods in detail. As a non-sensitive, public domain reference case for human Mars projects, the Mars Direct concept was chosen. In this paper the results of the exercise are shown: the differences and similarities in estimate methodologies, philosophies, and databases between MSFC and ESTEC, as well as the estimate results for the Mars Direct concept. The most significant differences are explained and possible estimate improvements identified. In addition, the Mars Direct plan and the extensive cost breakdown structure jointly set up by MSFC and ESTEC for this concept are presented. It was found that NASA applied estimate models mainly based on historic Apollo and Space Shuttle cost data, taking into account the changes in technology since then. ESA used models mostly based on European satellite and launcher cost data, taking into account the higher equipment and testing standards for human space flight. Most of NASA's and ESA's estimates for the Mars Direct case are comparable, but there are some important, consistent differences in the estimates for: 1) Large Structures and Thermal Control subsystems; 2) System Level Management, Engineering, Product Assurance and Assembly, Integration and Test/Verification activities; 3) Mission Control; 4) Space Agency Program Level activities.
42 CFR 495.110 - Preclusion on administrative and judicial review.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., hospital charges, charity charges, and Medicare share; and (ii) The period used to determine such estimate... EP is hospital-based; and (6) The specification of the EHR reporting period, as well as whether... eligible hospitals— (1) The methodology and standards for determining the incentive payment amounts made to...
Assessment of conventional oil resources of the East African Rift Province, East Africa, 2016
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-03-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean conventional resources of 13.4 billion barrels of oil and 4.6 trillion cubic feet of gas in the East African Rift Province of east Africa.
Kirschbaum, Mark A.; Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Brownfield, Michael E.; Pitman, Janet K.; Cook, Troy A.; Tennyson, Marilyn E.
2010-01-01
The U.S. Geological Survey estimated means of 1.8 billion barrels of recoverable oil, 223 trillion cubic feet of recoverable gas, and 6 billion barrels of natural gas liquids in the Nile Delta Basin Province using a geology-based assessment methodology.
Johnson, Ronald C.; Mercier, Tracey J.; Brownfield, Michael E.; Self, Jesse G.
2010-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a total of 1.32 trillion barrels of oil in place in 18 oil shale zones in the Eocene Green River Formation in the Uinta Basin, Utah and Colorado.
Assessment of potential shale gas and shale oil resources of the Norte Basin, Uruguay, 2011
Schenk, Christopher J.; Kirschbaum, Mark A.; Charpentier, Ronald R.; Cook, Troy; Klett, Timothy R.; Gautier, Donald L.; Pollastro, Richard M.; Weaver, Jean N.; Brownfield, Michael
2011-01-01
Using a performance-based geological assessment methodology, the U.S. Geological Survey estimated mean volumes of 13.4 trillion cubic feet of potential technically recoverable shale gas and 0.5 billion barrels of technically recoverable shale oil resources in the Norte Basin of Uruguay.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phoung A.; Pitman, Janet K.; Brownfield, Michael E.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Klett, Timothy R.
2017-03-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean continuous resources of 656 million barrels of oil and 5.7 trillion cubic feet of gas in the Maracaibo Basin Province, Venezuela and Colombia.
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Le, Phuong A.; Mercier, Tracey J.; Finn, Thomas M.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-09-22
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 0.45 billion barrels of oil and 1.0 trillion cubic feet of gas in the Middle and Upper Magdalena Basins, Colombia.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Woodall, Cheryl A.
2017-08-17
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 35.1 trillion cubic feet of gas in the Amu Darya Basin Province of Turkmenistan, Uzbekistan, Iran, and Afghanistan.
Toward a Linguistically Realistic Assessment of Language Vitality: The Case of Jejueo
ERIC Educational Resources Information Center
Yang, Changyong; O'Grady, William; Yang, Sejung
2017-01-01
The assessment of language endangerment requires accurate estimates of speaker populations, including information about the proficiency of different groups within those populations. Typically, this information is based on self-assessments, a methodology whose reliability is open to question. We outline an approach that seeks to improve the…
Assessment of continuous oil and gas resources of the South Sumatra Basin Province, Indonesia, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.
2016-12-09
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 689 million barrels of continuous shale oil and 3.9 trillion cubic feet of shale gas in the South Sumatra Basin Province in Indonesia.
Schenk, Christopher J.; Mercier, Tracey J.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Brownfield, Michael E.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Finn, Thomas M.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-02-16
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 26 million barrels of oil and 700 billion cubic feet of gas in the Wyoming Thrust Belt Province, Wyoming, Idaho, and Utah.
Assessing Financial Education Methods: Principles vs. Rules-of-Thumb Approaches
ERIC Educational Resources Information Center
Skimmyhorn, William L.; Davies, Evan R.; Mun, David; Mitchell, Brian
2016-01-01
Despite thousands of programs and tremendous public and private interest in improving financial decision-making, little is known about how best to teach financial education. Using an experimental approach, the authors estimated the effects of two different education methodologies (principles-based and rules-of-thumb) on the knowledge,…
Assessment of undiscovered oil and gas resources of Libya and Tunisia, 2010
Whidden, Katherine J.; Lewan, Michael; Schenk, Christopher J.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.
2011-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 3.97 billion barrels of undiscovered oil, 38.5 trillion cubic feet of undiscovered natural gas, and 1.47 billion barrels of undiscovered natural gas liquids in two provinces of North Africa.
Federal Research and Development Contract Trends and the Supporting Industrial Base, 2000-2014
2016-04-30
Homeland Security, and government-wide services contracting trends; sourcing policy and cost estimation methodologies; and recent U.S. Army modernization ... been fears that the sharp downturn in federal contract obligations would disproportionately impact the R&D contracting portfolios within individual ... contracting portfolios, and the industrial base that supports those efforts, within each R&D contracting agency. The main finding of this
Benefit-Cost Analysis of Integrated Paratransit Systems: Volume 6. Technical Appendices.
DOT National Transportation Integrated Search
1979-09-01
This last volume includes five technical appendices that document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background: In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results: In this paper, we present a novel 3D mesh-based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and plant organ tracking over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion: By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh-processing-based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Fraher, Erin P; Knapton, Andy; Holmes, George M
2017-02-01
To outline a methodology for allocating graduate medical education (GME) training positions based on data from a workforce projection model. Demand for visits is derived from the Medical Expenditure Panel Survey and Census data. Physician supply, retirements, and geographic mobility are estimated using concatenated AMA Masterfiles and ABMS certification data. The number and specialization behaviors of residents are derived from the AAMC's GME Track survey. We show how the methodology could be used to allocate 3,000 new GME slots over 5 years (15,000 total positions) by state and specialty to address workforce shortages in 2026. We use the model to identify shortages for 19 types of health care services provided by 35 specialties in 50 states. The new GME slots are allocated to nearly all specialties, but nine states and the District of Columbia do not receive any new positions. This analysis illustrates an objective, evidence-based methodology for allocating GME positions that could be used as the starting point for discussions about GME expansion or redistribution. © Health Research and Educational Trust.
An omnibus test for family-based association studies with multiple SNPs and multiple phenotypes.
Lasky-Su, Jessica; Murphy, Amy; McQueen, Matthew B; Weiss, Scott; Lange, Christoph
2010-06-01
We propose an omnibus family-based association test (MFBAT) that can be applied to multiple markers and multiple phenotypes and that has only one degree of freedom. The proposed test statistic extends current FBAT methodology to incorporate multiple markers as well as multiple phenotypes. Using simulation studies, power estimates for the proposed methodology are compared with those of standard methodologies. On the basis of these simulations, we find that MFBAT substantially outperforms other methods, including haplotypic approaches and multiple testing of individual single-nucleotide polymorphisms (SNPs) and single phenotypes. The practical relevance of the approach is illustrated by an application to asthma in which SNP/phenotype combinations are identified that reach overall significance and would not have been identified using other approaches. This methodology is directly applicable to cases in which there are multiple SNPs, such as candidate gene studies; cases in which there are multiple phenotypes, such as expression data; and cases in which there are multiple phenotypes and genotypes, such as genome-wide association studies that incorporate expression profiles as phenotypes. This program is available in the PBAT analysis package.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
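The estimation step itself is a standard discrete Kalman filter over a state augmented with the selected tuner vector. A minimal sketch with an invented 4-state, 2-sensor system, not the paper's engine model:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a discrete Kalman filter. In the tuner-
    selection setting, x stacks engine states with a low-dimensional tuning
    parameter vector sized so that estimation remains well posed despite
    having fewer sensors than health parameters."""
    x = A @ x                       # predict state
    P = A @ P @ A.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # update with measurement residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# illustrative 4-state / 2-sensor system
rng = np.random.default_rng(0)
A, H = 0.98 * np.eye(4), rng.normal(size=(2, 4))
Q, R = 1e-4 * np.eye(4), 0.1 * np.eye(2)
x, P = np.zeros(4), np.eye(4)
for _ in range(50):
    z = rng.normal(size=2)          # stand-in sensor measurements
    x, P = kalman_step(x, P, z, A, H, Q, R)
print(x)
```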
Web-based surveys as an alternative to traditional mail methods.
Fleming, Christopher M; Bowden, Mark
2009-01-01
Environmental economists have long used surveys to gather information about people's preferences. A recent innovation in survey methodology has been the advent of web-based surveys. While the Internet appears to offer a promising alternative to conventional survey administration modes, concerns exist over potential sampling biases associated with web-based surveys and the effect these may have on valuation estimates. This paper compares results obtained from a travel cost questionnaire of visitors to Fraser Island, Australia, that was conducted using two alternative survey administration modes: conventional mail and web-based. It is found that response rates and the socio-demographic make-up of respondents to the two survey modes are not statistically different. Moreover, both modes yield similar consumer surplus estimates.
Impact of voice- and knowledge-enabled clinical reporting--US example.
Bushko, Renata G; Havlicek, Penny L; Deppert, Edward; Epner, Stephen
2002-01-01
This study presents qualitative and quantitative estimates of the national and clinic-level impact of utilizing voice- and knowledge-enabled clinical reporting systems. Using a common-sense estimation methodology, we show that the delivery of health care can experience a dramatic improvement in four areas as a result of the broad use of voice- and knowledge-enabled clinical reporting: (1) Process Quality as measured by cost savings, (2) Organizational Quality as measured by compliance, (3) Clinical Quality as measured by clinical outcomes and (4) Service Quality as measured by patient satisfaction. If only 15 percent of US physicians replaced transcription with modern voice-based clinical reporting methodology, about one half billion dollars could be saved. $6.7 billion could be saved annually if all medical reporting currently transcribed was handled with voice- and knowledge-enabled dictation and reporting systems.
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.
Copula based prediction models: an application to an aortic regurgitation study
Kumar, Pranesh; Shoukri, Mohamed M
2007-01-01
Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 × (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for the slope coefficient (0.4810, 1.3003). The predicted post-operative ejection fractions from the two models differ considerably in the lower range of pre-operative ejection measurements, and prediction errors from the copula model are smaller. To validate the copula methodology we drew fifty independent bootstrap samples with replacement and estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models are not appropriate to model dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
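Lin's concordance measure used for validation has a closed form; a sketch with invented ejection-fraction values, not the study data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
        CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

observed  = [0.55, 0.62, 0.48, 0.70, 0.58]   # illustrative values only
predicted = [0.53, 0.60, 0.50, 0.66, 0.61]
print(lins_ccc(observed, predicted))
```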
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
Our study describes complications that angular direct-ionization events introduce into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data from a modern 28 nm SRAM-based device.
Development of regional stump-to-mill logging cost estimators
Chris B. LeDoux; John E. Baumgras
1989-01-01
Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data ... collection of practical techniques to address these issues for a modeling methodology. Radial Basis Function networks. These techniques include efficient ... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the ongoing effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, which serves as a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS data collection method based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider both local speed decelerations and speeds over posted speeds. For the development of the consistency model, the crash frequency for each study site was considered, which allowed the number of crashes on a road segment to be estimated from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
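Illustrative global consistency indexes of the kind built from a continuous operating speed profile, such as relative speed dispersion and the share of the segment driven above the posted speed; these are generic stand-ins, not the paper's fitted model:

```python
import numpy as np

def consistency_indexes(v85, posted):
    """Two simple global indexes from an operating speed profile:
    relative dispersion of the 85th-percentile speed along the segment,
    and the fraction of profile stations above the posted speed.

    v85    : operating speed profile along the segment (km/h)
    posted : posted speed (km/h)
    """
    dispersion = v85.std() / v85.mean()       # higher = less consistent design
    over_posted = np.mean(v85 > posted)       # exposure to speeding
    return dispersion, over_posted

profile = np.array([82, 85, 88, 84, 73, 70, 78, 86, 90, 87], dtype=float)
print(consistency_indexes(profile, posted=80))
```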
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
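A compact sketch of the search phase: a particle swarm optimization update whose fitness is the estimated odor-source probability at each position. The probability map here is an invented stand-in for the Bayesian/fuzzy fusion described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def source_probability(p):
    """Stand-in for the fused odor-source probability map; in the paper this
    comes from Bayesian/fuzzy fusion of the robots' detection events."""
    return np.exp(-np.sum((p - np.array([8.0, 3.0])) ** 2) / 4.0)

# particle swarm search over the estimated probability field (illustrative)
n, w, c1, c2 = 6, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, size=(n, 2))          # robot positions in a 10x10 arena
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([source_probability(p) for p in pos])
for _ in range(100):
    gbest = pbest[np.argmax(pbest_val)]        # swarm-wide best estimate
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 10)
    vals = np.array([source_probability(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
print(pbest[np.argmax(pbest_val)])             # estimated source location
```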
Nadine Gobron; Bernard Pinty; Ophélie Aussedat; Jing M. Chen; Warren B. Cohen; Rasmus Fensholt; Valery Gond; Karl Fred Huemmrich; Thomas Lavergne; Frédéric Méline; Jeffrey L. Privette; Inge Sandholt; Malcolm Taberner; David P. Turner; Michael M. Verstraete; Jean-Luc Widlowski
2006-01-01
This paper discusses the quality and the accuracy of the Joint Research Center (JRC) fraction of absorbed photosynthetically active radiation (FAPAR) products generated from an analysis of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data. The FAPAR value acts as an indicator of the presence and state of the vegetation and it can be estimated from remote sensing...
Initial retrieval sequence and blending strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pemwell, D.L.; Grenard, C.E.
1996-09-01
This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.
Tunnel and Station Cost Methodology: Mined Tunnels
DOT National Transportation Integrated Search
1983-01-01
The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...
Tunnel and Station Cost Methodology Volume II: Stations
DOT National Transportation Integrated Search
1981-01-01
The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...
Genetic potential of black bean genotypes with predictable behaviors in multienvironment trials.
Torga, P P; Melo, P G S; Pereira, H S; Faria, L C; Melo, L C
2016-10-24
The aim of this study was to evaluate the phenotypic stability and specific and broad adaptability of common black bean genotypes for the Central and Center-South regions of Brazil by using the Annicchiarico and AMMI (weighted average of absolute scores: WAAS, and weighted average of absolute scores and productivity: WAASP) methodologies. We carried out 69 trials, with 43 and 26 trials in the Central and Center-South regions, respectively. Thirteen genotypes were evaluated in a randomized block design with three replications, during the rainy, dry, and winter seasons in 2 years. To obtain estimates of specific adaptation, we analyzed the parameters for each method obtained in the two geographic regions separately. To estimate broad adaptation, we used the average of the parameters obtained from each region. The lines identified with high specific adaptation in each region were not the same based on the Annicchiarico and AMMI (WAAS) methodologies. It was not possible to identify the same genotypes with specific or broad stability by using these methods. By contrast, the Annicchiarico and AMMI (WAASP) methods presented very similar estimates of broad and specific adaptation. Based on these methods, the lines with more specific adaptation were CNFP 8000 and CNFP 7994, in the Central and Center-South regions, respectively, of which the CNFP 8000 line was more widely adapted.
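A sketch of the Annicchiarico reliability index that underlies such stability rankings, computed on yields expressed as a percentage of each environment mean; the confidence level and toy data are illustrative:

```python
import numpy as np
from scipy.stats import norm

def annicchiarico_index(yields, alpha=0.25):
    """Annicchiarico reliability index: for each genotype, the mean of its
    yields as a percentage of each environment's mean, minus z(1-alpha)
    times the standard deviation of those percentages.

    yields : (n_genotypes, n_environments) matrix
    """
    pct = 100.0 * yields / yields.mean(axis=0, keepdims=True)
    z = norm.ppf(1.0 - alpha)
    return pct.mean(axis=1) - z * pct.std(axis=1, ddof=1)

# toy data: 3 genotypes x 4 environments (kg/ha)
y = np.array([[2100, 1800, 2500, 1500],
              [2300, 1500, 2700, 1200],
              [2000, 1900, 2300, 1700]], dtype=float)
print(annicchiarico_index(y))   # higher = more reliable across environments
```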
Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data
NASA Astrophysics Data System (ADS)
Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.
2011-11-01
Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space-by-time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time-scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.
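A sketch of the trend-coefficient step: Partial Least Squares regression of site-specific temporal-trend coefficients on a large set of geographic covariates, then prediction at unmonitored locations. Shapes and data are invented:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Regress trend coefficients on many GIS covariates with PLS, which handles
# the "more covariates than monitoring sites" regime by projecting to a few
# latent components; then predict coefficients at new locations.
rng = np.random.default_rng(0)
n_sites, n_geo = 40, 200                    # many more covariates than sites
X = rng.normal(size=(n_sites, n_geo))       # geographic (land-use) covariates
beta = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=n_sites)

pls = PLSRegression(n_components=2).fit(X, beta)
X_new = rng.normal(size=(5, n_geo))         # unmonitored prediction locations
print(pls.predict(X_new).ravel())           # predicted trend coefficients
```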
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
Predicting Vessel Trajectories from AIS Data Using R
2017-06-01
future position at the expectation level set by the user, therefore producing a valid methodology for both estimating the future vessel location and for assessing anomalous vessel behavior. ... methodology that brings them one step closer to attaining these goals. A key idea in the current literature is that the series of vessel locations
Non-contact cardiac pulse rate estimation based on web-camera
NASA Astrophysics Data System (ADS)
Wang, Yingzhi; Han, Tailin
2015-12-01
In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the RGB color channel signals into independent components. First, the channel traces extracted from the color video are pre-processed by normalization and sphering. The cardiac pulse rate is then estimated by spectral analysis of the independent components recovered by Independent Component Analysis (ICA) using the JADE algorithm. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to commercial pulse oximetry sensors and achieved high accuracy and correlation. The root mean square error of the estimated results is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of cardiac pulse rate.
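A hedged sketch of the pipeline: normalize the RGB traces, separate them into independent components, and read the pulse rate off the dominant in-band spectral peak. FastICA stands in here for the JADE algorithm named in the abstract, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

def pulse_rate_bpm(rgb, fps):
    """Estimate pulse rate from mean face-region RGB traces via blind source
    separation plus spectral analysis; rgb is an (n_frames, 3) array.
    """
    x = (rgb - rgb.mean(axis=0)) / rgb.std(axis=0)        # normalize channels
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)               # 45-240 bpm range
    # pick the component with the strongest in-band spectral peak
    spectra = [np.abs(np.fft.rfft(s)) ** 2 for s in sources.T]
    best = max(spectra, key=lambda p: p[band].max())
    return 60.0 * freqs[band][np.argmax(best[band])]

# synthetic 30-s clip at 30 fps with a 72-bpm pulse buried in noise
t = np.arange(0, 30, 1 / 30)
rng = np.random.default_rng(0)
pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t)                # 1.2 Hz = 72 bpm
rgb = rng.normal(size=(t.size, 3)) + pulse[:, None] * np.array([0.3, 1.0, 0.5])
print(pulse_rate_bpm(rgb, fps=30))
```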
Population-based absolute risk estimation with survey data
Kovalchik, Stephanie A.; Pfeiffer, Ruth M.
2013-01-01
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
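With piecewise-constant cause-specific hazards, the absolute risk has a simple closed form per interval; a sketch with invented hazards (the survey-weighted estimation and variance machinery are the paper's contribution and are omitted here):

```python
import numpy as np

def absolute_risk(h_event, h_compete, width):
    """Absolute risk of the event of interest over piecewise-constant hazard
    intervals, in the presence of a competing event:
        AR = sum_k S(t_k-) * (h1_k/(h1_k+h2_k)) * (1 - exp(-(h1_k+h2_k)*w_k))
    h_event, h_compete : per-interval cause-specific hazards
    width              : interval lengths (years)
    """
    total = h_event + h_compete
    surv = np.exp(-np.cumsum(total * width))              # S at interval ends
    surv_start = np.concatenate([[1.0], surv[:-1]])       # S at interval starts
    return np.sum(surv_start * (h_event / total) * (1.0 - np.exp(-total * width)))

h1 = np.array([0.002, 0.004, 0.008])                      # event of interest
h2 = np.array([0.010, 0.015, 0.025])                      # competing mortality
print(absolute_risk(h1, h2, width=np.array([5.0, 5.0, 5.0])))
```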
Computational Material Processing in Microgravity
NASA Technical Reports Server (NTRS)
2005-01-01
Working with Professor David Matthiesen at Case Western Reserve University (CWRU), a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment was developed that is able to predict the thermal field, flow field and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from measured concentration values after a given time, which yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
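The reliability metric being minimized can be illustrated with plain Monte Carlo, even though the paper's hybrid sampling/asymptotic scheme is what makes it efficient; the performance model and parameter distributions below are invented for illustration:

```python
import numpy as np

def violation_probability(requirement, n_samples=100_000,
                          rng=np.random.default_rng(0)):
    """Plain Monte Carlo estimate of the probability that a closed-loop
    requirement is violated under parametric uncertainty. The overshoot
    model and the parameter distributions are fictitious; only the
    reliability metric itself is being illustrated.
    """
    k = rng.normal(1.0, 0.1, n_samples)        # uncertain plant gain
    tau = rng.uniform(0.8, 1.2, n_samples)     # uncertain time constant
    overshoot = 100.0 * np.exp(-np.pi * tau / np.sqrt(np.maximum(k, 1e-9)))
    return np.mean(overshoot > requirement)    # fraction of failed designs

print(violation_probability(requirement=10.0))  # P(overshoot > 10%)
```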
Regional-scale analysis of extreme precipitation from short and fragmented records
NASA Astrophysics Data System (ADS)
Libertino, Andrea; Allamano, Paola; Laio, Francesco; Claps, Pierluigi
2018-02-01
The rain gauge is the oldest and most accurate instrument for rainfall measurement, able to provide long series of reliable data. However, rain gauge records are often plagued by gaps, spatio-temporal discontinuities and inhomogeneities that can affect their suitability for a statistical assessment of the characteristics of extreme rainfall. Furthermore, the need to discard the shorter series to obtain robust estimates leads to ignoring a significant amount of information, which can be essential, especially when large return period estimates are sought. This work describes a robust statistical framework for dealing with uneven and fragmented rainfall records on a regional spatial domain. The proposed technique, named "patched kriging", allows one to exploit all the information available from the recorded series, independently of their length, to provide extreme rainfall estimates in ungauged areas. The methodology involves the sequential application of the ordinary kriging equations, producing a homogeneous dataset of synthetic series with uniform lengths. In this way, the errors inherent to any regional statistical estimation can be easily represented in the spatial domain and, possibly, corrected. Furthermore, the homogeneity of the obtained series provides robustness toward local artefacts during the parameter-estimation phase. The application to a case study in north-western Italy demonstrates the potential of the methodology and provides a significant base for discussing its advantages over previous techniques.
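The building block applied sequentially in patched kriging is an ordinary kriging prediction; a sketch with an exponential covariance and invented gauge coordinates:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_par=50.0):
    """One ordinary-kriging prediction at location xy0 from observations z at
    coordinates xy, using an exponential covariance. Sill and range values
    are illustrative assumptions.
    """
    def cov(d):
        return sill * np.exp(-d / rng_par)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    n = len(z)
    # kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 9.0]])
annual_max = np.array([42.0, 55.0, 38.0, 61.0])      # mm, one year's maxima
print(ordinary_kriging(gauges, annual_max, np.array([5.0, 5.0])))
```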
Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.
2016-09-28
Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.
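At its core the DOE-NETL saline-formation methodology is volumetric, G = A * h * phi * rho * E (gross area, thickness, porosity, CO2 density, storage efficiency); a sketch with illustrative screening inputs, not CO2-SCREEN values:

```python
def co2_storage_mass(area_m2, thickness_m, porosity, rho_co2, efficiency):
    """Prospective CO2 storage resource for a saline formation using the
    volumetric form G = A * h * phi * rho * E. Returns tonnes.
    """
    return area_m2 * thickness_m * porosity * rho_co2 * efficiency / 1e3  # kg -> t

# illustrative basin-scale screening inputs
mass_t = co2_storage_mass(area_m2=5e9,        # 5,000 km^2
                          thickness_m=100.0,
                          porosity=0.12,
                          rho_co2=700.0,      # kg/m^3 at reservoir conditions
                          efficiency=0.02)    # illustrative efficiency factor
print(f"{mass_t / 1e9:.2f} Gt CO2")           # ~0.84 Gt for these inputs
```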
Design optimization of gas generator hybrid propulsion boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight U.; Fink, Lawrence E.
1990-01-01
A methodology used in support of a contract study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specified optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt, so that suspect data do not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS), which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as an extension of control theory, and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that results from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
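As a rough illustration of how such bounds can be computed, the sketch below evaluates FP and FN probabilities for a hypothetical k-of-n voting abort trigger; the voting scheme and per-sensor reliabilities are invented stand-ins, not the paper's SLS data.

```python
# Illustrative FP/FN calculation for an abort trigger that declares an abort
# when at least k of n redundant sensors exceed a limit. All numbers fictitious.
from math import comb

def at_least_k_of_n(k, n, p):
    """P(at least k of n independent events, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 4, 2              # 2-of-4 voting (assumed architecture)
p_sensor_fp = 1e-4       # per-sensor false indication (fictitious)
p_sensor_fn = 1e-3       # per-sensor missed detection (fictitious)

p_trigger_fp = at_least_k_of_n(k, n, p_sensor_fp)
# FN: fewer than k sensors detect, i.e. at least n-k+1 sensors miss.
p_trigger_fn = at_least_k_of_n(n - k + 1, n, p_sensor_fn)
print(f"FP: {p_trigger_fp:.3e}  FN: {p_trigger_fn:.3e}")
```

SDQ would enter such a calculation by changing the effective n and the per-sensor probabilities once suspect channels are excluded.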
Quality and rigor of the concept mapping methodology: a pooled study analysis.
Rosas, Scott R; Kane, Mary
2012-05-01
The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative characteristics and estimates of quality and rigor that may guide future studies are lacking. To address this gap, we conducted a pooled analysis of 69 concept mapping studies to describe characteristics across study phases, generate specific indicators of validity and reliability, and examine the relationship between select study characteristics and quality indicators. Individual study characteristics and estimates were pooled and quantitatively summarized, describing the distribution, variation and parameters for each. In addition, variation in concept mapping data collection in relation to characteristics and estimates was examined. Overall, results suggest concept mapping yields strong internal representational validity and very strong sorting and rating reliability estimates. Validity and reliability were consistently high despite variation in participation and task completion percentages across data collection modes. The implications of these findings as a practical reference for assessing the quality and rigor of future concept mapping studies are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.
NASA Astrophysics Data System (ADS)
Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.
2016-06-01
This research paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. The approach consists of the estimation, detection and classification of signals, and is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. The work combines three methods: harmonic detection through the wavelet transform, harmonic estimation by a Kalman filter algorithm, and harmonic classification by a decision tree. Among the candidate mother wavelets for the wavelet transform method, db8 is selected because of its strength in capturing transient response and damped oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the HSAPF based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/Simulink environment as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The proposed PPF makes the conventional HSAPF more robust and stable.
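The harmonic-estimation step can be illustrated with a small linear Kalman filter whose state holds the in-phase and quadrature amplitudes of selected harmonics; the sampling rate, harmonic set and noise covariances below are illustrative assumptions, not the paper's tuning.

```python
# Hedged sketch of Kalman-filter harmonic-amplitude estimation, one of the
# three steps the paper combines; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
f0, fs, harmonics = 50.0, 3200.0, [1, 3, 5]
t = np.arange(0, 0.2, 1 / fs)
signal = (np.sin(2*np.pi*f0*t) + 0.2*np.sin(2*np.pi*3*f0*t)
          + 0.1*np.sin(2*np.pi*5*f0*t) + 0.02*rng.standard_normal(t.size))

nx = 2 * len(harmonics)                  # [cos, sin] amplitude per harmonic
x, P = np.zeros(nx), np.eye(nx)
Q, R = 1e-6 * np.eye(nx), 1e-2
for k, z in enumerate(signal):
    # Amplitudes modeled as nearly constant: x_k = x_{k-1} + w.
    P = P + Q
    # Measurement row: z = sum_h a_h*cos(h*w0*t) + b_h*sin(h*w0*t) + v.
    H = np.empty(nx)
    for i, h in enumerate(harmonics):
        H[2*i] = np.cos(2*np.pi*h*f0*t[k])
        H[2*i+1] = np.sin(2*np.pi*h*f0*t[k])
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - H @ x)
    P = P - np.outer(K, H @ P)

for i, h in enumerate(harmonics):
    print(f"harmonic {h}: amplitude ≈ {np.hypot(x[2*i], x[2*i+1]):.3f}")
```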
Roberge, Jason; O'Rourke, Mary Kay; Meza-Montenegro, Maria Mercedes; Gutiérrez-Millán, Luis Enrique; Burgess, Jefferey L; Harris, Robin B
2012-04-01
The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L) whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counter intuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.
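A minimal sketch of the kind of random-effects analysis described, assuming a community-level random intercept and a log-log specification; the statsmodels call is standard, but the column names and data are hypothetical, not BAsES data.

```python
# Hypothetical mixed-effects sketch: log urinary arsenic vs. log intake with
# a random intercept per community. Data are synthetic, not survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "community": rng.choice(list("ABCDEFGH"), n),   # 8 toy communities
    "log_intake": rng.normal(1.0, 0.8, n),
})
# Toy coefficient of 0.1; the paper reports its own estimates.
df["log_urinary"] = 0.5 + 0.1 * df["log_intake"] + rng.normal(0, 0.3, n)

model = smf.mixedlm("log_urinary ~ log_intake", df, groups=df["community"])
result = model.fit()
print(result.params["log_intake"])   # estimated log-log association
```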
Magnetic resonance fingerprinting based on realistic vasculature in mice
Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric
2017-01-01
Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, which extracts maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild-type and 9 old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909
Building Loss Estimation for Earthquake Insurance Pricing
NASA Astrophysics Data System (ADS)
Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.
2005-12-01
After the 1999 earthquakes in Turkey, several changes in the insurance sector took place. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates, and some even suspended operations in the market. Most important, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation, based on our experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey, and estimates of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimates that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.
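The portfolio-loss ingredient can be illustrated with a toy average-annualized-loss calculation that combines an exceedance-rate hazard curve with mean damage ratios; all rates, damage ratios and the replacement cost below are invented, not the Istanbul model's values.

```python
# Illustrative average-annualized-loss (AAL) calculation of the kind used in
# portfolio loss estimation; hazard and vulnerability numbers are invented.
import numpy as np

# Hazard: annual exceedance rate of discrete intensity levels.
intensity = np.array([6, 7, 8, 9])                        # e.g., MMI
annual_exceed_rate = np.array([0.02, 0.008, 0.003, 0.001])  # fictitious
# Vulnerability: mean damage ratio of the portfolio at each intensity.
mean_damage_ratio = np.array([0.02, 0.08, 0.25, 0.45])      # fictitious
replacement_cost = 1.0e9                                    # portfolio value

# Occurrence rate of each level = its exceedance rate minus the next level's.
occ_rate = annual_exceed_rate - np.append(annual_exceed_rate[1:], 0.0)
aal = np.sum(occ_rate * mean_damage_ratio * replacement_cost)
print(f"average annualized loss ≈ {aal:,.0f}")
```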
NASA Astrophysics Data System (ADS)
Humber, M. L.; Copati, E.; Sanchez, A.; Sahajpal, R.; Puricelli, E.; Becker-Reshef, I.
2017-12-01
Accurate crop production data are fundamental for reducing uncertainty and volatility in the domestic and international agricultural markets. The Agricultural Estimates Department of the Buenos Aires Grain Exchange has worked since 2000 on the estimation of different crop production data. With this information, the Grain Exchange helps different actors in the agricultural chain, such as producers, traders, seed companies, market analysts and policy makers, in their day-to-day decision making. Since the 2015/16 season, the Grain Exchange has worked on the development of a new Earth-observation-based method to identify winter crop planted area at a regional scale, with the aim of improving crop production estimates. The objective of this new methodology is to create a reliable winter crop mask at moderate spatial resolution using Landsat-8 imagery by exploiting bi-temporal differences in the phenological stages of winter crops as compared to other land cover types. In collaboration with the University of Maryland, the map has been validated by photointerpretation of a stratified random sample of independent ground truth data in the four largest producing provinces of Argentina: Buenos Aires, Cordoba, La Pampa, and Santa Fe. In situ measurements were also used to further investigate conditions in the Buenos Aires province. Preliminary results indicate that while there are some avenues for improvement, overall the classification accuracy of the cropland and non-cropland classes is sufficient to improve downstream production estimates. Continuing research will focus on improving the methodology for winter crop mapping on a yearly basis as well as improving the sampling methodology to optimize collection of validation data in the future.
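A minimal sketch of the bi-temporal idea, assuming two NDVI composites and simple thresholds: winter crops are green at the first date and senescent at the second, so the NDVI difference separates them from other cover types. The thresholds and arrays are illustrative, not the Exchange's calibration.

```python
# Toy bi-temporal winter-crop mask: green in winter, senescent later.
# Thresholds and pixel values are invented for illustration.
import numpy as np

ndvi_winter = np.array([[0.65, 0.30], [0.70, 0.15]])   # first date composite
ndvi_spring = np.array([[0.25, 0.35], [0.20, 0.18]])   # second date composite

diff = ndvi_winter - ndvi_spring
winter_crop_mask = (ndvi_winter > 0.5) & (diff > 0.3)
print(winter_crop_mask)   # True where the phenological signature matches
```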
NASA Astrophysics Data System (ADS)
Shiri, Jalal
2018-06-01
Among the different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been studied less. These approaches utilize temperature and wind speed records. However, the empirical equations proposed in this context generally produce weak simulations unless a local calibration is used to improve their performance, which is a crucial drawback where local data for the calibration procedure are scarce. The application of heuristic methods can therefore be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. Given that wind speed records usually have higher variability than other meteorological parameters, coupling a wavelet transform with the heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
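A hedged sketch of the coupling described above: each input series is decomposed into wavelet subseries (here with PyWavelets) that are fed jointly to a random forest targeting ETo. The synthetic data, db4 wavelet and decomposition level are assumptions for illustration, not the paper's configuration.

```python
# Sketch of a wavelet-random forest (WRF) coupling on synthetic data.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 512
temp = 20 + 8*np.sin(np.linspace(0, 8*np.pi, n)) + rng.normal(0, 1, n)
wind = 2 + rng.gamma(2.0, 1.0, n)                   # high-variance input
eto = 0.3*temp + 0.8*wind + rng.normal(0, 0.3, n)   # toy target

def wavelet_features(series, wavelet="db4", level=3):
    """Reconstruct each decomposition band to full length as a feature."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    feats = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        feats.append(pywt.waverec(keep, wavelet)[:len(series)])
    return np.column_stack(feats)

X = np.hstack([wavelet_features(temp), wavelet_features(wind)])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], eto[:400])
print("holdout R^2:", model.score(X[400:], eto[400:]))
```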
Soil moisture retrieval from Sentinel-1 and MODIS synergy
NASA Astrophysics Data System (ADS)
Gao, Qi; Zribi, Mehrez; Escorihuela, Maria Jose; Baghdadi, Nicolas
2017-04-01
This study presents two methodologies for retrieving soil moisture from SAR remote sensing data. The study is based on Sentinel-1 data in VV polarization over a site in Urgell, Catalunya (Spain). In both methodologies, which use change detection techniques, preprocessed radar data are combined with normalized difference vegetation index (NDVI) auxiliary data to estimate the mean soil moisture at a resolution of 1 km. By modeling the relationship between the backscatter difference and NDVI, the soil moisture at a specific NDVI value is retrieved. The first algorithm was originally developed over West Africa (Zribi et al., 2014) using ERS scatterometer data to estimate soil water status; here it is adapted to Sentinel-1 data and takes advantage of the high repetitiveness of the data to optimize the inversion approach. The second, new method is based on the backscatter difference between two adjacent Sentinel-1 acquisitions as a function of NDVI: with smaller vegetation change, the backscatter difference is more sensitive to soil moisture. The proposed methodologies have been validated against ground measurements in two demonstration fields with an RMS error of about 0.05 (volumetric moisture), and coherence between soil moisture variations and rainfall events is observed. Soil moisture maps at 1-km resolution are generated for the study area. The results demonstrate the potential of Sentinel-1 data for the retrieval of soil moisture at 1-km or even finer resolution.
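One plausible form of the change-detection step is sketched below: the backscatter difference between two dates is modeled as a decaying function of NDVI, and a new observation is normalized by the local sensitivity. The functional form and all numbers are illustrative assumptions, not the authors' fitted model.

```python
# Toy change-detection inversion: backscatter difference vs. NDVI.
import numpy as np

# Backscatter difference (dB) between consecutive passes and concurrent NDVI
# for a set of 1-km cells (invented values).
dsigma = np.array([3.1, 2.4, 1.6, 1.0, 0.7])
ndvi = np.array([0.15, 0.30, 0.45, 0.60, 0.75])

# Sensitivity of backscatter to moisture decays as vegetation develops:
# fit dsigma = a * exp(b * NDVI) by linearizing with a log transform.
b, log_a = np.polyfit(ndvi, np.log(dsigma), 1)
sensitivity = lambda v: np.exp(log_a) * np.exp(b * v)   # dB per moisture unit

# Retrieved moisture change for a new cell, normalized by local sensitivity.
print(2.0 / sensitivity(0.40))
```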
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determine the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic approach is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean cost, the interval within which it varies at a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer, with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value that are sufficiently narrow to make the data statistically robust, and therefore usable as a reference for benchmarking with other countries. Methodologically, the approach is rigorous: Activity Based Costing is used to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, is used to construct the tolerance intervals on the final result.
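A compact sketch of the two ingredients under stated assumptions: activity durations drawn from frequency distributions (Monte Carlo), priced with Activity Based Costing rates. The activities, distributions and rates are fictitious, not the paper's Italian data.

```python
# Toy ABC + Monte Carlo cycle-cost estimate with a simple tolerance interval.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 20_000

# Activity Based Costing inputs: (mean h, sd h, cost rate EUR/h), fictitious.
activities = {
    "ovarian stimulation monitoring": (6.0, 1.0, 120.0),
    "oocyte retrieval":               (1.0, 0.2, 400.0),
    "lab fertilization and culture":  (8.0, 1.5, 150.0),
    "embryo transfer":                (0.8, 0.1, 350.0),
}

totals = np.zeros(n_runs)
for mean_h, sd_h, rate in activities.values():
    durations = rng.normal(mean_h, sd_h, n_runs).clip(min=0.0)
    totals += durations * rate

lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"cycle cost ≈ {totals.mean():.0f} EUR, 95% interval [{lo:.0f}, {hi:.0f}]")
```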
Demonstration of line transect methodologies to estimate urban gray squirrel density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hein, E.W.
1997-11-01
Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
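For orientation, the following sketch shows the textbook line-transect estimator that programs such as DISTANCE implement, here with a half-normal detection function fitted by maximum likelihood; the perpendicular distances and transect lengths are invented, not the Clemson data.

```python
# Toy line-transect density estimate with a half-normal detection function.
import numpy as np

distances_m = np.array([2., 5., 1., 8., 12., 3., 6., 4., 9., 2.])  # invented
transect_length_m = 4 * 500.0        # e.g., four 500-m transects (assumed)

# Half-normal g(x) = exp(-x^2 / (2 sigma^2)); the MLE is sigma^2 = mean(x^2).
sigma = np.sqrt(np.mean(distances_m**2))
effective_half_width = sigma * np.sqrt(np.pi / 2.0)   # integral of g(x)

n = distances_m.size
density_per_m2 = n / (2.0 * transect_length_m * effective_half_width)
print(f"{density_per_m2 * 1e4:.2f} squirrels/ha")   # 1 ha = 10^4 m^2
```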
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
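A minimal sketch of the Lagrangian sorting idea under the stated assumptions (constant vertical air velocity, time-independent advection): smaller drops fall more slowly, so they drift farther downwind and arrive later, producing the gravitational sorting signatures mentioned above. The Atlas et al. (1973) fall-speed approximation is used; heights and winds are invented.

```python
# Toy drop-trajectory calculation under constant updraft and advection.
import numpy as np

def terminal_velocity(d_mm):
    """Atlas et al. (1973) raindrop fall-speed approximation (m/s)."""
    return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

cloud_base_m = 2000.0
updraft_w = 0.5                     # m/s, constant vertical air velocity
advection = np.array([8.0, 3.0])    # m/s horizontal wind, e.g. from radar

for d in [0.5, 1.0, 3.0]:
    fall_time = cloud_base_m / (terminal_velocity(d) - updraft_w)
    offset = advection * fall_time  # landing offset from release point
    print(f"D={d} mm: falls {fall_time:5.0f} s, "
          f"lands {np.hypot(*offset):6.0f} m downwind")
```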
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness and validity of the algorithm are demonstrated through experimental results.
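The Huber ingredient can be shown in isolation with a scalar range-measurement update in which residuals outside a k-sigma band deflate the filter gain; the threshold k = 1.345 is the conventional choice, and the numbers are illustrative, not the AUV experiment's data.

```python
# Sketch of Huber reweighting in a scalar measurement update: outlier
# residuals get down-weighted so they do not dominate the estimate.
import numpy as np

def huber_weight(residual, sigma, k=1.345):
    """Weight 1 inside the k-sigma band, decaying as k*sigma/|r| outside."""
    r = abs(residual) / sigma
    return 1.0 if r <= k else k / r

# Scalar range update: predicted range 100 m (sd 2 m), measurement sd 1 m.
x_pred, p_pred, r_var = 100.0, 2.0**2, 1.0**2
for z in [100.8, 101.2, 130.0]:            # the last one is an outlier
    innov = z - x_pred
    w = huber_weight(innov, np.sqrt(p_pred + r_var))
    gain = p_pred / (p_pred + r_var / w)   # inflating R/w deflates the gain
    print(f"z={z:6.1f}  weight={w:.2f}  update={gain * innov:+.2f} m")
```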
Validating a new methodology for strain estimation from cardiac cine MRI
NASA Astrophysics Data System (ADS)
Elnakib, Ahmed; Beache, Garth M.; Gimel'farb, Georgy; Inanc, Tamer; El-Baz, Ayman
2013-10-01
This paper focuses on validating a novel framework for estimating the functional strain from cine cardiac magnetic resonance imaging (CMRI). The framework consists of three processing steps. First, the left ventricle (LV) wall borders are segmented using a level-set based deformable model. Second, the points on the wall borders are tracked during the cardiac cycle based on solving the Laplace equation between the LV edges. Finally, the circumferential and radial strains are estimated at the inner, mid-wall, and outer borders of the LV wall. The proposed framework is validated using synthetic phantoms of the material strains that account for the physiological features and the LV response during the cardiac cycle. Experimental results on simulated phantom images confirm the accuracy and robustness of our method.
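As a worked illustration of the final step, the sketch below computes circumferential strain as the relative change in contour length between two tracked LV borders; the contours are synthetic circles, not CMRI segmentations.

```python
# Toy circumferential-strain computation from two closed LV contours.
import numpy as np

def closed_perimeter(pts):
    """Length of a closed contour given ordered (N, 2) points."""
    d = np.diff(pts, axis=0, append=pts[:1])
    return np.linalg.norm(d, axis=1).sum()

theta = np.linspace(0, 2*np.pi, 64, endpoint=False)
ed = np.column_stack([30*np.cos(theta), 30*np.sin(theta)])   # end-diastole, mm
es = np.column_stack([24*np.cos(theta), 24*np.sin(theta)])   # end-systole, mm

# Circumferential strain: relative change in contour length from ED to ES.
strain = (closed_perimeter(es) - closed_perimeter(ed)) / closed_perimeter(ed)
print(f"circumferential strain ≈ {strain:.2%}")   # ≈ -20% for this toy case
```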
Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria
2013-01-01
Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nationwide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine queries was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structural models were set up to test the influences of response tendencies and geographical position (latitude, longitude). To test the stability of the models, analyses were conducted at the country level (Europe, US) and at the provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. At the country level, no significant effects of response bias indicators or geographical position on perceived availability, web-based cannabis searches or self-reported prevalence were found. At the provincial level, latitude had a significant positive effect on availability, indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as an external criterion. The findings suggest an acceptable relative comparability of national (provincial) prevalence estimates of cannabis use that are based on a common survey methodology. Search engine query data are too weak an indicator to base prevalence estimates on alone, but in combination with other sources (wastewater analysis, sales of cigarette paper) they may provide satisfactory estimates. Copyright © 2012. Published by Elsevier B.V.
Andres, R. J.; Boden, T. A.; Marland, G.
2010-01-01
The monthly, isotopic (δ13C) fossil-fuel CO2 emissions estimates from 1950-2013 provided in this database are derived from time series of global, regional, and national fossil-fuel CO2 emissions (http://cdiac.ess-dive.lbl.gov/trends/emis/overview_2013.html; Boden et al. 2016), the references therein, and the methodology described in Andres et al. (2011). The data accessible here take these tabular, national, mass-emissions data, multiply them by stable carbon isotopic signatures (δ13C) as described in Andres et al. (2000), and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996) for years prior to 1990 and a variable population distribution for later years (Andres et al. 2016). Note that the mass-emissions data used here are based on fossil-fuel consumption estimates, as these are more representative of within-country emissions than fossil-fuel production estimates (see http://cdiac.ess-dive.lbl.gov/faq.html#Q10 for a description of why emission totals based upon consumption differ from those based upon production).
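A hedged sketch of the distribution step described above, assuming one national total, one isotopic signature and a population-proportional spread over grid cells; all numbers are invented, and the real product applies this per country and per month.

```python
# Toy gridding of a national emission total with an isotopic signature.
import numpy as np

national_emission_ktC = 120_000.0   # fictitious annual total
national_del13C = -28.5             # per mil, fictitious signature

# Fictitious 1-degree population counts for cells inside the country mask.
population = np.array([[0.0, 2e6, 5e6],
                       [1e6, 8e6, 3e6]])
weights = population / population.sum()

gridded_emission = national_emission_ktC * weights        # mass per cell
gridded_del13C = np.where(population > 0, national_del13C, np.nan)
print(gridded_emission.round(0))
```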
Multidimensional Poverty in China: Findings Based on the CHNS
ERIC Educational Resources Information Center
Yu, Jiantuo
2013-01-01
This paper estimates multidimensional poverty in China by applying the Alkire-Foster methodology to the China Health and Nutrition Survey 2000-2009 data. Five dimensions are included: income, living standard, education, health and social security. Results suggest that rapid economic growth has resulted not only in a reduction in income poverty but…
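A compact sketch of the Alkire-Foster M0 computation with the five dimensions named above, assuming equal weights and a 1/3 cutoff; the deprivation matrix is invented, not CHNS data.

```python
# Toy Alkire-Foster adjusted headcount: M0 = H * A.
import numpy as np

dims = ["income", "living standard", "education", "health", "social security"]
w = np.full(5, 1/5)                      # equal weights (an assumption)
# Rows: households; 1 = deprived in that dimension (invented).
g0 = np.array([[1, 1, 0, 1, 1],
               [0, 1, 0, 0, 0],
               [1, 0, 1, 1, 0],
               [0, 0, 0, 0, 0]])

k = 1/3                                   # poverty cutoff on weighted score
score = g0 @ w
poor = score >= k
H = poor.mean()                           # multidimensional headcount ratio
A = score[poor].mean() if poor.any() else 0.0   # average deprivation share
print(f"H={H:.2f}  A={A:.2f}  M0={H*A:.2f}")
```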
Assessment of undiscovered oil and gas resources of the Paris Basin, France, 2015
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Le, Phuong A.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers, Heidi M.
2015-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 222 million barrels of unconventional oil; 2,092 billion cubic feet of unconventional gas; 18 million barrels of conventional oil; and 47 billion cubic feet of conventional gas resources in the Paris Basin of France.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.; Pollastro, Richard M.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 5.8 billion barrels of oil and 115 trillion cubic feet of undiscovered natural gas in five geologic provinces in the areas of Papua New Guinea, eastern Indonesia, and East Timor.
Core Issues: Reflections on Sustainability in Australian University Coursework Programs
ERIC Educational Resources Information Center
Sherren, Kate
2006-01-01
Purpose: In addition to mapping the consistency in rhetoric behind education for sustainability, despite changes in jargon over the past 30 years, this paper endeavours to estimate the degree to which these ideas have been integrated into the Australian tertiary sector. Design/methodology/approach: The results of a recent internet-based audit of…
The Field Production of Water for Injection
1985-12-01
Estimated water-for-injection requirements per patient type: bedridden patient, 0.75 L/day; average/all diseased patients, 0.50 L/day. (There is no feasible methodology to forecast the number of procedures per...) An estimate of the liters/day needed may be calculated based on a forecasted patient stream, including
Klett, Timothy R.; Schenk, Christopher J.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Mercier, Tracey J.; Pitman, Janet K.; Tennyson, Marilyn E.
2016-11-10
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean continuous resources of 12 billion barrels of oil and 75 trillion cubic feet of gas in the Bazhenov Formation of the West Siberian Basin Province, Russia.
Marra, Kristen R.; Gaswirth, Stephanie B.; Schenk, Christopher J.; Leathers-Miller, Heidi M.; Klett, Timothy R.; Mercier, Tracey J.; Le, Phuong A.; Tennyson, Marilyn E.; Finn, Thomas M.; Hawkins, Sarah J.; Brownfield, Michael E.
2017-05-15
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean resources of 4.2 billion barrels of oil and 3.1 trillion cubic feet of gas in the Spraberry Formation of the Midland Basin, Permian Basin Province, Texas.
Assessment of Permian tight oil and gas resources in the Junggar basin of China, 2016
Potter, Christopher J.; Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Gaswirth, Stephanie B.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Brownfield, Michael E.; Pitman, Janet K.; Mercier, Tracey J.; Le, Phuong A.; Drake, Ronald M.
2017-04-05
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 764 million barrels of oil and 3.5 trillion cubic feet of gas in tight reservoirs in the Permian Lucaogou Formation in the Junggar basin of northwestern China.
Preliminary results of the global forest biomass survey
S. Healey; E. Lindquist
2014-01-01
Many countries do not yet have well-established national forest inventories, and among those that do, significant methodological differences exist, particularly in the estimation of standing forest biomass. Global space-based LiDAR (Light Detection and Ranging) from NASA's now-completed ICESat mission provided consistent, high-quality measures of canopy height and...
Analysing the Preferences of Prospective Students for Higher Education Institution Attributes
ERIC Educational Resources Information Center
Walsh, Sharon; Flannery, Darragh; Cullinan, John
2018-01-01
We utilise a dataset of students in their final year of upper secondary education in Ireland to provide a detailed examination of the preferences of prospective students for higher education institutions (HEIs). Our analysis is based upon a discrete choice experiment methodology with willingness to pay estimates derived for specific HEI attributes…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, J.-J.; Chang, Y.-S.; Hartmann, H.
2013-09-01
This report presents a general methodology for obtaining preliminary estimates of the potential human health risks associated with developing a utility-scale solar energy facility on a contaminated site, based on potential exposures to contaminants in soils (including transport of those contaminants into the air).
Organizational Approach to the Ergonomic Examination of E-Learning Modules
ERIC Educational Resources Information Center
Lavrov, Evgeniy; Kupenko, Olena; Lavryk, Tetiana; Barchenko, Natalia
2013-01-01
With a significant increase in the number of e-learning resources, the issue of quality is of current importance. An analysis of existing scientific and methodological literature shows the variety of approaches, methods and tools used to evaluate e-learning materials. This paper proposes an approach based on a procedure for estimating parameters of…
Methodical Bases for the Regional Information Potential Estimation
ERIC Educational Resources Information Center
Ashmarina, Svetlana I.; Khasaev, Gabibulla R.; Mantulenko, Valentina V.; Kasarin, Stanislav V.; Dorozhkin, Evgenij M.
2016-01-01
The relevance of the investigated problem stems from the need to assess the level of informatization of a region and from the insufficient development of the theoretical, content-technological, scientific and methodological aspects of assessing a region's information potential. The aim of the research work is to develop a…
DOT National Transportation Integrated Search
1996-04-01
The study investigates the application of simulation along with field observations for estimation of exclusive left-turn saturation flow rate and capacity. The entire research has covered the following principal subjects: (1) a saturation flow model ...
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
Schenk, Christopher J.; Mercier, Tracey J.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Finn, Thomas M.; Pitman, Janet K.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Klett, Timothy R.; Leathers-Miller, Heidi M.
2018-04-13
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 198 billion cubic feet of continuous gas in the Phosphoria Formation of the Wyoming Thrust Belt Province, Wyoming, Idaho, and Utah.
In general, the accuracy of a predicted toxicity value increases with increasing similarity between the query chemical and the chemicals used to develop a QSAR model. A toxicity estimation methodology employing this finding has been developed. A hierarchical based clustering t...
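A minimal sketch of the similarity idea stated above, assuming Tanimoto similarity over binary fingerprints and a similarity-weighted nearest-neighbor average; the fingerprints and toxicity values are invented, and this is not the cited methodology's implementation.

```python
# Toy similarity-weighted toxicity estimate: the mean neighbor similarity
# doubles as a rough confidence indicator, echoing the finding stated above.
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity of two binary fingerprints."""
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

rng = np.random.default_rng(1)
train_fp = rng.integers(0, 2, size=(50, 64))       # invented fingerprints
train_tox = rng.normal(2.0, 0.5, size=50)          # invented toxicity values
query = rng.integers(0, 2, size=64)

sims = np.array([tanimoto(query, fp) for fp in train_fp])
top = np.argsort(sims)[-5:]                        # 5 most similar chemicals
estimate = np.average(train_tox[top], weights=sims[top])
print(f"estimate={estimate:.2f}, mean neighbor similarity={sims[top].mean():.2f}")
```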
Assessment of unconventional oil and gas resources in Northeast Mexico, 2014
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Brownfield, Michael E.; Pawlewicz, Mark J.; Weaver, Jean Noe
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 0.78 billion barrels of unconventional oil, 23.5 trillion cubic feet of unconventional gas, and 0.88 billion barrels of natural gas liquids in the Sabinas Basin, Burgos Basin, and Tampico-Misantla Basin provinces of northeast Mexico.