Sample records for estimate remaining component

  1. Real-Time Aircraft Engine-Life Monitoring

    NASA Technical Reports Server (NTRS)

    Klein, Richard

    2014-01-01

    This project developed an in-service life-monitoring system capable of predicting the remaining component and system life of aircraft engines. The embedded system provides real-time, in-flight monitoring of the engine's thrust, exhaust gas temperature, efficiency, and the speed and time of operation. Based upon this data, the life-estimation algorithm calculates the remaining life of the engine components and uses this data to predict the remaining life of the engine. The calculations are based on the statistical life distribution of the engine components and their relationship to load, speed, temperature, and time.
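
    The record does not spell out the algorithm; as a rough illustration of remaining-life estimation from a statistical component-life distribution, the Python sketch below computes the conditional median remaining life of a component that has already survived a given number of operating hours. The Weibull parameters are illustrative assumptions, not values from the report.

        import numpy as np

        def median_remaining_life(t_hours, shape=2.5, scale=20000.0):
            """Median remaining life of a Weibull-distributed component that
            has already survived t_hours (illustrative parameters)."""
            # Conditional survival: find r with S(t + r) = 0.5 * S(t),
            # where S(t) = exp(-(t / scale) ** shape).
            s_t = np.exp(-(t_hours / scale) ** shape)
            t_median = scale * (-np.log(0.5 * s_t)) ** (1.0 / shape)
            return t_median - t_hours

        print(median_remaining_life(5000.0))  # remaining hours, toy numbers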

  2. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
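
    As a hedged illustration of the "estimated RUL probability density function" concept (not the authors' algorithm), the following sketch propagates a Gaussian state estimate, such as one produced by a Kalman filter, through an assumed linear degradation model by Monte Carlo sampling; the resulting samples approximate the estimated RUL pdf, which is distinct from the unknown true RUL pdf.

        import numpy as np

        rng = np.random.default_rng(0)

        # Gaussian state estimate (e.g., a Kalman filter posterior) over the
        # current damage level and degradation rate; values are illustrative.
        damage = rng.normal(0.6, 0.05, size=10_000)       # current damage
        rate = np.clip(rng.normal(0.01, 0.002, size=10_000), 1e-6, None)
        threshold = 1.0                                   # failure threshold

        rul_samples = (threshold - damage) / rate         # hours to threshold
        print(np.percentile(rul_samples, [5, 50, 95]))    # estimated RUL pdf summary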

  3. A review of the challenges and opportunities in estimating above ground forest biomass using tree-level models

    Treesearch

    Hailemariam Temesgen; David Affleck; Krishna Poudel; Andrew Gray; John Sessions

    2015-01-01

    Accurate biomass measurements and analyses are critical components in quantifying carbon stocks and sequestration rates, assessing potential impacts due to climate change, locating bio-energy processing plants, and mapping and planning fuel treatments. To this end, biomass equations will remain a key component of future carbon measurements and estimation. As...

  4. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify its effectiveness, the methodology was benchmarked against a data-based simple linear regression model, which was shown to perform equally to or worse than the presented methodology. Furthermore, the comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
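
    A loose sketch of the two ingredients named above, under stated assumptions: expert knowledge encoded as fuzzy sets over a degradation parameter, and a linear regression extrapolated to a failure point. The fuzzy sets, readings, and failure level are illustrative, not the paper's turbine data.

        import numpy as np

        def triangular(x, a, b, c):
            """Triangular fuzzy membership over a degradation parameter."""
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        # Illustrative degradation readings over time (arbitrary units).
        t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
        x = np.array([0.10, 0.18, 0.25, 0.34, 0.41])

        # Expert experience as fuzzy sets spanning the parameter range.
        sets = {"low": (0.0, 0.1, 0.3), "medium": (0.2, 0.4, 0.6), "high": (0.5, 0.8, 1.0)}
        memberships = {name: triangular(x[-1], *abc) for name, abc in sets.items()}

        # Linear regression of degradation vs. time, extrapolated to failure.
        slope, intercept = np.polyfit(t, x, 1)
        failure_level = 0.9                  # assumed component failure point
        rul = (failure_level - intercept) / slope - t[-1]
        print(memberships, "RUL:", rul)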

  5. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
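
    A toy version of the isolation logic described above (not the paper's adaptive estimator design): each fault hypothesis has a matched estimator, and the hypothesis whose residual components all stay below their thresholds is retained, since sufficiently different faults drive at least one residual component of every other estimator over its threshold.

        import numpy as np

        def isolate_fault(residuals, thresholds):
            """Return the single hypothesis whose residual components all
            remain below their thresholds, or None if isolation fails."""
            matches = [name for name, r in residuals.items()
                       if np.all(np.abs(r) < thresholds[name])]
            return matches[0] if len(matches) == 1 else None

        residuals = {                                # illustrative snapshots
            "sensor_fault":    np.array([0.1, 0.2, 0.1]),
            "actuator_fault":  np.array([1.5, 0.4, 0.2]),
            "component_fault": np.array([0.9, 2.1, 0.3]),
        }
        thresholds = {name: np.array([0.5, 0.5, 0.5]) for name in residuals}
        print(isolate_fault(residuals, thresholds))  # -> sensor_fault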

  6. Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.

    2014-01-01

    Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) transducer. I/P transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady-state output and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
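
    A minimal sketch of the lookup-table idea under stated assumptions: a stand-in physics model generates steady-state outputs over a grid of fault (wear) parameter values, an observed output is inverted by nearest neighbour, and a regression over successive wear estimates is extrapolated to an assumed wear limit.

        import numpy as np

        # Stand-in physics model: steady-state output vs. wear parameter
        # (the paper uses an I/P transducer model; this curve is assumed).
        wear_grid = np.linspace(0.0, 1.0, 101)
        output_grid = 4.0 - 1.5 * wear_grid**1.2          # the lookup table

        def wear_from_output(y):
            """Invert the lookup table by nearest neighbour."""
            return wear_grid[np.argmin(np.abs(output_grid - y))]

        # Wear estimates at successive times, a regression for the wear
        # propagation parameter, and extrapolation to a wear limit.
        times = np.array([0.0, 100.0, 200.0, 300.0])
        wear = np.array([wear_from_output(y) for y in (3.95, 3.80, 3.66, 3.50)])
        rate, offset = np.polyfit(times, wear, 1)
        wear_limit = 0.8                                  # assumed threshold
        print("RUL:", (wear_limit - offset) / rate - times[-1])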

  7. Temperature environment for 9975 packages stored in KAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daugherty, W. L.

    Plutonium materials are stored in the K Area Complex (KAC) in shipping packages, typically the 9975 shipping package. In order to estimate realistic degradation rates for components within the shipping package (i.e. the fiberboard overpack and O-ring seals), it is necessary to understand actual facility temperatures, which can vary daily and seasonally. Relevant facility temperature data available from several periods throughout its operating history have been reviewed. The annual average temperature within the Crane Maintenance Area has ranged from approximately 70 to 74 °F, although there is significant seasonal variation and lesser variation among different locations within the facility. The long-term average degradation rate for 9975 package components is very close to that expected if the component were to remain continually at the annual average temperature. This result remains valid for a wide range of activation energies (which describe the variation in degradation rate as the temperature changes), provided the activation energy remains constant over the seasonal range of component temperatures. It is recommended that component degradation analyses and service life estimates incorporate these results. Specifically, it is proposed that future analyses assume an average facility ambient air temperature of 94 °F. This value is bounding for all packages, and includes margin for several factors such as increased temperatures within the storage arrays, the addition of more packages in the future, and future operational changes.
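
    A worked check of the claim about average temperatures, under assumed numbers: the sketch averages an Arrhenius-type degradation rate over a sinusoidal seasonal cycle and compares it with the rate at the annual average temperature. The activation energy and temperature swing are illustrative, not values from the report.

        import numpy as np

        K_B = 8.617e-5                        # Boltzmann constant, eV/K

        def rate(temp_f, e_a=0.8):
            """Arrhenius degradation rate (arbitrary prefactor); the
            activation energy e_a (eV) is an assumed value."""
            temp_k = (temp_f - 32.0) / 1.8 + 273.15
            return np.exp(-e_a / (K_B * temp_k))

        days = np.linspace(0.0, 1.0, 365)
        temps = 72.0 + 10.0 * np.sin(2.0 * np.pi * days)   # seasonal swing
        print(rate(temps).mean() / rate(72.0))  # ~1.1: near the average-T rate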

  8. Characterization of shrubland ecosystem components as continuous fields in the northwest United States

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.; Rigge, Matthew B.; Shi, Hua; Meyer, Debbie

    2015-01-01

    Accurate and consistent estimates of shrubland ecosystem components are crucial to a better understanding of ecosystem conditions in arid and semiarid lands. An innovative approach was developed by integrating multiple sources of information to quantify shrubland components as continuous field products within the National Land Cover Database (NLCD). The approach consists of several procedures including field sample collections, high-resolution mapping of shrubland components using WorldView-2 imagery and regression tree models, Landsat 8 radiometric balancing and phenological mosaicking, medium resolution estimates of shrubland components following different climate zones using Landsat 8 phenological mosaics and regression tree models, and product validation. Fractional covers of nine shrubland components were estimated: annual herbaceous, bare ground, big sagebrush, herbaceous, litter, sagebrush, shrub, sagebrush height, and shrub height. Our study area included the footprint of six Landsat 8 scenes in the northwestern United States. Results show that most components have relatively significant correlations with validation data, have small normalized root mean square errors, and correspond well with expected ecological gradients. While some uncertainties remain with height estimates, the model formulated in this study provides a cross-validated, unbiased, and cost effective approach to quantify shrubland components at a regional scale and advances knowledge of horizontal and vertical variability of these components.

  9. Physics-of-Failure Approach to Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

    As more and more electric vehicles enter daily operation, a critical challenge lies in accurate prediction of the behavior of the electrical components present in the system. In the case of electric vehicles, computing remaining battery charge is safety-critical. In order to tackle and solve the prediction problem, it is essential to have awareness of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To be able to predict the future state of the system, it is also required to possess knowledge of the current and future operations of the vehicle. In this presentation, our approach to developing a system-level health monitoring safety indicator for different electronic components is described; it runs estimation and prediction algorithms to determine state-of-charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and further for decision making.

  10. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty, (2) process noise; and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
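
    A hedged sketch of the inverse-FORM idea on an assumed linear-Gaussian degradation model (the paper's models differ): for a chosen reliability index beta, bisection finds the time at which the failure probability reaches Phi(-beta), giving a probability bound on RUL without full sampling.

        import numpy as np

        # Assumed linear-Gaussian degradation: damage(t) = d0 + r * t,
        # failure when damage >= 1.0; parameters are illustrative.
        MU_D, S_D = 0.6, 0.05               # current damage (mean, std)
        MU_R, S_R = 0.01, 0.002             # degradation rate (mean, std)
        THRESHOLD = 1.0

        def margin(t, beta):
            """Zero when the beta-sigma upper quantile of damage(t) reaches
            the failure threshold, i.e. P(failure by t) = Phi(-beta)."""
            mean = MU_D + MU_R * t
            std = np.sqrt(S_D**2 + (S_R * t) ** 2)
            return THRESHOLD - (mean + beta * std)

        def rul_bound(beta, lo=0.0, hi=1e4, tol=1e-6):
            """Bisection for the probability bound on RUL at index beta."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if margin(mid, beta) > 0 else (lo, mid)
            return lo

        print(rul_bound(beta=2.0))          # ~97.7% lower bound on RUL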

  11. A review on prognostics approaches for remaining useful life of lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Su, C.; Chen, H. J.

    2017-11-01

    Lithium-ion (Li-ion) batteries are a core component of various industrial systems, including satellites, spacecraft and electric vehicles. The mechanism of performance degradation and remaining useful life (RUL) estimation correlate closely with the operating state and reliability of the aforementioned systems. Furthermore, RUL prediction of Li-ion batteries is crucial for operation scheduling, spare parts management and maintenance decisions for such systems. In recent years, performance degradation prognostics and RUL estimation approaches have become a focus of research concerning Li-ion batteries. This paper summarizes the approaches used in Li-ion battery RUL estimation. Three categories are classified accordingly, i.e. model-based approaches, data-based approaches and hybrid approaches. The key issues and future trends for battery RUL estimation are also discussed.

  12. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function and the true remaining useful life probability density function is explained, and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  13. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate and its effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  14. A Distributed Approach to System-Level Prognostics

    DTIC Science & Technology

    2012-09-01

    the end of (useful) life (EOL) and/or the remaining useful life (RUL) of components, subsystems, or systems. The prognostics problem itself can be...system state estimate, computes EOL and/or RUL. In this paper, we focus on a model-based prognostics approach (Orchard & Vachtsevanos, 2009; Daigle...been focused on individual components, and determining their EOL and RUL, e.g., (Orchard & Vachtsevanos, 2009; Saha & Goebel, 2009; Daigle & Goebel

  15. The vertical, the horizontal and the rest: anatomy of the middle cohomology of Calabi-Yau fourfolds and F-theory applications

    NASA Astrophysics Data System (ADS)

    Braun, A. P.; Watari, T.

    2015-01-01

    The four-form field strength in F-theory compactifications on Calabi-Yau four-folds takes its value in the middle cohomology group H^4. The middle cohomology is decomposed into a vertical, a horizontal and a remaining component, all three of which are present in general. We argue that a flux along the remaining or vertical component may break some symmetry, while a purely horizontal flux does not influence the unbroken part of the gauge group or the net chirality of charged matter fields. This makes the decomposition crucial to the counting of flux vacua in the context of F-theory GUTs. We use mirror symmetry to derive a combinatorial formula for the dimensions of these components applicable to any toric Calabi-Yau hypersurface, and also make a partial attempt at providing a geometric characterization of the four-cycles Poincaré dual to the remaining component of H^4. It is also found in general elliptic Calabi-Yau fourfolds supporting SU(5) gauge symmetry that a remaining component can be present, for example, in a form crucial to the symmetry breaking SU(5) → SU(3)_C × SU(2)_L × U(1)_Y. The dimension of the horizontal component is used to derive an estimate of the statistical distribution of the number of generations and the rank of 7-brane gauge groups in the landscape of F-theory flux vacua.
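
    In the abstract's terminology, the decomposition of the middle cohomology can be written schematically (the notation below is assumed for illustration, not copied from the paper):

        H^4(Y,\mathbb{R}) \;=\; H^4_{\mathrm{vert}}(Y) \,\oplus\, H^4_{\mathrm{horiz}}(Y) \,\oplus\, H^4_{\mathrm{rem}}(Y)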

  16. Reconciling Land-Ocean Moisture Transport Variability in Reanalyses with P-ET in Observationally-Driven Land Surface Models

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Bosilovich, Michael G.; Roberts, Jason B.

    2016-01-01

    Vertically integrated atmospheric moisture transport from ocean to land [vertically integrated atmospheric moisture flux convergence (VMFC)] is a dynamic component of the global climate system but remains problematic in atmospheric reanalyses, with current estimates having significant multidecadal global trends differing even in sign. Continual evolution of the global observing system, particularly stepwise improvements in satellite observations, has introduced discrete changes in the ability of data assimilation to correct systematic model biases, manifesting as nonphysical variability. Land surface models (LSMs) forced with observed precipitation P and near-surface meteorology and radiation provide estimates of evapotranspiration (ET). Since variability of atmospheric moisture storage is small on interannual and longer time scales, VMFC = P − ET is a good approximation, and LSMs can provide an alternative estimate. However, heterogeneous density of rain gauge coverage, especially the sparse coverage over tropical continents, remains a serious concern. Rotated principal component analysis (RPCA) with prefiltering of VMFC to isolate the artificial variability is used to investigate artifacts in five reanalysis systems. This procedure, although ad hoc, enables useful VMFC corrections over global land. The P − ET estimates from seven different LSMs are evaluated and subsequently used to confirm the efficacy of the RPCA-based adjustments. Global VMFC trends over the period 1979-2012, ranging from 0.07 to −0.03 millimeters per day per decade, are reduced by the adjustments to 0.016 millimeters per day per decade, much closer to the LSM P − ET estimate (0.007 millimeters per day per decade). Neither is significant at the 90 percent level. ENSO (El Niño-Southern Oscillation)-related modulation of VMFC and P − ET remains the largest global interannual signal, with the mean LSM and adjusted reanalysis time series correlating at 0.86.

  17. Integration and Assessment of Component Health Prognostics in Supervisory Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Bonebrake, Christopher A.; Dib, Gerges

    Enhanced risk monitors (ERMs) for active components in advanced reactor concepts use predictive estimates of component failure to update, in real time, predictive safety and economic risk metrics. These metrics have been shown to be capable of use in optimizing maintenance scheduling and managing plant maintenance costs. Integrating this information with plant supervisory control systems increases the potential for making control decisions that utilize real-time information on component conditions. Such decision making would limit the possibility of plant operations that increase the likelihood of degrading the functionality of one or more components while maintaining the overall functionality of the plant. ERMs use sensor data to provide real-time information about equipment condition for deriving risk monitors. This information is used to estimate the remaining useful life and probability of failure of these components. By combining this information with plant probabilistic risk assessment models, predictive estimates of the risk posed by continued plant operation in the presence of detected degradation may be obtained. In this paper, we describe this methodology in greater detail, and discuss its integration with a prototypic software-based plant supervisory control platform. In order to integrate these two technologies and evaluate the integrated system, software to simulate the sensor data was developed, prognostic models for feedwater valves were developed, and several use cases defined. The full paper will describe these use cases, and the results of the initial evaluation.

  18. Empirical estimates to reduce modeling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges

    USGS Publications Warehouse

    Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.

    2013-01-01

    The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.

  19. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    PubMed

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirement that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included the: a) identification of quantitative tests, b) classification of tests in relation to their clinical purpose, and c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while results from external quality assessment schemes (EQAs) obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; of the remaining MPs, the bias component was not estimable for 22 because the EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor contributor to MU, the bias component being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results, since MU values are calculated using a combination of both the long-term imprecision from IQC results and the bias from EQAs results.
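
    A worked example of one common GUM-style combination consistent with the components named above (the study's exact formula may differ; all numbers are illustrative): long-term imprecision, bias, and bias uncertainty combined in quadrature, then expanded with a coverage factor of 2.

        import numpy as np

        u_imprecision = 2.1     # long-term IQC imprecision (CV, %)
        bias = 1.5              # bias from EQAs results (%)
        u_bias = 0.6            # uncertainty of the bias estimate (%)

        # Quadrature combination with the uncorrected bias folded in, then
        # expansion with k = 2 (~95% coverage); one convention among several.
        u_combined = np.sqrt(u_imprecision**2 + bias**2 + u_bias**2)
        print("expanded MU:", round(2 * u_combined, 2), "%")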

  20. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instant when the prediction is made from the instant when the extrapolated damage reaches the failure threshold.
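
    A minimal sketch of the two-map decomposition under assumed models: one regression maps a feature to damage, a second maps an operating condition to damage rate; online, the first gives the current damage, the second the accumulation rate, and RUL follows from the threshold crossing. The synthetic training data and threshold are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # Off-line: fit the two maps from (synthetic) ground-truth data.
        feature = rng.uniform(0.0, 1.0, 200)
        damage = 0.9 * feature + rng.normal(0.0, 0.02, 200)
        feat_map = np.polyfit(feature, damage, 1)       # feature -> damage

        load = rng.uniform(0.0, 1.0, 200)
        rate = 0.005 + 0.01 * load + rng.normal(0.0, 0.001, 200)
        cond_map = np.polyfit(load, rate, 1)            # condition -> damage rate

        # On-line: current damage, expected future rate, threshold crossing.
        d_now = np.polyval(feat_map, 0.55)              # current feature value
        r_future = np.polyval(cond_map, 0.70)           # expected future load
        print("RUL:", (0.95 - d_now) / r_future)        # 0.95 = failure threshold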

  1. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.

  2. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
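
    A compact sketch of joint state-parameter estimation with a particle filter on a toy wear model (not the paper's centrifugal pump model): each particle carries both a damage state and a wear-rate parameter, and a small roughening noise stands in for the paper's variance control mechanism.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000
        damage = rng.normal(0.2, 0.02, n)         # particle damage states
        wear_rate = rng.uniform(0.001, 0.02, n)   # particle wear parameters

        true_d, true_w, meas_std = 0.2, 0.01, 0.01
        for _ in range(50):                       # 50 measurement updates
            true_d += true_w
            z = true_d + rng.normal(0.0, meas_std)
            wear_rate = np.clip(wear_rate + rng.normal(0.0, 1e-4, n),
                                1e-6, None)              # parameter roughening
            damage += wear_rate                          # propagate particles
            w = np.exp(-0.5 * ((z - damage) / meas_std) ** 2)
            w /= w.sum()                                 # likelihood weights
            idx = rng.choice(n, size=n, p=w)             # resample
            damage, wear_rate = damage[idx], wear_rate[idx]

        rul = (1.0 - damage) / wear_rate          # steps until threshold 1.0
        print(np.percentile(rul, [5, 50, 95]))    # probabilistic RUL summary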

  3. A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction.

    PubMed

    Yu, Nannan; Wu, Lingling; Zou, Dexuan; Chen, Ying; Lu, Hanbing

    2017-01-01

    In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.

  4. Human Exposure Estimates and Oral Equivalents of In Vitro Bioactivity for Prioritizing, Monitoring and Testing of Environmental Chemicals

    EPA Science Inventory

    High-throughput, lower-cost, in vitro toxicity testing is currently being evaluated for use in prioritization and eventually for predicting in vivo toxicity. Interpreting in vitro data in the context of in vivo human relevance remains a formidable challenge. A key component in us...

  5. Genepleio software for effective estimation of gene pleiotropy from protein sequences.

    PubMed

    Chen, Wenhai; Chen, Dandan; Zhao, Ming; Zou, Yangyun; Zeng, Yanwu; Gu, Xun

    2015-01-01

    Though pleiotropy, which refers to the phenomenon of a gene affecting multiple traits, has long played a central role in genetics, development, and evolution, estimating the number of pleiotropy components remains difficult. In this paper, we report a newly developed software package, Genepleio, to estimate the effective gene pleiotropy from phylogenetic analysis of protein sequences. Since this estimate can be interpreted as the minimum pleiotropy of a gene, it is used as a reference for many empirical pleiotropy measures. This work will facilitate our understanding of how gene pleiotropy affects the pattern of the genotype-phenotype map and the consequences of organismal evolution.

  6. Study of compact radio sources using interplanetary scintillations at 111 MHz. The Pearson-Readhead sample

    NASA Astrophysics Data System (ADS)

    Tyul'Bashev, S. A.

    2009-01-01

    A complete sample of radio sources has been studied using the interplanetary scintillation method. In total, 32 sources were observed, with scintillations detected in 12 of them. The remaining sources have upper limits for the flux densities of their compact components. Integrated flux densities are estimated for 18 sources.

  7. Quantifying pteridines in the heads of blow flies (Diptera: Calliphoridae): Application for forensic entomology.

    PubMed

    Cammack, J A; Reiskind, M H; Guisewite, L M; Denning, S S; Watson, D W

    2017-11-01

    In forensic cases involving entomological evidence, establishing the postcolonization interval (post-CI) is a critical component of the investigation. Traditional methods of estimating the post-CI rely on estimating the age of immature blow flies (Diptera: Calliphoridae) collected from remains. However, in cases of delayed discovery (e.g., when remains are located indoors), these insects may have completed their development and be present in the environment as adults. Adult fly collections are often ignored in cases of advanced decomposition because of a presumed little relevance to the investigation; herein we present information on how these insects can be of value. In this study we applied an age-grading technique to estimate the age of adults of Chrysomya megacephala (Fabricius), Cochliomyia macellaria (Fabricius), and Phormia regina (Meigen), based on the temperature-dependent accumulation of pteridines in the compound eyes, when reared at temperatures ranging from 5 to 35°C. Age could be estimated for all species × sex × rearing temperature combinations (mean r² ± SE: 0.90 ± 0.01) for all but P. regina reared at 5.4°C. These models can be used to increase the precision of post-CI estimates for remains found indoors, and the high r² values of 22 of the 24 regression equations indicate that this is a valid method for estimating the age of adult blow flies at temperatures ≥15°C. Copyright © 2017 Elsevier B.V. All rights reserved.
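
    A hedged sketch of the age-grading idea: regress pteridine level on accumulated degree-hours in reference rearing data, then invert the calibration to estimate adult age at a scene's mean temperature. The calibration numbers and the 15°C base temperature are illustrative assumptions, not the paper's species-specific equations.

        import numpy as np

        # Reference rearing data: accumulated degree-hours vs. pteridine
        # level (illustrative values).
        degree_hours = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0])
        pteridine = np.array([0.05, 0.21, 0.38, 0.55, 0.73, 0.90])
        slope, intercept = np.polyfit(degree_hours, pteridine, 1)

        def age_days(level, mean_temp_c, base_temp_c=15.0):
            """Degree-hours implied by a pteridine level, converted to days
            at the scene's mean temperature above an assumed base."""
            dh = (level - intercept) / slope
            return dh / ((mean_temp_c - base_temp_c) * 24.0)

        print(age_days(0.45, mean_temp_c=25.0))   # estimated adult age, days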

  8. MSE-impact of PPP-RTK ZTD estimation strategies

    NASA Astrophysics Data System (ADS)

    Wang, K.; Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    In PPP-RTK network processing, the wet component of the zenith tropospheric delay (ZTD) cannot be precisely modelled and thus remains unknown in the observation equations. For small networks, the tropospheric mapping functions of different stations to a given satellite are almost equal to each other, thereby causing a near rank-deficiency between the ZTDs and satellite clocks. The stated near rank-deficiency can be solved by estimating the wet ZTD components relative to that of the reference receiver, while the wet ZTD component of the reference receiver is constrained to zero. However, with increasing network scale and humidity around the reference receiver, enlarged mismodelled effects could bias the network and the user solutions. To consider the influences of both the noise and the biases, the mean-squared errors (MSEs) of different network and user parameters are studied analytically employing both ZTD estimation strategies. We conclude that for a certain set of parameters, the difference in their MSE structures using both strategies is only driven by the square of the reference wet ZTD component and the formal variance of its solution. Depending on the network scale and the humidity condition around the reference receiver, the ZTD estimation strategy that delivers more accurate solutions might be different. Simulations are performed to illustrate the conclusions made by the analytical studies. We find that estimating the ZTDs relatively in large networks and humid regions (for the reference receiver) could significantly degrade the network ambiguity success rates. Using ambiguity-fixed network-derived PPP-RTK corrections, for networks with an inter-station distance within 100 km, the choice of ZTD estimation strategy is not crucial for single-epoch ambiguity-fixed user positioning. Using ambiguity-float network corrections, for networks with inter-station distances of 100, 300 and 500 km in humid regions (for the reference receiver), the root-mean-squared errors (RMSEs) of the estimated user coordinates using relative ZTD estimation could be higher than those under the absolute case, with differences up to millimetres, centimetres and decimetres, respectively.

  9. Unauthorized Immigration to the United States: Annual Estimates and Components of Change, by State, 1990 to 2010.

    PubMed

    Warren, Robert; Warren, John Robert

    2013-06-01

    We describe a method for producing annual estimates of the unauthorized immigrant population in the United States and components of population change, for each state and D.C., for 1990 to 2010. We quantify a sharp drop in the number of unauthorized immigrants arriving since 2000, and we demonstrate the role of departures from the population (emigration, adjustment to legal status, removal by the Department of Homeland Security (DHS), and deaths) in reducing population growth from one million in 2000 to population losses in 2008 and 2009. The number arriving in the U.S. peaked at more than one million in 1999 to 2001, and then declined rapidly through 2009. We provide evidence that population growth stopped after 2007 primarily because entries declined and not because emigration increased during the economic crisis. Our estimates of the total unauthorized immigrant population in the U.S. and in the top ten states are comparable to those produced by DHS and the Pew Hispanic Center. For the remaining states and D.C., our data and methods produce estimates with smaller ranges of sampling error.

  10. Structural Definition and Mass Estimation of Lunar Surface Habitats for the Lunar Architecture Team Phase 2 (LAT-2) Study

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Wu, K, Chauncey; Smith, Russell W.

    2008-01-01

    The Lunar Architecture Team Phase 2 study defined and assessed architecture options for a Lunar Outpost at the Moon's South Pole. The Habitation Focus Element Team was responsible for developing concepts for all of the Habitats and pressurized logistics modules particular to each of the architectures, and defined the shapes, volumes and internal layouts considering human factors, surface operations and safety requirements, as well as Lander mass and volume constraints. The Structures Subsystem Team developed structural concepts, sizing estimates and mass estimates for the primary Habitat structure. In these studies, the primary structure was decomposed into a more detailed list of components to be sized to gain greater insight into concept mass contributors. Structural mass estimates were developed that captured the effect of major design parameters such as internal pressure load. Analytical and empirical equations were developed for each structural component identified. Over 20 different hard-shell, hybrid expandable and inflatable soft-shell Habitat and pressurized logistics module concepts were sized and compared to assess structural performance and efficiency during the study. Habitats were developed in three categories: Mini Habs that are removed from the Lander and placed on the Lunar surface, Monolithic Habitats that remain on the Lander, and Habitats that are part of the Mobile Lander system. Each category of Habitat resulted in structural concepts with advantages and disadvantages. The same modular shell components could be used for the Mini Hab concept, maximizing commonality and minimizing development costs. Larger Habitats had higher volumetric mass efficiency and floor area than smaller Habitats (whose mass was dominated by fixed items such as domes and frames). Hybrid and pure expandable Habitat structures were very mass-efficient, but the structures technology is less mature, and the ability to efficiently package and deploy internal subsystems remains an open issue.

  11. Genetic and environmental contributions to body mass index: comparative analysis of monozygotic twins, dizygotic twins and same-age unrelated siblings.

    PubMed

    Segal, N L; Feng, R; McGuire, S A; Allison, D B; Miller, S

    2009-01-01

    Earlier studies have established that a substantial percentage of variance in obesity-related phenotypes is explained by genetic components. However, only one study has used both virtual twins (VTs) and biological twins and was able to simultaneously estimate additive genetic, non-additive genetic, shared environmental and unshared environmental components in body mass index (BMI). Our current goal was to re-estimate four components of variance in BMI, applying a more rigorous model to biological and virtual multiples with additional data. Virtual multiples share the same family environment, offering unique opportunities to estimate common environmental influence on phenotypes that cannot be separated from the non-additive genetic component using only biological multiples. Data included 929 individuals from 164 monozygotic twin pairs, 156 dizygotic twin pairs, five triplet sets, one quadruplet set, 128 VT pairs, two virtual triplet sets and two virtual quadruplet sets. Virtual multiples consist of one biological child (or twins or triplets) plus one same-aged adoptee who are all raised together since infancy. We estimated the additive genetic, non-additive genetic, shared environmental and unshared random components in BMI using a linear mixed model. The analysis was adjusted for age, age², age³, height, height², height³, gender and race. Both non-additive genetic and common environmental contributions were significant in our model (P-values < 0.0001). No significant additive genetic contribution was found. In all, 63.6% (95% confidence interval (CI) 51.8-75.3%) of the total variance of BMI was explained by a non-additive genetic component, 25.7% (95% CI 13.8-37.5%) by a common environmental component and the remaining 10.7% by an unshared component. Our results suggest that genetic components play an essential role in BMI and that common environmental factors such as diet or exercise also affect BMI. This conclusion is consistent with our earlier study using a smaller sample and shows the utility of virtual multiples for separating non-additive genetic variance from common environmental variance.

  12. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  13. Unauthorized Immigration to the United States: Annual Estimates and Components of Change, by State, 1990 to 2010

    PubMed Central

    Warren, Robert; Warren, John Robert

    2013-01-01

    We describe a method for producing annual estimates of the unauthorized immigrant population in the United States and components of population change, for each state and D.C., for 1990 to 2010. We quantify a sharp drop in the number of unauthorized immigrants arriving since 2000, and we demonstrate the role of departures from the population (emigration, adjustment to legal status, removal by the Department of Homeland Security (DHS), and deaths) in reducing population growth from one million in 2000 to population losses in 2008 and 2009. The number arriving in the U.S. peaked at more than one million in 1999 to 2001, and then declined rapidly through 2009. We provide evidence that population growth stopped after 2007 primarily because entries declined and not because emigration increased during the economic crisis. Our estimates of the total unauthorized immigrant population in the U.S. and in the top ten states are comparable to those produced by DHS and the Pew Hispanic Center. For the remaining states and D.C., our data and methods produce estimates with smaller ranges of sampling error. PMID:23956482

  14. Machine remaining useful life prediction: An integrated adaptive neuro-fuzzy and high-order particle filtering approach

    NASA Astrophysics Data System (ADS)

    Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

    2012-04-01

    Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.

  15. A physics-based algorithm for the estimation of bearing spall width using vibrations

    NASA Astrophysics Data System (ADS)

    Kogan, G.; Klein, R.; Bortman, J.

    2018-05-01

    Evaluation of the damage severity in a mechanical system is required for the assessment of its remaining useful life. In rotating machines, bearings are crucial components. Hence, the estimation of the size of spalls in bearings is important for prognostics of the remaining useful life. Recently, this topic has been extensively studied and many of the methods used for the estimation of spall size are based on the analysis of vibrations. A new tool is proposed in the current study for the estimation of the spall width on the outer ring raceway of a rolling element bearing. The understanding and analysis of the dynamics of the rolling element-spall interaction enabled the development of a generic and autonomous algorithm. The algorithm is generic in the sense that it does not require any human interference to make adjustments for each case. All of the algorithm's parameters are defined by analytical expressions describing the dynamics of the system. The required conditions, such as sampling rate, spall width and depth, defining the feasible region of such algorithms, are analyzed in the paper. The algorithm performance was demonstrated with experimental data for different spall widths.
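
    The core geometric relation behind width estimation of this kind: the spall width is roughly the rolling-contact speed times the time between the entry and exit events of a rolling element. The event detection and all numbers below are a toy stand-in for the paper's analytically derived, autonomous algorithm.

        import numpy as np

        fs = 50_000.0                                   # sampling rate, Hz
        rng = np.random.default_rng(3)
        sig = rng.normal(0.0, 0.05, 500)                # vibration snippet
        sig[100] += 1.0                                 # synthetic entry event
        sig[180] += 1.2                                 # synthetic exit event

        # Detect the two strongest impacts; convert their spacing to width.
        entry, exit_ = np.sort(np.argsort(np.abs(sig))[-2:])
        dt = (exit_ - entry) / fs                       # entry-to-exit time, s
        v_contact = 2.0                                 # contact speed, m/s (assumed)
        print("spall width ~", dt * v_contact * 1e3, "mm")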

  16. Determining Occurrence Dynamics when False Positives Occur: Estimating the Range Dynamics of Wolves from Public Survey Data.

    PubMed

    Miller, David A W; Nichols, James D; Gude, Justin A; Rich, Lindsey N; Podruzny, Kevin M; Hines, James E; Mitchell, Michael S

    2013-01-01

    Large-scale presence-absence monitoring programs have great promise for many conservation applications. Their value can be limited by potential incorrect inferences owing to observational errors, especially when data are collected by the public. To combat this, previous analytical methods have focused on addressing non-detection from public survey data. Misclassification errors have received less attention but are also likely to be a common component of public surveys, as well as many other data types. We derive estimators for dynamic occupancy parameters (extinction and colonization), focusing on the case where certainty can be assumed for a subset of detections. We demonstrate how to simultaneously account for non-detection (false negatives) and misclassification (false positives) when estimating occurrence parameters for gray wolves in northern Montana from 2007-2010. Our primary data source for the analysis was observations by deer and elk hunters, reported as part of the state's annual hunter survey. This data was supplemented with data from known locations of radio-collared wolves. We found that occupancy was relatively stable during the years of the study and wolves were largely restricted to the highest quality habitats in the study area. Transitions in the occupancy status of sites were rare, as occupied sites almost always remained occupied and unoccupied sites remained unoccupied. Failing to account for false positives led to overestimation of both the area inhabited by wolves and the frequency of turnover. The ability to properly account for both false negatives and false positives is an important step to improve inferences for conservation from large-scale public surveys. The approach we propose will improve our understanding of the status of wolf populations and is relevant to many other data types where false positives are a component of observations.

  17. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  18. Estimating the stability of electrical conductivity of filled polymers under the influence of negative temperatures

    NASA Astrophysics Data System (ADS)

    Minakova, N. N.; Ushakov, V. Ya.

    2017-12-01

    One of the key problems in modern materials technology is the synthesis of materials for electrotechnical devices capable of operating under severe conditions. Electrical and power engineering, in particular, demand electrically conductive composite materials that operate at high and low temperatures, various mechanical loads, electric fields, etc. Chaotic arrangement of the electrically conductive component in the matrix and its structural and geometrical inhomogeneity can increase the local electric and thermal energy flux densities up to critical values even when their average values remain moderate. Elastomers filled with technical carbon, a promising component for electrotechnical devices, were chosen as the object of study.

  19. Estimated stocks of circumpolar permafrost carbon with quantified uncertainty ranges and identified data gaps

    DOE PAGES

    Hugelius, Gustaf; Strauss, J.; Zubrzycki, S.; ...

    2014-12-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but SOC stock estimates were poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of permafrost SOC stocks, including quantitative uncertainty estimates, in the 0–3 m depth range in soils as well as for sediments deeper than 3 m in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. Revised estimates are based on significantly larger databases compared to previous studies. Despite this there is evidence of significant remaining regional data gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for deposits below 3 m depth in deltas and the Yedoma region. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are substantial differences in other components, including the fraction of perennially frozen SOC. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 12 and 472 ± 27 Pg for the 0–0.3 and 0–1 m soil depths, respectively (±95% confidence intervals). Storage of SOC in 0–3 m of soils is estimated to 1035 ± 150 Pg. Of this, 34 ± 16 Pg C is stored in poorly developed soils of the High Arctic. Based on generalized calculations, storage of SOC below 3 m of surface soils in deltaic alluvium of major Arctic rivers is estimated as 91 ± 52 Pg. In the Yedoma region, estimated SOC stocks below 3 m depth are 181 ± 54 Pg, of which 74 ± 20 Pg is stored in intact Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits. Total estimated SOC storage for the permafrost region is ∼1300 Pg with an uncertainty range of ∼1100 to 1500 Pg. Of this, ∼500 Pg is in non-permafrost soils, seasonally thawed in the active layer or in deeper taliks, while ∼800 Pg is perennially frozen. In conclusion, this represents a substantial ∼300 Pg lowering of the estimated perennially frozen SOC stock compared to previous estimates.
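
    The abstract's deep-storage components combine by quadrature; a quick Monte Carlo check using the reported means and 95% confidence half-widths (treated here as independent and Gaussian, an assumption) reproduces the ∼1300 Pg total and the ∼1100 to 1500 Pg range.

        import numpy as np

        rng = np.random.default_rng(4)

        # (mean, 95% CI half-width) in Pg C, from the abstract.
        components = {"soils_0_3m": (1035.0, 150.0),
                      "delta_deposits": (91.0, 52.0),
                      "yedoma_region": (181.0, 54.0)}

        total = sum(rng.normal(mean, half / 1.96, 100_000)
                    for mean, half in components.values())
        print(total.mean(), np.percentile(total, [2.5, 97.5]))  # ~1307, ~[1140, 1475]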

  20. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  1. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  2. EDIN0613P weight estimating program. [for launch vehicles

    NASA Technical Reports Server (NTRS)

    Hirsch, G. N.

    1976-01-01

    The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called the EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights, and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). Provided the user remains cognizant of the limits that generalized input imposes on output depth and accuracy, the program's flexibility makes it a useful estimating tool at the conceptual design stage of a launch vehicle.

  3. Impaired early visual response modulations to spatial information in chronic schizophrenia

    PubMed Central

    Knebel, Jean-François; Javitt, Daniel C.; Murray, Micah M.

    2011-01-01

    Early visual processing stages have been demonstrated to be impaired in schizophrenia patients and their first-degree relatives. The amplitude and topography of the P1 component of the visual evoked potential (VEP) are both affected; the latter of which indicates alterations in active brain networks between populations. At least two issues remain unresolved. First, the specificity of this deficit (and suitability as an endophenotype) has yet to be established, with evidence for impaired P1 responses in other clinical populations. Second, it remains unknown whether schizophrenia patients exhibit intact functional modulation of the P1 VEP component; an aspect that may assist in distinguishing effects specific to schizophrenia. We applied electrical neuroimaging analyses to VEPs from chronic schizophrenia patients and healthy controls in response to variation in the parafoveal spatial extent of stimuli. Healthy controls demonstrated robust modulation of the VEP strength and topography as a function of the spatial extent of stimuli during the P1 component. By contrast, no such modulations were evident at early latencies in the responses from patients with schizophrenia. Source estimations localized these deficits to the left precuneus and medial inferior parietal cortex. These findings provide insights on potential underlying low-level impairments in schizophrenia. PMID:21764264

  4. Acute and chronic environmental effects of clandestine methamphetamine waste.

    PubMed

    Kates, Lisa N; Knapp, Charles W; Keenan, Helen E

    2014-09-15

    The illicit manufacture of methamphetamine (MAP) produces substantial amounts of hazardous waste that is dumped illegally. This study presents the first environmental evaluation of waste produced from illicit MAP manufacture. Chemical oxygen demand (COD) was measured to assess immediate oxygen depletion effects. A mixture of five waste components (10 mg/L per chemical) was found to have a COD (130 mg/L) higher than the European Union wastewater discharge regulations (125 mg/L). Two environmental partition coefficients, K(OW) and K(OC), were measured for several chemicals identified in MAP waste. Experimental values were input into a computer fugacity model (EPI Suite™) to estimate environmental fate. Experimental log K(OW) values ranged from -0.98 to 4.91, which were in accordance with computer-estimated values. Experimental K(OC) values ranged from 11 to 72, which were much lower than the default computer values. The experimental fugacity model for discharge to water estimates that waste components will remain in the water compartment for 15 to 37 days. Using a combination of laboratory experimentation and computer modelling, the environmental fate of MAP waste products was estimated. While fugacity models using experimental and computational values were very similar, default computer models should not take the place of laboratory experimentation. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Real-Time Prognostics of a Rotary Valve Actuator

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew

    2015-01-01

    Valves are used in many domains and often have system-critical functions. As such, it is important to monitor the health of valves and their actuators and predict remaining useful life. In this work, we develop a model-based prognostics approach for a rotary valve actuator. Due to limited observability of the component with multiple failure modes, a lumped damage approach is proposed for estimation and prediction of damage progression. In order to support the goal of real-time prognostics, an approach to prediction is developed that does not require online simulation to compute remaining life; rather, a function mapping the damage state to remaining useful life is found offline, so that predictions can be made quickly online with a single function evaluation. Simulation results demonstrate the overall methodology, validating the lumped damage approach and demonstrating real-time prognostics.
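
    The offline mapping idea can be illustrated with a short sketch: a damage-propagation model is run once to failure under a nominal profile, the resulting damage-to-RUL table is stored, and online prediction reduces to a single interpolation. The damage model and all parameter values below are hypothetical placeholders, not the actuator model from the paper.

    ```python
    import numpy as np

    # Offline: simulate damage progression once under a nominal usage profile
    # (illustrative constant-rate damage model; parameters are hypothetical).
    dt = 1.0                      # time step
    damage_rate = 2.5e-4          # nominal damage growth per step
    failure_threshold = 1.0       # damage level at which the valve fails
    damage = [0.0]
    while damage[-1] < failure_threshold:
        damage.append(damage[-1] + damage_rate * dt)
    damage = np.array(damage)
    time = np.arange(len(damage)) * dt
    rul_table = time[-1] - time   # remaining life at each damage level

    def predict_rul(current_damage):
        """Online prediction: a single interpolation, no simulation required."""
        return float(np.interp(current_damage, damage, rul_table))

    print(predict_rul(0.3))  # RUL when 30% of the damage budget is consumed
    ```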

  6. Partitioning evapotranspiration using long-term carbon dioxide and water vapor fluxes

    NASA Astrophysics Data System (ADS)

    Scott, Russell L.; Biederman, Joel A.

    2017-07-01

    The separate components of evapotranspiration (ET) elucidate the pathways and time scales over which water is returned to the atmosphere, but ecosystem-scale measurements of transpiration (T) and evaporation (E) remain elusive. We propose a novel determination of E and T using multiyear eddy covariance estimates of ET and gross ecosystem photosynthesis (GEP). The method is applicable at water-limited sites over time periods during which a linear regression between GEP (abscissa) and ET (ordinate) yields a positive ET axis intercept, an estimate of E. At four summer-rainfall semiarid sites, T/ET increases to a peak coincident with maximum GEP and remains elevated as the growing season progresses, consistent with previous, direct measurements. The seasonal course of T/ET is related to increasing leaf area index and declining frequency of rainy days—an index of the wet surface conditions that promote E—suggesting both surface and climatic controls on ET partitioning.
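
    A minimal sketch of the proposed partitioning, using made-up multiyear aggregates: ET is regressed on GEP, the evaporation estimate E is read off the positive intercept, and transpiration follows by difference.

    ```python
    import numpy as np

    # Hypothetical multiyear eddy-covariance aggregates for a water-limited site.
    gep = np.array([1.2, 2.0, 3.1, 4.4, 5.0, 6.2])   # gross photosynthesis [g C m^-2 d^-1]
    et  = np.array([1.1, 1.5, 1.9, 2.6, 2.9, 3.4])   # evapotranspiration [mm d^-1]

    slope, intercept = np.polyfit(gep, et, 1)        # ET = slope*GEP + E
    E = max(intercept, 0.0)                          # intercept estimates evaporation
    T = et - E                                       # transpiration by difference
    print(f"E ≈ {E:.2f} mm/d, mean T/ET ≈ {np.mean(T / et):.2f}")
    ```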

  7. Effects of irrigation and addition of nitrogen fertiliser on net ecosystem carbon balance for a grassland.

    PubMed

    Moinet, Gabriel Y K; Cieraad, Ellen; Turnbull, Matthew H; Whitehead, David

    2017-02-01

    The ability to quantify the impacts of changing management practices on the components of net ecosystem carbon balance (NB) is required to forecast future changes in soil carbon stocks and potential feedbacks on atmospheric CO2 concentrations. In this study we investigated seasonal changes in the components of net ecosystem carbon balance resulting from the application of irrigation and nitrogen fertiliser to a temperate grassland in New Zealand where we simulated grazing events. We made seasonal measurements of the components of NB using chamber measurements in field plots with and without irrigation and addition of nitrogen fertiliser. We developed models to determine the physiological responses of gross canopy photosynthesis (A), leaf respiration (RL) and soil respiration (RS) to soil and air temperature, soil water content and irradiance, and we estimated annual NB for the first year after treatments were applied. Overall, irrigation and nitrogen addition had a synergistic effect that increased annual estimates of the above-ground components of carbon balance (A, RL and carbon exported through simulated grazing, Fexport), but there was no effect from adding nitrogen alone. Annual RS remained unchanged between treatments. The treatments resulted in increases in above-ground biomass production but, with the high intensity of simulated grazing, these were not sufficient to offset ecosystem carbon losses, so all treatments remained a net source of carbon. There were no significant differences between treatments, and annual NB ranged from -540 g C m^-2 y^-1 for the treatment with no irrigation and no nitrogen addition to -284 g C m^-2 y^-1 for the treatment with irrigation and nitrogen addition. Our findings from the first year of the treatments quantify the net benefits of irrigation and nitrogen addition in increasing above-ground production for animal feed, but show that this did not lead to a net increase in carbon input to the soil. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Flexing and downsizing the femoral component is not detrimental to patellofemoral biomechanics in posterior-referencing cruciate-retaining total knee arthroplasty.

    PubMed

    Marra, Marco A; Strzelczak, Marta; Heesterbeek, Petra J C; van de Groes, Sebastiaan A W; Janssen, Dennis; Koopman, Bart F J M; Verdonschot, Nico; Wymenga, Ate B

    2018-03-20

    When downsizing the femoral component to prevent mediolateral overhang, notching of the anterior femoral cortex may occur, which could be solved by flexing the femoral component. In this study, we investigated the effect of flexion of the femoral component on patellar tendon moment arm, patellofemoral forces and kinematics in posterior-referencing CR-TKA. Our hypothesis was that flexion of the femoral component increases the patellar tendon moment arm, reduces the patellofemoral forces and provides stable kinematics. A validated musculoskeletal model of CR-TKA was used. The flexion of the femoral component was increased in four steps (0°, 3°, 6°, 9°) using posterior referencing, and different alignments were analysed in combination with three implant sizes (3, 4, 5). A chair-rising trial was analysed using the model, while simultaneously estimating quadriceps muscle force, patellofemoral contact force, and tibiofemoral and patellofemoral kinematics. Compared to the reference case (size 4 and 0° flexion), for every 3° of increase in flexion of the femoral component the patellar tendon moment arm increased by 1% at knee extension. The peak quadriceps muscle force and patellofemoral contact force decreased by 2%, the patella shifted 0.8 mm more anteriorly, and the remaining kinematics remained stable with knee flexion. With the smaller size, the patellar tendon moment arm decreased by 6%, the quadriceps muscle force and patellofemoral contact force increased by 8 and 12%, respectively, and the patella shifted 5 mm more posteriorly. Opposite trends were found with the bigger size. Flexing the femoral component with posterior referencing reduced the patellofemoral contact forces during a simulated chair-rising trial with a patient-specific musculoskeletal model of CR-TKA. There seems to be little risk in flexing and downsizing the femoral component, compared to using a bigger size and neutral alignment. These findings provide relevant information to surgeons who wish to prevent anterior notching when downsizing the femoral component.

  9. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults

    PubMed Central

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K.; Thomas, Venetta; Ambrosone, Christine B.; Bandera, Elisa V.; Berndt, Sonja I.; Bernstein, Leslie; Blot, William J.; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J.; Cheng, Iona; Chu, Lisa; Deming, Sandra L.; Driver, W. Ryan; Goodman, Phyllis; Hayes, Richard B.; Hennis, Anselm J. M.; Hsing, Ann W.; Hu, Jennifer J.; Ingles, Sue A.; John, Esther M.; Kittles, Rick A.; Kolb, Suzanne; Leske, M. Cristina; Monroe, Kristine R.; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F.; Rodriguez-Gil, Jorge L.; Rybicki, Ben A.; Schumacher, Fredrick; Stanford, Janet L.; Signorello, Lisa B.; Strom, Sara S.; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S.; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G.; Stram, Alexander H.; Kolonel, Laurence N.; Marchand, Loïc Le; Henderson, Brian E.; Haiman, Christopher A.; Stram, Daniel O.

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious. PMID:26125186
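
    For readers who want to experiment with the idea, the following sketch simulates a polygenic trait, builds a genomic relationship matrix from standardized SNPs, and recovers heritability with a Haseman-Elston-style moment regression. This is a simplified stand-in for the GCTA GREML/REML machinery, not the authors' pipeline; all sample sizes and parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, h2 = 500, 2000, 0.45          # individuals, SNPs, simulated heritability

    # Simulate standardized genotypes and a polygenic phenotype.
    freqs = rng.uniform(0.05, 0.5, m)
    geno = rng.binomial(2, freqs, (n, m)).astype(float)
    Z = (geno - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
    beta = rng.normal(0, np.sqrt(h2 / m), m)
    y = Z @ beta + rng.normal(0, np.sqrt(1 - h2), n)
    y = (y - y.mean()) / y.std()

    # Genomic relationship matrix and Haseman-Elston regression: regress
    # phenotype cross-products on the off-diagonal GRM entries; the slope
    # estimates h2 for a standardized phenotype.
    grm = Z @ Z.T / m
    iu = np.triu_indices(n, k=1)
    slope = np.polyfit(grm[iu], np.outer(y, y)[iu], 1)[0]
    print(f"HE-regression h2 estimate: {slope:.2f}")
    ```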

  10. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to guarantee at least the predefined lifetime of the system specified by the manufacturer, or better, to extend it. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with accompanying advantages and disadvantages, are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate estimation of SoH but involves a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. Prediction accuracy of this model does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is triggered by the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with related advantages and disadvantages. Verification is done using cross-fold validation, exchanging training and test data. The newly introduced data-driven parametric models can be easily established and provide detailed information about remaining useful/consumed lifetime, valid for systems with constant load but stochastically occurring damage.
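
    The error metrics and the threshold-triggered model selection of the second approach are easy to make concrete. The sketch below uses hypothetical RUL predictions and SoH breakpoints; RSE is interpreted here as the root of the summed squared errors, which is one plausible reading of the paper's term.

    ```python
    import numpy as np

    # Hypothetical RUL predictions from two candidate lifetime models.
    rul_true = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
    rul_m1   = np.array([ 95.0, 78.0, 63.0, 35.0, 22.0])
    rul_m2   = np.array([110.0, 90.0, 55.0, 45.0, 15.0])

    def metrics(y, yhat):
        err = y - yhat
        return {"RSE": float(np.sqrt(np.sum(err**2))),  # one reading of "root squared error"
                "MSE": float(np.mean(err**2)),
                "ABE": float(np.mean(np.abs(err)))}

    for name, pred in {"model 1": rul_m1, "model 2": rul_m2}.items():
        print(name, metrics(rul_true, pred))

    # Threshold-triggered model switching (second approach): move to the next
    # lifetime model once the tracked SoH indicator crosses a predefined level.
    soh_thresholds = [0.8, 0.5, 0.2]                 # hypothetical SoH breakpoints
    def select_model(soh):
        return sum(soh < t for t in soh_thresholds)  # index of the active RUL model
    print(select_model(0.6))  # -> 1: the second model is active
    ```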

  11. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    PubMed

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K; Thomas, Venetta; Ambrosone, Christine B; Bandera, Elisa V; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J; Cheng, Iona; Chu, Lisa; Deming, Sandra L; Driver, W Ryan; Goodman, Phyllis; Hayes, Richard B; Hennis, Anselm J M; Hsing, Ann W; Hu, Jennifer J; Ingles, Sue A; John, Esther M; Kittles, Rick A; Kolb, Suzanne; Leske, M Cristina; Millikan, Robert C; Monroe, Kristine R; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F; Rodriguez-Gil, Jorge L; Rybicki, Ben A; Schumacher, Fredrick; Stanford, Janet L; Signorello, Lisa B; Strom, Sara S; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G; Stram, Alexander H; Kolonel, Laurence N; Le Marchand, Loïc; Henderson, Brian E; Haiman, Christopher A; Stram, Daniel O

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious.

  12. Accuracy of Snow Water Equivalent Estimated From GPS Vertical Displacements: A Synthetic Loading Case Study for Western U.S. Mountains

    NASA Astrophysics Data System (ADS)

    Enzminger, Thomas L.; Small, Eric E.; Borsa, Adrian A.

    2018-01-01

    GPS monitoring of solid Earth deformation due to surface loading is an independent approach for estimating seasonal changes in terrestrial water storage (TWS). In western United States (WUSA) mountain ranges, snow water equivalent (SWE) is the dominant component of TWS and an essential water resource. While several studies have estimated SWE from GPS-measured vertical displacements, the error associated with this method remains poorly constrained. We examine the accuracy of SWE estimated from synthetic displacements at 1,395 continuous GPS station locations in the WUSA. Displacement at each station is calculated from the predicted elastic response to variations in SWE from SNODAS and soil moisture from the NLDAS-2 Noah model. We invert synthetic displacements for TWS, showing that both seasonal accumulation and melt as well as year-to-year fluctuations in peak SWE can be estimated from data recorded by the existing GPS network. Because we impose a smoothness constraint in the inversion, recovered TWS exhibits mass leakage from mountain ranges to surrounding areas. This leakage bias is removed via linear rescaling in which the magnitude of the gain factor depends on station distribution and TWS anomaly patterns. The synthetic GPS-derived estimates reproduce approximately half of the spatial variability (unbiased root mean square error ∼50%) of TWS loading within mountain ranges, a considerable improvement over GRACE. The inclusion of additional simulated GPS stations improves representation of spatial variations. GPS data can be used to estimate mountain-range-scale SWE, but effects of soil moisture and other TWS components must first be subtracted from the GPS-derived load estimates.
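
    The inversion-plus-rescaling workflow can be sketched on a toy 1-D problem: synthetic displacements are generated from a concentrated load through an assumed elastic kernel, inverted with a second-difference smoothness constraint, and the leakage bias is removed with a gain factor. The kernel, geometry, and noise level below are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D loading problem: n_sta stations observe vertical displacement
    # from n_cell loads through an assumed elastic kernel G (hypothetical).
    n_sta, n_cell = 40, 60
    x_sta = np.linspace(0, 1, n_sta)
    x_cell = np.linspace(0, 1, n_cell)
    G = 1.0 / (1.0 + 50.0 * (x_sta[:, None] - x_cell[None, :])**2)

    truth = np.exp(-((x_cell - 0.5) / 0.05)**2)   # concentrated "mountain" SWE load
    d = G @ truth + rng.normal(0, 0.02, n_sta)    # synthetic displacements + noise

    # Tikhonov inversion with a second-difference smoothness constraint.
    L = np.diff(np.eye(n_cell), 2, axis=0)
    lam = 0.1
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Smoothing leaks mass out of the load region; rescale by a gain factor so
    # the regional estimate recovers the total recovered mass.
    region = np.abs(x_cell - 0.5) < 0.15
    gain = m_hat.sum() / m_hat[region].sum()
    print(f"gain ≈ {gain:.2f}; true vs rescaled regional mass: "
          f"{truth.sum():.2f} vs {gain * m_hat[region].sum():.2f}")
    ```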

  13. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
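
    The levelized savings arithmetic reduces to a few lines: expected annual losses are the sum of failure rate times consequence cost over the monitored components, diagnostics avert some fraction of those losses, and the annual savings are discounted over the vehicle life. Every rate, cost, and effectiveness value below is a hypothetical placeholder, not a figure from the report.

    ```python
    # A minimal sketch of the levelized savings logic; all values hypothetical.
    components = {                # annual failure rate, consequence cost ($)
        "electromechanical brakes": (0.05, 2.0e6),
        "linear actuators":         (0.08, 1.5e6),
        "wheel/roller bearings":    (0.10, 0.8e6),
        "conveyor drive system":    (0.06, 1.2e6),
    }
    effectiveness = 0.5           # fraction of failure losses averted by prognostics
    discount_rate, years = 0.05, 20

    annual_loss = sum(rate * cost for rate, cost in components.values())
    annual_savings = effectiveness * annual_loss
    levelized = sum(annual_savings / (1 + discount_rate)**t for t in range(1, years + 1))
    print(f"present value of savings over {years} yr: ${levelized:,.0f}")
    ```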

  14. Deformable known component model-based reconstruction for coronary CT angiography

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.

    2017-03-01

    Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts presented in standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm^-1 with STF-dKCR from 0.0166 mm^-1 with FBP and from 0.0301 mm^-1 with FBP-MAR, much closer to the expected 0.0414 mm^-1. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.

  15. Topography: dusting for the fingerprints of mantle dynamics

    NASA Astrophysics Data System (ADS)

    Faccenna, C.; Becker, T. W.

    2016-12-01

    The surface of the Earth is an ever-changing expression of the dynamic processes occurring deep in the mantle and at and above its surface, but our ability to "read" landscapes in terms of their underlying tectonic or climatic forcing is rudimentary. During the last decade, particular attention has been drawn to the deep, convection-related component of topography, induced by the stress produced at the base of the lithosphere by mantle flow, and its relevance compared to the (iso)static component. Despite much progress, several issues, including the magnitude and rate of this dynamic component, remain open. Here, we use key sites from convergent margins (e.g., the Apennines) and from intraplate settings (e.g., Ethiopia) to estimate the amplitude and rate of topography change and to disentangle the dynamic from the static component. On the basis of those and other examples, we introduce the concept of a Topographic Fingerprint: any combination of mantle, crustal and surface processes that will result in a distinctive, thus predictable, topographic expression.

  16. Prognostics

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Vachtsevanos, George; Orchard, Marcos E.

    2013-01-01

    Knowledge discovery, statistical learning, and more specifically an understanding of the system evolution in time when it undergoes undesirable fault conditions, are critical for an adequate implementation of successful prognostic systems. Prognosis may be understood as the generation of long-term predictions describing the evolution in time of a particular signal of interest or fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem. Predictions are made using a thorough understanding of the underlying processes and factor in the anticipated future usage.
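
    In its simplest trend-based form, the RUL prediction described here amounts to fitting the fault indicator's evolution and extrapolating to a failure threshold. The sketch below uses a linear trend and invented data purely to make the mechanics concrete; practical prognostic systems use richer degradation models and factor in anticipated future usage.

    ```python
    import numpy as np

    # Fault-indicator history (hypothetical) and a failure threshold.
    t = np.array([0, 10, 20, 30, 40, 50.0])
    indicator = np.array([0.10, 0.14, 0.19, 0.25, 0.33, 0.40])
    threshold = 1.0

    # Long-term prediction: fit a simple trend and extrapolate to the threshold.
    slope, intercept = np.polyfit(t, indicator, 1)
    t_fail = (threshold - intercept) / slope
    rul = t_fail - t[-1]
    print(f"estimated RUL: {rul:.0f} time units")
    ```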

  17. Historic Resources Assessment, Tennessee-Tombigbee Waterway Wildlife Mitigation Project, Mobile and Tensaw River Deltas, Alabama

    DTIC Science & Technology

    1993-03-01

    Bayou Sara. It is unclear whether the midden observed is a remnant patch or the result of a limited stay where aborigines deposited the remains of a few...short, single component encampment or a remnant midden patch. It is estimated that 2 person days (16 hours) should be sufficient to make the...staining and calculus on the buccal surface]. These could be from one individual, as there is no duplication of elements and all of the bones have an

  18. Incident diagnoses of cancers and cancer-related deaths, active component, U.S. Armed Forces, 2000-2011.

    PubMed

    2012-06-01

    In the United States, cancer is one of the five leading causes of death in all age groups among both men and women; overall, approximately one in four deaths is attributable to cancer. Compared to the general U.S. population, military members have been estimated to have lower incidence rates of several cancers, including colorectal, lung, and cervical cancers, and higher rates of prostate, breast, and thyroid cancer. Between 2000 and 2011, crude incidence rates of most cancer diagnoses among active component members of the U.S. military remained stable; 9,368 active component service members were diagnosed with one of the cancers of interest, and no specific increasing or decreasing trends were observed. Cancer is an uncommon cause of death among service members on active duty and accounted for a total of 1,185 deaths during the 12-year surveillance period.

  19. Batch settling curve registration via image data modeling.

    PubMed

    Derlon, Nicolas; Thürlimann, Christian; Dürrenmatt, David; Villez, Kris

    2017-05-01

    To this day, obtaining reliable characterization of sludge settling properties remains a challenging and time-consuming task. Without such assessments however, optimal design and operation of secondary settling tanks is challenging and conservative approaches will remain necessary. With this study, we show that automated sludge blanket height registration and zone settling velocity estimation is possible thanks to analysis of images taken during batch settling experiments. The experimental setup is particularly interesting for practical applications as it consists of off-the-shelf components only, no moving parts are required, and the software is released publicly. Furthermore, the proposed multivariate shape constrained spline model for image analysis appears to be a promising method for reliable sludge blanket height profile registration. Copyright © 2017 Elsevier Ltd. All rights reserved.
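
    As an illustration of the settling-velocity estimate, the zone settling velocity is conventionally taken as the slope of the initial, linear portion of the sludge blanket height curve. The blanket heights below are invented, standing in for an automatically registered profile; the selection of the linear regime is deliberately crude.

    ```python
    import numpy as np

    # Sludge blanket height vs time from a batch settling test (hypothetical,
    # e.g. registered automatically from an image series).
    t_min = np.array([0, 2, 4, 6, 8, 10, 14, 18, 25, 35.0])
    h_m   = np.array([0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.27, 0.22, 0.19, 0.18])

    # Zone settling velocity = slope of the initial linear portion of the curve.
    linear = t_min <= 14                                   # crude regime selection
    v_zs = -np.polyfit(t_min[linear], h_m[linear], 1)[0]   # m/min
    print(f"zone settling velocity ≈ {v_zs * 60:.2f} m/h")
    ```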

  20. Heritability estimates of the Big Five personality traits based on common genetic variants.

    PubMed

    Power, R A; Pluess, M

    2015-07-14

    According to twin studies, the Big Five personality traits have substantial heritable components explaining 40-60% of the variance, but identification of associated genetic variants has remained elusive. Consequently, knowledge regarding the molecular genetic architecture of personality and to what extent it is shared across the different personality traits is limited. Using genomic-relatedness-matrix residual maximum likelihood analysis (GREML), we here estimated the heritability of the Big Five personality factors (extraversion, agreeableness, conscientiousness, neuroticism and openness to experience) in a sample of 5011 European adults from 527,469 single-nucleotide polymorphisms across the genome. We tested for the heritability of each personality trait, as well as for the genetic overlap between the personality factors. We found significant and substantial heritability estimates for neuroticism (15%, s.e. = 0.08, P = 0.04) and openness (21%, s.e. = 0.08, P < 0.01), but not for extraversion, agreeableness and conscientiousness. The bivariate analyses showed that the variance explained by common variants entirely overlapped between neuroticism and openness (rG = 1.00, P < 0.001), despite a low phenotypic correlation (r = -0.09, P < 0.001), suggesting that the remaining unique heritability may be determined by rare or structural variants. As far as we are aware, this is the first study estimating the shared and unique heritability of all Big Five personality traits using the GREML approach. Findings should be considered exploratory and suggest that the detectable heritability based on common variants is shared between neuroticism and openness to experience.

  1. An assessment of the tracer-based approach to quantifying groundwater contributions to streamflow

    NASA Astrophysics Data System (ADS)

    Jones, J. P.; Sudicky, E. A.; Brookfield, A. E.; Park, Y.-J.

    2006-02-01

    The use of conservative geochemical and isotopic tracers along with mass balance equations to determine the pre-event groundwater contributions to streamflow during a rainfall event is widely used for hydrograph separation; however, aspects related to the influence of surface and subsurface mixing processes on the estimates of the pre-event contribution remain poorly understood. Moreover, the lack of a precise definition of "pre-event" versus "event" contributions on the one hand and "old" versus "new" water components on the other hand has seemingly led to confusion within the hydrologic community about the role of Darcian-based groundwater flow during a storm event. In this work, a fully integrated surface and subsurface flow and solute transport model is used to analyze flow system dynamics during a storm event, concomitantly with advective-dispersive tracer transport, and to investigate the role of hydrodynamic mixing processes on the estimates of the pre-event component. A number of numerical experiments are presented, including an analysis of a controlled rainfall-runoff experiment, that compare the computed Darcian-based groundwater fluxes contributing to streamflow during a rainfall event with estimates of these contributions based on a tracer-based separation. It is shown that hydrodynamic mixing processes can dramatically influence estimates of the pre-event water contribution estimated by a tracer-based separation. Specifically, it is demonstrated that the actual amount of bulk flowing groundwater contributing to streamflow may be much smaller than the quantity indirectly estimated from a separation based on tracer mass balances, even if the mixing processes are weak.
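
    The tracer-based separation that the study scrutinizes is the classic two-component mixing calculation, sketched below with invented tracer signatures. The point of the paper is precisely that the pre-event fraction obtained this way can substantially overstate the bulk Darcian groundwater flux when hydrodynamic mixing is strong.

    ```python
    # Classic two-component tracer mass balance: Q_s*C_s = Q_pre*C_pre + Q_ev*C_ev.
    # Concentrations are hypothetical (e.g. an isotopic or geochemical tracer).
    c_stream, c_pre, c_event = -9.5, -8.0, -12.0   # tracer signatures
    q_stream = 4.0                                  # streamflow [m3/s]

    f_pre = (c_stream - c_event) / (c_pre - c_event)   # pre-event fraction
    q_pre = f_pre * q_stream
    print(f"pre-event fraction: {f_pre:.2f}, pre-event flow: {q_pre:.2f} m3/s")
    ```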

  2. Serious Hazards of Transfusion (SHOT) haemovigilance and progress is improving transfusion safety

    PubMed Central

    Bolton-Maggs, Paula H B; Cohen, Hannah

    2013-01-01

    The Serious Hazards of Transfusion (SHOT) UK confidential haemovigilance reporting scheme began in 1996. Over the 16 years of reporting, the evidence gathered has prompted changes in transfusion practice from the selection and management of donors to changes in hospital practice, particularly better education and training. However, half or more of reports relate to errors in the transfusion process despite the introduction of several measures to improve practice. Transfusion in the UK is very safe: 2.9 million components were issued in 2012, and very few deaths are related to transfusion. The risk of death from transfusion as estimated from SHOT data in 2012 is 1 in 322,580 components issued and, for major morbidity, 1 in 21,413 components issued; the risk of transfusion-transmitted infection is much lower. Acute transfusion reactions and transfusion-associated circulatory overload carry the highest risk for morbidity and death. The high rate of participation in SHOT by National Health Service organizations, 99.5%, is encouraging. Despite the very useful information gained about transfusion reactions, the main risks remain human factors. The recommendations on reduction of errors through a 'back to basics' approach from the first annual SHOT report remain absolutely relevant today. PMID:24032719

  3. Detecting anthropogenic footprints in sea level rise: the role of complex colored noise

    NASA Astrophysics Data System (ADS)

    Dangendorf, Sönke; Marcos, Marta; Müller, Alfred; Zorita, Eduardo; Jensen, Jürgen

    2015-04-01

    While there is scientific consensus that global mean sea level (MSL) is rising since the late 19th century, it remains unclear how much of this rise is due to natural variability or anthropogenic forcing. Uncovering the anthropogenic contribution requires profound knowledge about the persistence of natural MSL variations. This is challenging, since observational time series represent the superposition of various processes with different spectral properties. Here we statistically estimate the upper bounds of naturally forced centennial MSL trends on the basis of two separate components: a slowly varying volumetric (mass and density changes) and a more rapidly changing atmospheric component. Resting on a combination of spectral analyses of tide gauge records, ocean reanalysis data and numerical Monte-Carlo experiments, we find that in records where transient atmospheric processes dominate, the persistence of natural volumetric changes is underestimated. If each component is assessed separately, natural centennial trends are locally up to ~0.5 mm/yr larger than in case of an integrated assessment. This implies that external trends in MSL rise related to anthropogenic forcing might be generally overestimated. By applying our approach to the outputs of a centennial ocean reanalysis (SODA), we estimate maximum natural trends in the order of 1 mm/yr for the global average. This value is larger than previous estimates, but consistent with recent paleo evidence from periods in which the anthropogenic contribution was absent. Comparing our estimate to the observed 20th century MSL rise of 1.7 mm/yr suggests a minimum external contribution of at least 0.7 mm/yr. We conclude that an accurate detection of anthropogenic footprints in MSL rise requires a more careful assessment of the persistence of intrinsic natural variability.
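
    The Monte Carlo logic for bounding natural trends can be sketched with a simple AR(1) surrogate for the colored noise: generate many century-long realizations of a stationary persistent process and take a high percentile of the fitted trends. The persistence and variance values below are invented; the paper's analysis rests on spectra estimated from tide gauges and reanalysis, and stronger persistence widens the bound, which is the crux of the argument.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    years, n_sim = 100, 2000
    phi, sigma = 0.9, 20.0          # hypothetical persistence and innovation std [mm]
    t = np.arange(years)

    trends = np.empty(n_sim)
    for k in range(n_sim):
        x = np.zeros(years)
        for i in range(1, years):
            x[i] = phi * x[i - 1] + rng.normal(0, sigma)   # AR(1) colored noise
        trends[k] = np.polyfit(t, x, 1)[0]                 # fitted trend [mm/yr]

    upper = np.percentile(np.abs(trends), 95)
    print(f"95% bound on purely natural centennial trends: ±{upper:.2f} mm/yr")
    ```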

  4. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
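
    The modeling chain described (categorized sound frequencies reduced to principal components, fed to regression against criterion scores, then age-standardized) can be sketched with simulated data as follows; the feature counts, component number, and score distribution are arbitrary stand-ins for the LENA pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)

    # Hypothetical per-child phone/biphone frequency vectors and independently
    # collected criterion expressive-language scores (both simulated).
    n_children, n_phones = 200, 300
    X = rng.poisson(5.0, (n_children, n_phones)).astype(float)
    X /= X.sum(axis=1, keepdims=True)                # frequencies, not counts
    scores = 100 + 15 * rng.normal(size=n_children)  # criterion scores

    # Reduce phone frequencies to principal components, then regress.
    pcs = PCA(n_components=20).fit_transform(X)
    model = LinearRegression().fit(pcs, scores)
    predicted = model.predict(pcs)

    # Express predictions as age-standardized AVA estimates (mean 100, sd 15).
    ava = 100 + 15 * (predicted - predicted.mean()) / predicted.std()
    print(ava[:5].round(1))
    ```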

  5. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  6. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  7. Advanced Respiratory Motion Compensation for Coronary MR Angiography

    PubMed Central

    Henningsson, Markus; Botnar, Rene M.

    2013-01-01

    Despite technical advances, respiratory motion remains a major impediment in a substantial number of patients undergoing coronary magnetic resonance angiography (CMRA). Traditionally, respiratory motion compensation has been performed with a one-dimensional respiratory navigator positioned on the right hemi-diaphragm, using a motion model to estimate and correct for the bulk respiratory motion of the heart. Recent technical advances have allowed for direct respiratory motion estimation of the heart, with improved motion compensation performance. Some of these new methods, particularly those using image-based navigators or respiratory binning, allow for more advanced motion correction, which enables CMRA data acquisition throughout most or all of the respiratory cycle, thereby significantly reducing scan time. This review describes the three components typically involved in most motion compensation strategies for CMRA, namely respiratory motion estimation, gating and correction, and how these processes can be utilized to perform advanced respiratory motion compensation. PMID:23708271

  8. Flaw characterization through nonlinear ultrasonics and wavelet cross-correlation algorithms

    NASA Astrophysics Data System (ADS)

    Bunget, Gheorghe; Yee, Andrew; Stewart, Dylan; Rogers, James; Henley, Stanley; Bugg, Chris; Cline, John; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz

    2018-04-01

    Ultrasonic measurements have become increasingly important non-destructive techniques to characterize flaws found within various in-service industrial components. The prediction of remaining useful life based on fracture analysis depends on the accurate estimation of flaw size and orientation. However, amplitude-based ultrasonic measurements are not able to estimate the plastic zones that exist ahead of crack tips. Estimating the size of the plastic zone is an advantage since some flaws may propagate faster than others. This paper presents a wavelet cross-correlation (WCC) algorithm that was applied to nonlinear analysis of ultrasonically guided waves (GW). By using this algorithm, harmonics present in the waveforms were extracted and nonlinearity parameters were used to indicate both the tip of the cracks and size of the plastic zone. B-scans performed with the quadratic nonlinearities were sensitive to micro-damage specific to plastic zones.
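
    The nonlinearity parameter underlying such measurements is commonly formed from the harmonic amplitudes, with the quadratic parameter proportional to A2/A1^2. The sketch below extracts the amplitudes with a plain FFT from a synthetic waveform rather than the wavelet cross-correlation algorithm of the paper; all signal parameters are invented.

    ```python
    import numpy as np

    fs, f0 = 1.0e7, 2.0e5            # sampling rate and fundamental [Hz]
    t = np.arange(0, 2e-4, 1 / fs)

    # Synthetic received waveform: fundamental plus a weak second harmonic, as
    # would be generated by propagation through a damaged (nonlinear) region.
    sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
    sig += np.random.default_rng(4).normal(0, 0.005, t.size)

    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    a1 = spec[np.argmin(np.abs(freqs - f0))]
    a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]

    beta_rel = a2 / a1**2            # relative quadratic nonlinearity parameter
    print(f"A1={a1:.1f}, A2={a2:.2f}, beta ∝ {beta_rel:.2e}")
    ```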

  9. A new data assimilation engine for physics-based thermospheric density models

    NASA Astrophysics Data System (ADS)

    Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.

    2017-12-01

    The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.

  10. Multi-temporal Linkages of Net Ecosystem Exchanges (NEE) with the Climatic and Ecohydrologic Drivers in a Florida Everglades Short-hydroperiod Freshwater Marsh

    NASA Astrophysics Data System (ADS)

    Zaki, M. T.; Abdul-Aziz, O. I.; Ishtiaq, K. S.

    2017-12-01

    Wetlands are considered one of the most productive and ecologically valuable ecosystems on earth. We investigated the multi-temporal linkages of net ecosystem exchange (NEE) with the relevant climatic and ecohydrological drivers for a Florida Everglades short-hydroperiod freshwater wetland. Hourly NEE observations and the associated driving variables during 2008-12 were collected from the AmeriFlux and EDEN databases, and then averaged for the four temporal scales (1-day, 8-day, 15-day, and 30-day). Pearson correlation and factor analysis were employed to identify the interrelations and grouping patterns among the participatory variables for each time scale. The climatic and ecohydrological linkages of NEE were then reliably estimated using bootstrapped (1000 iterations) partial least squares regressions by resolving multicollinearity. The analytics identified four bio-physical components exhibiting relatively robust interrelations and grouping patterns with NEE across the temporal scales. In general, NEE was most strongly linked with the 'radiation-energy (RE)' component, while having a moderate linkage with the 'temperature-hydrology (TH)' and 'aerodynamic (AD)' components. However, the 'ambient atmospheric CO2 (AC)' component was very weakly linked to NEE. Further, RE and TH had a decreasing trend with the increasing time scales (1-30 days). In contrast, the linkages of AD and AC components increased from 1-day to 8-day scales, and then remained relatively invariable at the longer scales of aggregation. The estimated linkages provide insights into the dominant biophysical process components and drivers of ecosystem carbon in the Everglades. The invariant linking pattern and linkages would help to develop low-dimensional models to reliably predict CO2 fluxes from the tidal freshwater wetlands.
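
    The bootstrapped partial-least-squares step can be sketched as follows with simulated, deliberately collinear drivers; the driver set, sample size, and component count are placeholders, and scikit-learn's PLSRegression stands in for whatever implementation the authors used.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)

    # Hypothetical 30-day aggregates: five drivers with built-in collinearity
    # (radiation, temperature, plus three independent stand-ins) and NEE.
    n = 60
    rad = rng.normal(size=n)
    temp = 0.7 * rad + 0.3 * rng.normal(size=n)     # correlated with radiation
    drivers = np.column_stack([rad, temp, rng.normal(size=(n, 3))])
    nee = -0.8 * rad - 0.3 * temp + 0.1 * rng.normal(size=n)

    coefs = []
    for _ in range(1000):                           # bootstrap the PLS fit
        idx = rng.integers(0, n, n)
        pls = PLSRegression(n_components=2).fit(drivers[idx], nee[idx])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)
    print("driver linkages (mean and 2.5th/97.5th percentiles):")
    print(coefs.mean(axis=0).round(2))
    print(np.percentile(coefs, [2.5, 97.5], axis=0).round(2))
    ```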

  11. Global and system-specific resting-state fMRI fluctuations are uncorrelated: principal component analysis reveals anti-correlated networks.

    PubMed

    Carbonell, Felix; Bellec, Pierre; Shmuel, Amir

    2011-01-01

    The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)-based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations.
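
    A compact numerical sketch of the proposed estimator: compute temporal principal components of the voxel time series, pick the PC most correlated with the GAS, and regress that PC (rather than the GAS itself) out of the data. The simulated data below build in exactly the structure at issue, a global fluctuation plus two anti-correlated networks.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy resting-state data: voxel time series = global fluctuation +
    # two anti-correlated network signals + noise (all simulated).
    n_t, n_vox = 300, 500
    g = rng.normal(size=n_t)                          # global fluctuation
    net = np.sin(np.linspace(0, 30, n_t))             # network-specific signal
    signs = np.where(np.arange(n_vox) < n_vox // 2, 1.0, -1.0)
    Y = (np.outer(g, np.ones(n_vox)) + np.outer(net, signs)
         + rng.normal(0, 0.5, (n_t, n_vox)))

    # PCA of the time series; pick the PC most correlated with the global average.
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    pcs = U * S                                       # temporal PCs
    gas = Yc.mean(axis=1)                             # global average signal
    best = np.argmax([abs(np.corrcoef(pcs[:, k], gas)[0, 1]) for k in range(10)])
    print("corr(best PC, GAS):", round(abs(np.corrcoef(pcs[:, best], gas)[0, 1]), 3))

    # Regress the PC-based global estimator out of every voxel time series.
    p = pcs[:, best:best + 1]
    Y_clean = Yc - p @ np.linalg.lstsq(p, Yc, rcond=None)[0]
    ```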

  12. Multi-allelic haplotype model based on genetic partition for genomic prediction and variance component estimation using SNP markers.

    PubMed

    Da, Yang

    2015-12-18

    The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. Genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly use haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.

  13. Geostatistical estimation of signal-to-noise ratios for spectral vegetation indices

    USGS Publications Warehouse

    Ji, Lei; Zhang, Li; Rover, Jennifer R.; Wylie, Bruce K.; Chen, Xuexia

    2014-01-01

    In the past 40 years, many spectral vegetation indices have been developed to quantify vegetation biophysical parameters. An ideal vegetation index should contain the maximum level of signal related to specific biophysical characteristics and the minimum level of noise such as background soil influences and atmospheric effects. However, accurate quantification of signal and noise in a vegetation index remains a challenge, because it requires a large number of field measurements or laboratory experiments. In this study, we applied a geostatistical method to estimate signal-to-noise ratio (S/N) for spectral vegetation indices. Based on the sample semivariogram of vegetation index images, we used the standardized noise to quantify the noise component of vegetation indices. In a case study in the grasslands and shrublands of the western United States, we demonstrated the geostatistical method for evaluating S/N for a series of soil-adjusted vegetation indices derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. The soil-adjusted vegetation indices were found to have higher S/N values than the traditional normalized difference vegetation index (NDVI) and simple ratio (SR) in the sparsely vegetated areas. This study shows that the proposed geostatistical analysis can constitute an efficient technique for estimating signal and noise components in vegetation indices.
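
    A hedged sketch of the geostatistical idea: the empirical semivariogram of a vegetation-index image is extrapolated to lag 0, the intercept (the "nugget") is taken as the noise variance, and the remaining image variance is treated as signal. The linear extrapolation and horizontal-lags-only semivariogram are simplifying assumptions, not the study's exact standardized-noise procedure.

```python
import numpy as np

def snr_from_semivariogram(img, max_lag=5):
    """img: 2-D vegetation-index image; horizontal lags only, for brevity."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.nanmean((img[:, h:] - img[:, :-h]) ** 2)
                      for h in lags])
    # Linear extrapolation of gamma(h) to h = 0 estimates the nugget (noise)
    slope, intercept = np.polyfit(lags, gamma, 1)
    nugget = max(intercept, 1e-12)
    signal_var = max(np.nanvar(img) - nugget, 0.0)
    return signal_var / nugget               # signal-to-noise ratio estimate
```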

  14. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh

Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond their license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprising the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric to assess the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermally induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
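
    A hedged sketch of the two ideas in the abstract. The furan correlation below is the widely cited Chendong form, and the aging-acceleration factor follows the IEEE C57.91 per-unit-life convention; the paper's exact model formulations may differ.

```python
import numpy as np

def dp_from_furfural(fal_ppm):
    """Estimate degree of polymerization from 2-furfural concentration (ppm) in oil."""
    return (1.51 - np.log10(fal_ppm)) / 0.0035

def aging_acceleration(hot_spot_c):
    """Per-unit aging acceleration relative to a 110 C hot-spot reference."""
    return np.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

# Example: equivalent hours of insulation life consumed over a loading history
hot_spots = np.array([95.0, 105.0, 115.0])           # hourly hot-spot temps, C
life_consumed = aging_acceleration(hot_spots).sum()  # in equivalent aging hours
```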

  15. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, therefore allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction, and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  16. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
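
    A compact numpy sketch of the randomized range-finder that underlies such algorithms (in the style of Halko, Martinsson, and Tropp); the article's MATLAB implementation adds further refinements and safeguards.

```python
import numpy as np

def randomized_svd(A, k, n_iter=2, p=10):
    """Rank-k approximation of A via a randomized range-finder with oversampling p."""
    m, n = A.shape
    G = np.random.standard_normal((n, k + p))    # Gaussian test matrix
    Y = A @ G                                    # sample the range of A
    for _ in range(n_iter):                      # power iterations sharpen the range
        Q, _ = np.linalg.qr(Y)                   # re-orthonormalize for stability
        Y = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for range(A)
    B = Q.T @ A                                  # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```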

  17. Integrating Systems Health Management with Adaptive Controls for a Utility-Scale Wind Turbine

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Goebel, Kai; Trinh, Khanh V.; Balas, Mark J.; Frost, Alan M.

    2011-01-01

Increasing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. Systems health management aims to assess the state-of-health of components within a wind turbine, to estimate remaining life, and to aid in autonomous decision-making to minimize damage. Advanced adaptive controls can provide the mechanism for optimized operations while also serving as an enabling technology for systems health management goals. The work reported herein explores the integration of condition monitoring of wind turbine blades with contingency management and adaptive controls. Results are demonstrated using a high fidelity simulator of a utility-scale wind turbine.

  18. Estimation of Mass of Compact Object in H 1743-322 from 2010 and 2011 Outbursts using TCAF Solution and Spectral Index-QPO Frequency Correlation

    NASA Astrophysics Data System (ADS)

    Molla, Aslam Ali; Chakrabarti, Sandip K.; Debnath, Dipak; Mondal, Santanu

    2017-01-01

The well-known black hole candidate (BHC) H 1743-322 exhibited temporal and spectral variabilities during several outbursts. The variation of the accretion rates and flow geometry that change on a daily basis during each of the outbursts can be very well understood using the recent implementation of the two-component advective flow solution of the viscous transonic flow equations as an additive table model in XSPEC. This has dramatically improved our understanding of accretion flow dynamics. Most interestingly, the solution allows us to treat the mass of the BHC as a free parameter, so its mass can be estimated from spectral fits. In this paper, we fitted the data of two successive outbursts of H 1743-322 in 2010 and 2011 and studied the evolution of accretion flow parameters, such as the two-component (Keplerian and sub-Keplerian) accretion rates and the shock location (i.e., the size of the Compton cloud). We assume that the model normalization remains the same across the states in both these outbursts. We used this to estimate the mass of the black hole and found that it lies in the range of 9.25-12.86 M⊙. For the sake of comparison, we also estimated the mass using the photon index versus quasi-periodic oscillation frequency correlation method, which yields 11.65 ± 0.67 M⊙ using GRO J1655-40 as a reference source. Combining these two estimates, the most probable mass of the compact object becomes 11.21 (+1.65/-1.96) M⊙.

  19. Estimating under-five mortality in space and time in a developing world context.

    PubMed

    Wakefield, Jon; Fuglstad, Geir-Arne; Riebler, Andrea; Godwin, Jessica; Wilson, Katie; Clark, Samuel J

    2018-01-01

Accurate estimates of the under-five mortality rate in a developing world context are a key barometer of the health of a nation. This paper describes a new model to analyze survey data on mortality in this context. We are interested in both spatial and temporal description; that is, we wish to estimate the under-five mortality rate across regions and years and to investigate the association between the under-five mortality rate and spatially varying covariate surfaces. We illustrate the methodology by producing yearly estimates for subnational areas in Kenya over the period 1980-2014 using data from the Demographic and Health Surveys, which use stratified cluster sampling. We use a binomial likelihood with fixed effects for the urban/rural strata and random effects for the clustering to account for the complex survey design. Smoothing is carried out using Bayesian hierarchical models with continuous spatial and temporally discrete components. A key component of the model is an offset to adjust for bias due to the effects of HIV epidemics. Substantively, there has been a sharp decline in the under-five mortality rate in Kenya over the period 1980-2014, but large variability in estimated subnational rates remains. A priority for future research is understanding this variability. In exploratory work, we examine whether a variety of spatial covariate surfaces can explain the variability in the under-five mortality rate. Temperature, precipitation, a measure of malaria infection prevalence, and a measure of nearness to cities were candidates for inclusion in the covariate model, but the interplay between space, time, and covariates is complex.

  20. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

With the impact of serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero emissions. The power battery is used as the energy source of electric vehicles. However, it still has a few shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environmental and driving conditions. The estimation error of current driving range methods is relatively large because the effects of ambient temperature and driving conditions are not considered. The development of an accurate driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively reduce mileage errors and has good convergence with added robustness. Initially, the driving cycle is identified based on kernel principal component feature parameters and the fuzzy C-means clustering algorithm. Secondly, a fuzzy rule base between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are utilized to predict future driving conditions to improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimates. Results show that the proposed method can not only estimate the remaining mileage but also suppress the fluctuation of the residual range under different driving conditions.
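
    A toy sketch of the final step, under stated assumptions: a Markov chain predicts future driving-condition classes, each class maps to an energy-consumption rate (a plain lookup here, standing in for the paper's fuzzy rules), and the remaining battery energy is converted into a range estimate. All numbers are illustrative.

```python
import numpy as np

P = np.array([[0.80, 0.15, 0.05],     # transition matrix: urban, suburban, highway
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
kwh_per_km = np.array([0.18, 0.14, 0.16])   # consumption rate per condition class

def remaining_range(state, soc_kwh, horizon=50):
    dist = np.eye(3)[state]                 # one-hot current condition class
    rates = [dist @ kwh_per_km]
    for _ in range(horizon):
        dist = dist @ P                     # propagate condition probabilities
        rates.append(dist @ kwh_per_km)
    return soc_kwh / np.mean(rates)         # km of range at the expected rate

print(remaining_range(state=0, soc_kwh=30.0))
```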

  1. Estimation of the interference coupling into cables within electrically large multiroom structures

    NASA Astrophysics Data System (ADS)

    Keghie, J.; Kanyou Nana, R.; Schetelig, B.; Potthast, S.; Dickmann, S.

    2010-10-01

Communication cables are used to transfer data between components of a system. As part of the EMC analysis of complex systems, it is necessary to determine which level of interference can be expected at the input of connected devices due to coupling into the irradiated cable. For electrically large systems consisting of several rooms with cables connecting components located in different rooms, an estimation of the coupled disturbances inside cables using commercial field computation software is often not feasible without several restrictions. In many cases, this is related to the non-availability of the computing memory and processing power needed for the computation. In this paper, we show that, starting from a topological analysis of the entire system, weak coupling paths within the system can be identified. By neglecting these coupling paths and using the transmission line approach, the original system can be simplified so that a simpler estimation is possible. Using the example of a system composed of two rooms, multiple apertures, and a network cable located in both chambers, it is shown that an estimation of the coupled disturbances due to external electromagnetic sources is feasible with this approach. Starting from an incident electromagnetic field, we determine transfer functions describing the coupling means (apertures, cables). Using these transfer functions and the knowledge of the weak coupling paths, we decide which paths can be neglected during the estimation. The estimation of the coupling into the cable is then made taking only paths with strong coupling into account. The remaining part of the wiring harness in areas with weak coupling is represented by its input impedance. A comparison with the original network shows good agreement.
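
    A hedged sketch of the bookkeeping this describes: each coupling path is modeled as a cascade of frequency-domain transfer functions (aperture, cable section), and paths whose product falls below a threshold are dropped before superposition. The path names, transfer functions, and threshold are all illustrative placeholders, not the paper's values.

```python
import numpy as np

f = np.linspace(1e6, 1e9, 1001)                     # frequency axis, Hz
paths = {
    "room1_aperture_cable": [lambda f: 1e-3 * np.ones_like(f),
                             lambda f: np.exp(-f / 5e8)],
    "room2_aperture_cable": [lambda f: 1e-6 * np.ones_like(f),   # weak path
                             lambda f: np.exp(-f / 5e8)],
}
threshold = 1e-5
total = np.zeros_like(f)
for name, tfs in paths.items():
    h = np.prod([tf(f) for tf in tfs], axis=0)      # cascade of transfer functions
    if np.max(np.abs(h)) >= threshold:              # neglect weak coupling paths
        total += h                                  # superpose strong couplings
```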

  2. Development of a mechatronic platform and validation of methods for estimating ankle stiffness during the stance phase of walking.

    PubMed

    Rouse, Elliott J; Hargrove, Levi J; Perreault, Eric J; Peshkin, Michael A; Kuiken, Todd A

    2013-08-01

    The mechanical properties of human joints (i.e., impedance) are constantly modulated to precisely govern human interaction with the environment. The estimation of these properties requires the displacement of the joint from its intended motion and a subsequent analysis to determine the relationship between the imposed perturbation and the resultant joint torque. There has been much investigation into the estimation of upper-extremity joint impedance during dynamic activities, yet the estimation of ankle impedance during walking has remained a challenge. This estimation is important for understanding how the mechanical properties of the human ankle are modulated during locomotion, and how those properties can be replicated in artificial prostheses designed to restore natural movement control. Here, we introduce a mechatronic platform designed to address the challenge of estimating the stiffness component of ankle impedance during walking, where stiffness denotes the static component of impedance. The system consists of a single degree of freedom mechatronic platform that is capable of perturbing the ankle during the stance phase of walking and measuring the response torque. Additionally, we estimate the platform's intrinsic inertial impedance using parallel linear filters and present a set of methods for estimating the impedance of the ankle from walking data. The methods were validated by comparing the experimentally determined estimates for the stiffness of a prosthetic foot to those measured from an independent testing machine. The parallel filters accurately estimated the mechatronic platform's inertial impedance, accounting for 96% of the variance, when averaged across channels and trials. Furthermore, our measurement system was found to yield reliable estimates of stiffness, which had an average error of only 5.4% (standard deviation: 0.7%) when measured at three time points within the stance phase of locomotion, and compared to the independently determined stiffness values of the prosthetic foot. The mechatronic system and methods proposed in this study are capable of accurately estimating ankle stiffness during the foot-flat region of stance phase. Future work will focus on the implementation of this validated system in estimating human ankle impedance during the stance phase of walking.
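
    A minimal sketch of one common estimation step, assuming the platform's intrinsic inertial contribution has already been subtracted (the role of the paper's parallel filters): regress the perturbation kinematics against the measured response torque, tau = I*theta'' + B*theta' + K*theta, and read stiffness from the static term.

```python
import numpy as np

def estimate_impedance(theta, torque, dt):
    """theta, torque: 1-D arrays sampled at interval dt, assumed preprocessed."""
    dtheta = np.gradient(theta, dt)                  # angular velocity
    ddtheta = np.gradient(dtheta, dt)                # angular acceleration
    X = np.column_stack([ddtheta, dtheta, theta])
    (I, B, K), *_ = np.linalg.lstsq(X, torque, rcond=None)
    return I, B, K          # inertia, damping, and stiffness estimates
```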

  3. Scattering volume in the collective Thomson scattering measurement using high power gyrotron in the LHD

    NASA Astrophysics Data System (ADS)

    Kubo, S.; Nishiura, M.; Tanaka, K.; Moseev, D.; Ogasawara, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Tsujimura, T. I.; Makino, R.

    2016-06-01

High-power gyrotrons prepared for electron cyclotron heating at 77 GHz have been used for a collective Thomson scattering (CTS) study in the LHD. Due to the difficulty of removing the fundamental and/or second harmonic resonance from the viewing line of sight, the background ECE was subtracted from the measured signal by modulating the probe beam power from a gyrotron. The separation of the scattering component from the background was performed successfully, taking into account the difference in response time between the high-energy and bulk components. A second separation was attempted by rapidly scanning the viewing beam across the probing beam. It was found that the intensities of the scattered spectrum corresponding to the bulk and high-energy components were almost proportional to the calculated scattering volume in the relatively low density region, while an appreciable background scattered component remained even in the off-volume position in some high density cases. The ray-trace code TRAVIS is used to estimate the change in the scattering volume due to probing and receiving beam deflection effects.

  4. Electric prototype power processor for a 30cm ion thruster

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

    An electrical prototype power processor unit was designed, fabricated and tested with a 30 cm mercury ion engine for primary space propulsion. The power processor unit used the thyristor series resonant inverter as the basic power stage for the high power beam and discharge supplies. A transistorized series resonant inverter processed the remaining power for the low power outputs. The power processor included a digital interface unit to process all input commands and internal telemetry signals so that electric propulsion systems could be operated with a central computer system. The electrical prototype unit included design improvement in the power components such as thyristors, transistors, filters and resonant capacitors, and power transformers and inductors in order to reduce component weight, to minimize losses, and to control the component temperature rise. A design analysis for the electrical prototype is also presented on the component weight, losses, part count and reliability estimate. The electrical prototype was tested in a thermal vacuum environment. Integration tests were performed with a 30 cm ion engine and demonstrated operational compatibility. Electromagnetic interference data was also recorded on the design to provide information for spacecraft integration.

  5. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    PubMed

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA model is validly applicable under dual task conditions as well, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. Twenty-four subjects of middle to older age performed a continuous tapping task and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing speed and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by the concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient, way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.

  6. Neutron Radiation Damage Estimation in the Core Structure Base Metal of RSG GAS

    NASA Astrophysics Data System (ADS)

    Santa, S. A.; Suwoto

    2018-02-01

Radiation damage in the core structure of the Indonesian RSG GAS multipurpose reactor resulting from the reaction of fast and thermal neutrons with the core structure material was investigated for the first time after almost 30 years of operation. The aim is to analyze the degradation level of the critical components of the RSG GAS reactor so that the remaining life of its components can be estimated. The evaluated remaining life of critical components will be used as supporting data for the application to extend the reactor operating permit. Material damage due to neutron radiation was analyzed for the core structure components made of AlMg3 and the reinforcement bolts of the core structure made of SUS304. Material damage was evaluated for Al and Fe as the base metals of AlMg3 and SUS304, respectively. Neutron fluences were evaluated based on neutron flux calculations for the U3Si8-Al equilibrium core operated at a rated power of 15 MW. Calculation results using the CITATION module of the SRAC2006 code show that the maximum total neutron flux and the flux >0.1 MeV are 2.537E+14 n/cm2/s and 3.376E+13 n/cm2/s, respectively, located at the CIP at the core center, close to the fuel elements. After operation up to the end of core configuration #89, the total neutron fluence and the fluence >0.1 MeV reached 9.063E+22 and 1.269E+22 n/cm2, respectively. These correspond to material damage of 17.91 and 10.06 dpa in Al and Fe, respectively. Referring to the lifetime of Al-1100 irradiated in a neutron field with a thermal-to-total flux ratio of 1.7, which can sustain material damage up to 250 dpa, it is concluded that the RSG GAS reactor core structure has consumed 7.16% of its operating life span. This means that the core structure of the RSG GAS reactor is still capable of receiving a total neutron fluence of 9.637E+22 n/cm2 or a fluence >0.1 MeV of 5.672E+22 n/cm2.
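
    A quick check of the life-fraction arithmetic quoted above, assuming the stated 250-dpa damage limit for the Al-1100 reference material.

```python
# Fraction of core-structure life consumed, per the abstract's figures
dpa_accumulated = 17.91
dpa_limit = 250.0
print(f"{100 * dpa_accumulated / dpa_limit:.2f}% of life consumed")  # ~7.16%
```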

  7. Simulating forest productivity along a neotropical elevational transect: temperature variation and carbon use efficiency

    NASA Astrophysics Data System (ADS)

    Marthews, T.; Malhi, Y.; Girardin, C.; Silva-Espejo, J.; Aragão, L.; Metcalfe, D.; Rapp, J.; Mercado, L.; Fisher, R.; Galbraith, D.; Fisher, J.; Salinas-Revilla, N.; Friend, A.; Restrepo-Coupe, N.; Williams, R.

    2012-04-01

A better understanding of the mechanisms controlling the magnitude and sign of carbon components in tropical forest ecosystems is needed for reliable estimation of this important regional component of the global carbon cycle. We used the JULES vegetation model to simulate all components of the carbon balance at six sites along an Andes-Amazon transect across Peru and Brazil and compared the results to published field measurements. In the upper montane zone the model predicted a vegetation dieback, indicating a need for better parameterisation of cloud forest vegetation. In the lower montane and lowland zones, simulated ecosystem productivity and respiration were predicted with reasonable accuracy, although not always within the error bounds of the observations. Surprisingly, model-predicted carbon use efficiency along this transect did not increase with elevation but remained close to the 'temperate' value of 0.5. This may be explained by elevational changes in the balance between growth and maintenance respiration within the forest canopy, as controlled by both temperature- and pressure-mediated processes.

  8. [The effectiveness of the improvement of health in the schoolchildren staying in a country summer camp].

    PubMed

    Lir, D N; Perevalov, A Ya

The organization of recreational activities in children's camps is inseparable from the assessment of their effectiveness. The objective of the present study was to estimate the influence of a stay in a summer camp under habitual climatic conditions on the children's health status, including body component composition and the functional state of the organism. The study included 44 schoolchildren aged 9 to 12 years. The effectiveness of the recreational activities was analyzed with a method for assessing health improvement in children's summer camps. Alterations in body component composition were evaluated from the results of bioimpedance measurements. The physical development of the majority of the schoolchildren was fairly well balanced both at the beginning and at the end of the camp period. During the 14-day stay in the camp, changes in body weight were largely attributable to alterations in lean body mass, whereas the fat component remained rather stable. The cardio-respiratory system did not show unambiguous signs of positive dynamics. The physical condition of the children, estimated from the hand dynamometry index, showed a negative change. A comprehensive scoring assessment of the degree of health improvement demonstrated that half of the schoolchildren spending time in the summer camp under moderate climate conditions markedly improved their somatic health and functional and physical state, whereas the remaining half enjoyed only a slight improvement. We suppose that the main factors preventing the maximal positive effect of the camp stay on the children's health status were the short duration of the stay and the irrational use of the available complex of recreational activities, such as a sound nutrition regimen, adequate physical loading including locomotor activity, and psychological comfort. Bioimpedance measurement, which objectively reflects changes in body component composition, is recommended as an additional instrument for the objective analysis of changes in the children's body composition and of the effects exerted by recreational activities in a summer camp on their health status.

  9. Continuous nasogastric tube feeding: monitoring by combined use of refractometry and traditional gastric residual volumes.

    PubMed

    Chang, W-K; McClave, S-A; Chao, Y-C

    2004-02-01

    Traditional use of gastric residual volumes (GRVs) is insensitive and cannot distinguish retained enteral formula from the large volume of endogenous secretions. We designed this prospective study to determine whether refractometry and Brix value (BV) measurements could be used to monitor gastric emptying and tolerance in patients receiving continuous enteral feeding. Thirty-six patients on continuous nasogastric tube feeding were divided into two groups; patients with lower GRVs (<75 ml) in Group 1, patients with higher GRVs (>75 ml) in Group 2. Upon entry, all gastric contents were aspirated, the volume was recorded (Asp GRV), BV measurements were made by refractometry, and then the contents were reinstilled but diluted with 30 ml additional water. Finally, a small amount was reaspirated and repeat BV measurements were made. Three hours later, the entire procedure was repeated a second time. The BV ratio, calculated (Cal) GRV, and volume of formula remaining were calculated by derived equations. Mean BV ratios were significantly higher for those patients in Group 2 compared to those in Group 1. All but one of the 22 patients (95%) in Group 1 had a volume of formula remaining in the stomach estimated on both measurements to be less than the hourly infusion rate (all these patients had BV ratios <70%). In contrast, six of the 14 patients in Group 2 (43%) on both measurements were estimated to have volumes of formula remaining that were greater than the hourly infusion rate (all these patients had BV ratios >70%). Three of the Group 2 patients (21%) whose initial measurement showed evidence for retention of formula, improved on repeat follow-up measurement assuring adequate gastric emptying. The remaining five patients from Group 2 (35%) had a volume of formula remaining that was less than the hourly infusion rate on both measurements. The pattern of Asp GRVs and serial pre- and post-dilution BVs failed to differentiate these patients in Group 2 with potential emptying problems from those with sufficient gastric emptying. Refractometry and measurement of the BV may improve the clinical utilization of GRVs, by its ability to identify the component of formula within gastric contents and track changes in that component related to gastric emptying.
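
    A hedged sketch of the dilution principle behind the derived equations: adding a known water volume lets the total gastric volume be computed from the Brix values before (bv1) and after (bv2) dilution, and the formula's own Brix value gives the formula share of the contents. The paper's actual derived equations are not reproduced here; numbers are illustrative.

```python
def gastric_volumes(bv1, bv2, bv_formula, added_water_ml=30.0):
    """Mass balance C1*V = C2*(V + added) solved for total volume V."""
    total_vol = added_water_ml * bv2 / (bv1 - bv2)
    formula_vol = total_vol * bv1 / bv_formula      # formula share of contents
    return total_vol, formula_vol

print(gastric_volumes(bv1=6.0, bv2=4.0, bv_formula=12.0))  # (60.0, 30.0) ml
```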

  10. Altered Methylation in Tandem Repeat Element and Elemental Component Levels in Inhalable Air Particles

    PubMed Central

    Hou, Lifang; Zhang, Xiao; Zheng, Yinan; Wang, Sheng; Dou, Chang; Guo, Liqiong; Byun, Hyang-Min; Motta, Valeria; McCracken, John; Díaz, Anaité; Kang, Choong-Min; Koutrakis, Petros; Bertazzi, Pier Alberto; Li, Jingyun; Schwartz, Joel; Baccarelli, Andrea A.

    2014-01-01

Exposure to particulate matter (PM) has been associated with lung cancer risk in epidemiology investigations. Elemental components of PM have been suggested to have critical roles in PM toxicity, but the molecular mechanisms underlying their association with cancer risks remain poorly understood. DNA methylation has emerged as a promising biomarker for environmental-related diseases, including lung cancer. In this study, we evaluated the effects of PM elemental components on methylation of three tandem repeats in a highly-exposed population in Beijing, China. The Beijing Truck Driver Air Pollution Study was conducted shortly before the 2008 Beijing Olympic Games (June 15-July 27, 2008) and included 60 truck drivers and 60 office workers. On two days separated by 1-2 weeks, we measured blood DNA methylation of SATα, NBL2, D4Z4, and personal exposure to eight elemental components in PM2.5, including aluminum (Al), silicon (Si), sulfur (S), potassium (K), calcium (Ca), titanium (Ti), iron (Fe), and zinc (Zn). We estimated the associations of individual elemental components with each tandem repeat methylation in generalized estimating equations (GEE) models adjusted for PM2.5 mass and other covariates. Out of the eight examined elements, NBL2 methylation was positively associated with concentrations of Si (0.121, 95%CI: 0.030; 0.212, FDR=0.047) and Ca (0.065, 95%CI: 0.014; 0.115, FDR=0.047) in truck drivers. In office workers, SATα methylation was positively associated with concentrations of S (0.115, 95%CI: 0.034; 0.196, FDR=0.042). PM-associated differences in blood tandem-repeat methylation may help detect biological effects of the exposure and identify individuals who may eventually experience higher lung cancer risk. PMID:24273195
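
    A hedged sketch of the GEE setup this describes, using statsmodels. The data frame, file name, and column names (methylation outcome, element exposures, repeated-measure id, covariates) are hypothetical placeholders, not the study's variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pm_methylation.csv")        # hypothetical file with columns below
model = smf.gee(
    "nbl2 ~ si + ca + pm25 + age + temperature",
    groups="subject_id",                      # two repeated visits per subject
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),  # within-subject correlation
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```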

  11. Social conflicts elicit an N400-like component.

    PubMed

    Huang, Yi; Kendrick, Keith M; Yu, Rongjun

    2014-12-01

When people have different opinions, they often adjust their own attitude to match that of others, a phenomenon known as social conformity. How social conflicts trigger subsequent conformity remains unclear. One possibility is that a conflict with the group opinion is perceived as a violation of social information, analogous to using wrong grammar, and activates conflict monitoring and adjustment mechanisms. Using event related potential (ERP) recording combined with a face attractiveness judgment task, we investigated the neural encoding of social conflicts. We found that social conflicts elicit an N400-like negative deflection, which is more negative for conflicts with group opinions than for the no-conflict condition. The social conflict related signals also have a bi-directional profile similar to reward prediction error signals: the deflection was more negative for under-estimation (i.e., one's own ratings were smaller than group ratings) than for over-estimation, and the larger the difference between ratings, the larger the N400 amplitude. The N400 effects were significantly diminished in the non-social condition. We conclude that social conflicts are encoded in a bidirectional fashion in the N400-like component, similar to the pattern of reward-based prediction error signals. Our findings also suggest that the N400, a well-established ERP component encoding semantic violation, might be involved in social conflict processing and social learning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Influence of quality of life, self-perception, and self-esteem on orthodontic treatment need.

    PubMed

    Dos Santos, Patrícia R; Meneghim, Marcelo de C; Ambrosano, Glaucia M B; Filho, Mario Vedovello; Vedovello, Silvia A S

    2017-01-01

In this study, we aimed to assess the relationship between normative and perceived orthodontic treatment need associated with quality of life, self-esteem, and self-perception. The sample included 248 schoolchildren aged 12 years. The normative aspect of orthodontic treatment was assessed by the Dental Health Component and the Aesthetic Component of the Index of Orthodontic Treatment Need. The subjects were further evaluated for their oral health-related quality of life, self-esteem, and self-perception of oral esthetics. The Aesthetic Component of the Index of Orthodontic Treatment Need was considered the response variable, and generalized linear models were estimated using the GENMOD procedure (release 9.3, 2010; SAS Institute, Cary, NC). Model 1 was estimated with only the intercept, providing the basis for evaluating the reduction in variance in the other models studied; the variables were then tested sequentially, with P ≤ 0.05 as the criterion for remaining in the model. In the model, self-perception and self-esteem were statistically significant in relation to the perceived need for treatment. The normative need was significantly associated with the outcome variable and was not influenced by the independent variables. The normative need for orthodontic treatment was not overestimated by the perceived need, and the perceived need was not influenced by sex or the impact on quality of life. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  13. Gender Differences of Occupational Stress Associated with Suicidal Ideation among South Korean Employees: The Kangbuk Samsung Health Study.

    PubMed

    Kim, Sun-Young; Shin, Dong-Won; Oh, Kang-Seob; Kim, Eun-Jin; Park, Yang-Ri; Shin, Young-Chul; Lim, Se-Won

    2018-02-01

In this study, the relationship between occupational stress and suicidal ideation was investigated, focusing on gender differences among Korean employees. Cross-sectional data for 53,969 workers were collected at Kangbuk Samsung Hospital health screening centers. Risk of suicidal ideation was assessed using a self-reported questionnaire examining suicidal ideation during the past year. Occupational stress was measured using 24 items of the Korean Occupational Stress Scale-Short Form (KOSS-SF). Logistic regression analysis was employed to estimate the odds ratios and 95% confidence intervals of the relationships between suicidal ideation and the components of occupational stress. In multivariable-adjusted models, all job stress subscales contributed to increased risk of suicidal ideation in males. Most subscales, except insufficient job control and organizational system, were risk factors for suicidal ideation in females. Further adjustment for depression markedly attenuated these relationships. However, the effects of insufficient job control and lack of reward on suicidal ideation remained significant in males, and interpersonal conflict remained significant in females. The results suggest that occupational stress plays a significant role in increasing the risk of suicidal ideation through the elevation of depressive symptoms. Gender differences in the components of occupational stress associated with suicidal ideation were also observed.

  14. An examination of effect estimation in factorial and standardly-tailored designs

    PubMed Central

    Allore, Heather G; Murphy, Terrence E

    2012-01-01

Background Many clinical trials are designed to test an intervention arm against a control arm wherein all subjects are equally eligible for all interventional components. Factorial designs have extended this to test multiple intervention components and their interactions. A newer design, referred to as a ‘standardly-tailored’ design, is a multicomponent interventional trial that applies individual interventional components to modify risk factors identified a priori and tests whether health outcomes differ between treatment arms. Standardly-tailored designs do not require that all subjects be eligible for every interventional component. Although standardly-tailored designs yield an estimate for the net effect of the multicomponent intervention, it has not yet been shown whether they permit separate, unbiased estimation of individual component effects. The ability to estimate the most potent interventional components has direct bearing on conducting second stage translational research. Purpose We present statistical issues related to the estimation of individual component effects in trials of geriatric conditions using factorial and standardly-tailored designs. The medical community is interested in second stage translational research involving the transfer of results from a randomized clinical trial to a community setting. Before such research is undertaken, main effects and any synergistic and/or antagonistic interactions between them should be identified. Knowledge of the relative strength and direction of the effects of the individual components and their interactions facilitates the successful transfer of clinically significant findings and may potentially reduce the number of interventional components needed. Therefore, the current inability of the standardly-tailored design to provide unbiased estimates of individual interventional components is a serious limitation in its applicability to second stage translational research. Methods We discuss estimation of individual component effects from the family of factorial designs and this limitation of standardly-tailored designs. We use the phrase ‘factorial designs’ to describe full-factorial designs and their derivatives, including the fractional factorial, partial factorial, incomplete factorial, and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components. Results Full factorial designs and their variants are the most common multicomponent trial design described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology. Limitations Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components. Discussion Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis. Conclusion Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs. PMID:18375650
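
    A toy illustration of why full factorial designs permit separate effect estimation: in a 2x2 factorial with +/-1 effect coding, main effects and the interaction are estimable from a saturated linear model. Outcome values are simulated purely for illustration.

```python
import numpy as np

a = np.array([-1, -1, 1, 1, -1, -1, 1, 1])       # component A off/on
b = np.array([-1, 1, -1, 1, -1, 1, -1, 1])       # component B off/on
rng = np.random.default_rng(0)
y = 5 + 1.0 * a + 0.5 * b + 0.25 * a * b + rng.normal(0, 0.1, 8)
X = np.column_stack([np.ones(8), a, b, a * b])   # saturated design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # intercept, A effect, B effect, A x B interaction
```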

  15. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    PubMed Central

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where ample run-to-failure condition monitoring data are available have been researched extensively, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data from normal operation to failure. Only a certain number of condition indicators over a certain period can then be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To resolve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume that the degradation trajectory follows a certain distribution. The other is that it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
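
    A hedged sketch of the windowed rolling-prediction step (the paper's change-point detection and data-driven window sizing are omitted): a small network maps the last w indicator values to the next one, then feeds its own predictions back to extrapolate the degradation trajectory. The synthetic series is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rolling_forecast(series, w=10, steps=20):
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, y)
    window = list(series[-w:])
    preds = []
    for _ in range(steps):                       # rolling multi-step prediction
        nxt = net.predict(np.array(window[-w:]).reshape(1, -1))[0]
        preds.append(nxt)
        window.append(nxt)                       # feed prediction back in
    return np.array(preds)

series = 100 - 0.5 * np.arange(120) + np.random.default_rng(0).normal(0, 0.5, 120)
print(rolling_forecast(series)[:5])
```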

  16. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

A MINQUE(1) procedure, i.e., the minimum norm quadratic unbiased estimation (MINQUE) method with 1 as all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for the estimation of variance and covariance components and for the prediction of genetic merits.

  17. Estimating locations and total magnetization vectors of compact magnetic sources from scalar, vector, or tensor magnetic measurements through combined Helbig and Euler analysis

    USGS Publications Warehouse

    Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.

    2007-01-01

The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. © 2007 Society of Exploration Geophysicists.
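
    A minimal sketch of the Euler-equation step for one data window: solve (x-x0)dT/dx + (y-y0)dT/dy + (z-z0)dT/dz = N(B - T) in the least-squares sense for the source position (x0, y0, z0) and background field B, with structural index N = 3 assumed for a dipole. Gradients are taken as precomputed inputs.

```python
import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, N=3.0):
    """All inputs are 1-D arrays over the data window; Tx, Ty, Tz are gradients."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B      # source location and background level
```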

  18. On the role of horizontal displacements in the exhumation of high pressure metamorphic rocks

    NASA Astrophysics Data System (ADS)

    Brun, J.-P.; Tirel, C.; Philippon, M.; Burov, E.; Faccenna, C.; Gueydan, F.; Lebedev, S.

    2012-04-01

High pressure metamorphic rocks exposed in the core of many mountain belts correspond to various types of upper crustal materials that have been buried to mantle depths and, soon after, brought back to the surface at mean displacement rates of up to a few cm/yr, comparable to those of plate boundaries. The vertical component of the exhumation velocity of HP rocks is commonly well constrained by pressure estimates from petrology and by geochronological data, whereas the horizontal component generally remains difficult or impossible to estimate. Consequently, most available models, if not all, attempt to simulate exhumation with a minimal horizontal component of displacement. Such models require that the viscosity of HP rocks be low and/or the erosion rate be large, i.e., at least equal to the rate of exhumation. However, in some regions like the Aegean, where the exhumation of blueschists and eclogites is driven by slab rollback, it can be shown that the horizontal component of exhumation-related displacement, obtained from map-view restoration, is 5 to 7 times larger than the vertical one deduced from metamorphic pressure estimates. Using finite element models performed with FLAMAR, we show that such a situation simply results from the subduction of small continental blocks (<500 km) that stimulate subduction rollback. The continental block is dragged downward and sheared off the downgoing mantle slab by buoyancy forces. Exhumation of the crustal block occurs through a one-step caterpillar-type walk, with the block's tail slipping along a basal décollement, approaching the head and making a large buckle, which then unrolls at the surface as soon as the entire block is delaminated. Finally, the crustal block emplaces at the surface in the space created by trench retreat. This process of exhumation requires neither rheological weakening of HP rocks nor high rates of erosion.

  19. Determining an empirical estimate of the tracking inconsistency component for true astrometric uncertainties

    NASA Astrophysics Data System (ADS)

    Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise

    2017-10-01

The asteroid community is moving towards the implementation of a new astrometric reporting format. This new format will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model comprises the uncertainties in centroiding on the trailed stars and the asteroid, together with the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements is reliant on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, the seeing, and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and by catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Through comparison of the astrometric uncertainties with the orbit solution residuals, it was apparent that a component of the error analysis remained unaccounted for, arising from cloud coverage and thickness, telescope tracking inconsistencies, and variable seeing. This work attempts to quantify the tracking inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to construct an empirical estimate of the tracking inconsistency component. This work is funded by NASA grant NXX13AI64G.
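
    A hedged sketch of the empirical idea, assuming the uncertainty budget is a quadrature sum: if the centroiding and astrometric-solution terms are already modeled, the tracking-inconsistency term can be estimated as the quadrature excess of the observed orbit-fit residuals over the modeled terms. Numbers are illustrative.

```python
import numpy as np

def tracking_component(residual_rms, sigma_centroid, sigma_solution):
    """All inputs in the same units (e.g., arcsec); returns the unmodeled term."""
    modeled = sigma_centroid**2 + sigma_solution**2
    excess = residual_rms**2 - modeled
    return np.sqrt(excess) if excess > 0 else 0.0

print(tracking_component(0.25, 0.15, 0.10))   # ~0.17 arcsec of unmodeled error
```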

  20. Addressing the selectivity issue of cobalt doped zinc oxide thin film iso-butane sensors: Conductance transients and principal component analyses

    NASA Astrophysics Data System (ADS)

    Ghosh, A.; Majumder, S. B.

    2017-07-01

Iso-butane (i-C4H10) is one of the major components of liquefied petroleum gas, which is used as a fuel in domestic and industrial applications. Developing selective chemi-resistive i-C4H10 thin film sensors remains a major challenge. Two strategies were undertaken to differentiate carbon monoxide, hydrogen, and iso-butane gases from the measured conductance transients of cobalt doped zinc oxide thin films. Following the first strategy, the response and recovery transients of the conductance in these gas environments are fitted using the Langmuir adsorption kinetic model to estimate the heat of adsorption, the response time constant, and the activation energies for adsorption (response) and desorption (recovery). Although these test gases have seemingly different vapor densities, molecular diameters, and reactivities, analysis of the estimated heats of adsorption and activation energies (for both adsorption and desorption) could not differentiate these gases unequivocally. However, we found that the lower the vapor density, the faster the response time, irrespective of the test gas concentration. As a second strategy, we demonstrated that feature extraction from the conductance transients (using fast Fourier transformation) in conjunction with a pattern recognition algorithm (principal component analysis) is more fruitful for addressing the cross-sensitivity of Co doped ZnO thin film sensors. We found that although dispersion among different concentrations of hydrogen and carbon monoxide could not be avoided, each of these three gases forms a distinct cluster in the plot of principal component 2 versus 1 and can therefore easily be differentiated.
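
    A hedged sketch of the second strategy: use the low-order FFT magnitudes of each conductance transient as a feature vector, then project all transients onto two principal components, in which the three gases would be expected to form clusters. The array contents and feature count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def gas_feature_space(transients, n_coeffs=8):
    """transients: (n_samples, n_timepoints) conductance transients."""
    feats = np.abs(np.fft.rfft(transients, axis=1))[:, :n_coeffs]
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardize
    return PCA(n_components=2).fit_transform(feats)   # PC1 vs PC2 coordinates

rng = np.random.default_rng(1)
transients = rng.normal(size=(30, 256))   # placeholder conductance transients
coords = gas_feature_space(transients)    # points to plot as PC2 versus PC1
```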

  1. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods (e.g., [1], [2]), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  2. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
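
    A quick check of the consumption-bias arithmetic above: the ratio of the laboratory retention efficiency (ρ = 0.45) to each field estimate gives the bias of the corresponding model version's consumption estimate.

```python
# Consumption bias implied by field vs. laboratory retention efficiency
rho_lab = 0.45
print(f"original model: {100 * (rho_lab / 0.28 - 1):+.0f}%")   # ~ +61% (overestimate)
print(f"adjusted model: {100 * (rho_lab / 0.56 - 1):+.0f}%")   # ~ -20% (underestimate)
```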

  3. Methods for estimating aboveground biomass and its components for five Pacific Northwest tree species

    Treesearch

    Krishna P. Poudel; Temesgen. Hailemariam

    2015-01-01

Performance of three groups of methods to estimate total and/or component aboveground biomass was evaluated using data collected from destructively sampled trees in different parts of Oregon. The first group of methods used an analytical approach to estimate total and component biomass using existing equations, and produced biased estimates for our dataset. The second...

  4. Changes in composition and porosity occurring during the thermal degradation of wood and wood components

    USGS Publications Warehouse

    Rutherford, David W.; Wershaw, Robert L.; Cox, Larry G.

    2005-01-01

Samples of pine and poplar wood, pine bark, and purified cellulose and lignin were charred at temperatures ranging from 250°C to 500°C for times ranging from 1 hour to 168 hours. Changes in composition were examined by Fourier Transform Infrared (FTIR) and 13C Nuclear Magnetic Resonance (NMR) spectrometry, mass loss, and elemental composition (carbon, hydrogen, and oxygen) of the char. Structural changes were examined by changes in porosity as measured by nitrogen gas adsorption. 13C NMR spectrometry, mass loss, and elemental composition were combined to estimate the mass of aromatic and aliphatic carbon remaining in the char. Mass loss and elemental composition were combined to estimate the chemical composition of material lost for various time intervals of heating. These analyses showed that aliphatic components in the test materials were either lost or converted to aromatic carbon early in the charring process. Nitrogen adsorption showed that no porosity develops for any of the test materials with heating at 250°C, even though substantial loss of material and changes in composition occurred. Porosity development coincided with the loss of aromatic carbon, indicating that micropores were developing within a fused-ring matrix.

  5. Global and System-Specific Resting-State fMRI Fluctuations Are Uncorrelated: Principal Component Analysis Reveals Anti-Correlated Networks

    PubMed Central

    Carbonell, Felix; Bellec, Pierre

    2011-01-01

    The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)–based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated with the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations. PMID:22444074
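
    A minimal numerical sketch of this estimator, assuming the resting-state data are arranged as a time-by-voxel array (the function name and shapes are illustrative, not from the paper):

        import numpy as np

        def remove_pc_global_effect(ts):
            """Regress a PC-based global-effect estimator out of fMRI data.

            ts : (n_timepoints, n_voxels) resting-state time series.
            Sketch of the idea above: take the temporal principal component
            that correlates best with the global average signal (GAS) and
            remove it from every voxel by linear regression.
            """
            gas = ts.mean(axis=1)                        # global average signal
            x = ts - ts.mean(axis=0)                     # center each voxel
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            corrs = [abs(np.corrcoef(u[:, k], gas)[0, 1]) for k in range(u.shape[1])]
            g = u[:, int(np.argmax(corrs))]              # PC-based global estimator
            beta = g @ x / (g @ g)                       # per-voxel regression weights
            return x - np.outer(g, beta)                 # cleaned time series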

  6. Migration of scattered teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Rondenay, S.

    1999-06-01

    The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.

  7. Theory-Based Parameterization of Semiotics for Measuring Pre-literacy Development

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.

    2013-09-01

    A probabilistic model was applied to the problem of measuring pre-literacy in young children. First, semiotic philosophy and contemporary cognition research were conceptually integrated to establish theoretical foundations for rating 14 characteristics of children's drawings and narratives (N = 120). Then ratings were transformed with a Rasch model, which estimated linear item parameter values that accounted for 79 percent of rater variance. Principal Components Analysis of the item residual matrix confirmed that the variance remaining after item calibration was largely unsystematic. Validation analyses found positive correlations between semiotic measures and preschool literacy outcomes. Practical implications of a semiotics dimension for preschool practice were discussed.

  8. Recovering TMS-evoked EEG responses masked by muscle artifacts.

    PubMed

    Mutanen, Tuomas P; Kukkonen, Matleena; Nieminen, Jaakko O; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2016-10-01

    Combined transcranial magnetic stimulation (TMS) and electroencephalography (EEG) often suffers from large muscle artifacts. Muscle artifacts can be removed using signal-space projection (SSP), but this can make the visual interpretation of the remaining EEG data difficult. We suggest using an additional step after SSP that we call source-informed reconstruction (SIR). SSP-SIR substantially improves the signal quality of artifactual TMS-EEG data, causing minimal distortion in the neuronal signal components. In the SSP-SIR approach, we first project out the muscle artifact using SSP. Utilizing an anatomical model and the remaining signal, we estimate an equivalent source distribution in the brain. Finally, we map the obtained source estimate onto the original signal space, again using anatomical information. This approach restores the neuronal signals in the sensor space and interpolates EEG traces onto the completely rejected channels. The introduced algorithm efficiently suppresses TMS-related muscle artifacts in EEG while retaining the neuronal EEG topographies and signals well. With the presented method, we can remove muscle artifacts from TMS-EEG data and recover the underlying brain responses without compromising the readability of the signals of interest. Copyright © 2016 Elsevier Inc. All rights reserved.
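
    The SSP step alone reduces to a subspace out-projection; a minimal sketch follows (array shapes and the artifact-subspace input are assumptions, and the SIR step, which requires a head model, is omitted):

        import numpy as np

        def ssp_project_out(data, artifact_topos):
            """Project an artifact subspace out of multichannel EEG.

            data           : (n_channels, n_samples) TMS-EEG epoch.
            artifact_topos : (n_channels, k) columns spanning the muscle-
                             artifact subspace, e.g. leading PCA vectors of
                             artifact-dominated data.
            """
            u, _ = np.linalg.qr(artifact_topos)          # orthonormal basis
            p = np.eye(data.shape[0]) - u @ u.T          # out-projection operator
            return p @ data                              # artifact-suppressed epoch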

  9. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.

  10. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr⁻¹ to more than 4000 kt yr⁻¹ of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005-2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005-2014 period as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr⁻¹ and not detected by OMI.

  11. The FIA Panel Design and Compatible Estimators for the Components of Change

    Treesearch

    Francis A. Roesch

    2006-01-01

    The FIA annual panel design and its relation to compatible estimation systems for the components of change are discussed. Estimation for the traditional components of growth, as presented by Meyer (1953, Forest Mensuration), is bypassed in favor of a focus on estimation for the discrete analogs to Eriksson's (1995, For. Sci. 41(4):796-822) time invariant redefinitions...

  12. Observations of Near-Field Rotational Motions from Oklahoma Seismicity using Applied Technology Associate Sensors

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Anthony, R. E.; Holland, A. A.; Wilson, D. C.

    2017-12-01

    Characterizing rotational motions from moderate-sized earthquakes in the near-field has the potential to improve earthquake engineering and seismic gradiometry by better characterizing the rotational component of the seismic wavefield, but has remained challenging due to the limited development of portable, low-noise rotational sensors. Here, we test Applied Technology Associate (ATA) Proto-Seismic Magnetohydrodynamic (SMHD) three-component rotational rate sensors at the Albuquerque Seismological Laboratory (ASL) for self-noise and sensitivity before deploying them at U.S. Geological Survey (USGS) temporary aftershock station OK38 in Waynoka, Oklahoma. The sensors have low self-noise levels below 2 Hz, making them ideal for recording local rotations. From April 11, 2017 to June 6, 2017 we recorded the translational and rotational motions of over 155 earthquakes of ML≥2.0 within 2 degrees of the station. Using the recorded events we compare Peak Ground Velocity (PGV) with Peak Ground Rotation Rate (PGR). For example, we measured a maximal PGR of 0.00211 radians/s and 0.00186 radians/s on the horizontal components of the two rotational sensors during the Mwr=4.2 event on May 13, 2017, which was 0.5 km from the station. Similarly, our PGR values for the vertical rotational components were 0.00112 radians/s and 0.00085 radians/s. We also measured Peak Ground Rotations (PGω) as a function of seismic moment, and compared mean vertical Power Spectral Density (PSD) power levels with mean horizontal PSD power levels. We compute apparent phase velocity directly from the rotational data, which may improve estimates of local site effects. Finally, by comparing various rotational and translational components we examine potential implications for estimating local event source parameters, which may help in identifying phenomena such as repeating earthquakes through differences in the correlation of the rotational components.

  13. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  14. A Model To Estimate Carbon Dioxide Injectivity and Storage Capacity for Geological Sequestration in Shale Gas Wells.

    PubMed

    Edwards, Ryan W J; Celia, Michael A; Bandilla, Karl W; Doster, Florian; Kanno, Cynthia M

    2015-08-04

    Recent studies suggest the possibility of CO2 sequestration in depleted shale gas formations, motivated by large storage capacity estimates in these formations. Questions remain regarding the dynamic response and practicality of injection of large amounts of CO2 into shale gas wells. A two-component (CO2 and CH4) model of gas flow in a shale gas formation including adsorption effects provides the basis to investigate the dynamics of CO2 injection. History-matching of gas production data allows for formation parameter estimation. Application to three shale gas-producing regions shows that CO2 can only be injected at low rates into individual wells and that individual well capacity is relatively small, despite significant capacity variation between shale plays. The estimated total capacity of an average Marcellus Shale well in Pennsylvania is 0.5 million metric tonnes (Mt) of CO2, compared with 0.15 Mt in an average Barnett Shale well. Applying the individual well estimates to the total number of existing and permitted planned wells (as of March, 2015) in each play yields a current estimated capacity of 7200-9600 Mt in the Marcellus Shale in Pennsylvania and 2100-3100 Mt in the Barnett Shale.

  15. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues, a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
The weightings of the prior belief and the information contained in the sampled data depend on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighted more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
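
    As an illustration of this precision weighting, a minimal conjugate normal-normal update (function name and inputs are hypothetical, not from the report):

        import numpy as np

        def bayes_update_rul(prior_mean, prior_var, data, data_var):
            """Conjugate normal-normal update of an RUL estimate.

            prior_mean, prior_var : estimate and uncertainty carried over
                                    from the previous prognostic type
            data                  : new RUL-related observations (1-D array)
            data_var              : variance of a single observation
            The posterior weighs prior and data by their precisions, so the
            data dominate as more samples arrive -- the behavior described
            above.
            """
            n = len(data)
            prec_prior = 1.0 / prior_var                 # prior precision
            prec_data = n / data_var                     # data precision grows with n
            post_var = 1.0 / (prec_prior + prec_data)
            post_mean = post_var * (prec_prior * prior_mean + prec_data * np.mean(data))
            return post_mean, post_var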

  16. In-vivo quantitative measurement of tissue oxygen saturation of human webbing using a transmission type continuous-wave near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Aizimu, Tuerxun; Adachi, Makoto; Nakano, Kazuya; Ohnishi, Takashi; Nakaguchi, Toshiya; Takahashi, Nozomi; Nakada, Taka-aki; Oda, Shigeto; Haneishi, Hideaki

    2018-02-01

    Near-infrared spectroscopy (NIRS) is a noninvasive method for monitoring tissue oxygen saturation (StO2). Many commercial NIRS devices are presently available. However, the precision of those devices is relatively poor because they use a reflectance model, with which it is difficult to account for blood volume and other unchanging components of the tissue. Human webbing is a thin part of the hand and suitable for measuring spectral transmittance. In this paper, we present a method for measuring StO2 of human webbing from transmissive continuous-wave near-infrared spectroscopy (CW-NIRS) data. The method is based on the modified Beer-Lambert law (MBL) and consists of two steps. In the first step, we apply pressure upstream of the measurement point to perturb the concentrations of deoxy- and oxy-hemoglobin while leaving the other components unchanged, and measure the spectral signals. From the measured data, the spectral absorbance due to components other than hemoglobin is calculated. In the second step, a spectral measurement is performed at an arbitrary time instance and the spectral absorbance obtained in step 1 is subtracted from the measured absorbance. The tissue oxygen saturation (StO2) is estimated from the remaining data. The method was evaluated on an arterial occlusion test (AOT) and a venous occlusion test (VOT). In the evaluation experiment, we confirmed that reasonable values of StO2 were obtained by the proposed method.
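
    A minimal sketch of the second step under the MBL, assuming the hemoglobin extinction coefficients are taken from published tables (all names and shapes here are illustrative):

        import numpy as np

        def estimate_sto2(delta_absorbance, eps_hbo2, eps_hb, pathlength=1.0):
            """Least-squares StO2 estimate under the modified Beer-Lambert law.

            delta_absorbance : (n_wavelengths,) absorbance after subtracting
                               the non-hemoglobin background from step 1
            eps_hbo2, eps_hb : (n_wavelengths,) extinction coefficients of
                               oxy- and deoxy-hemoglobin (published tables)
            pathlength       : effective optical path length (assumed known)
            """
            design = np.column_stack([eps_hbo2, eps_hb]) * pathlength
            conc, *_ = np.linalg.lstsq(design, delta_absorbance, rcond=None)
            c_oxy, c_deoxy = conc
            return c_oxy / (c_oxy + c_deoxy)             # StO2 = HbO2 / total Hb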

  17. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle.

    PubMed

    Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E

    2013-04-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, capturing color images requires a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared flash image. One such method estimates the luminance component and the chroma component of the improved color image from different image sources [1]. The luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: the method needs to generate learning data pairs, and its processes and algorithm are complex, making practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
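
    The abstract does not give the exact weighting formula, so the sketch below adopts one plausible choice consistent with the description, weighting each source by its contrast (standard deviation over mean):

        import numpy as np

        def fuse_luminance(nir, denoised_luma):
            """Weighted fusion of an NIR image with a denoised color luminance.

            Weights are derived from the mean and standard deviation of each
            image; here each source is weighted by its contrast (std / mean),
            one plausible reading of the description above.
            """
            w_nir = np.std(nir) / (np.mean(nir) + 1e-9)
            w_rgb = np.std(denoised_luma) / (np.mean(denoised_luma) + 1e-9)
            return (w_nir * nir + w_rgb * denoised_luma) / (w_nir + w_rgb)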

  19. Comparative analysis of gene regulatory networks: from network reconstruction to evolution.

    PubMed

    Thompson, Dawn; Regev, Aviv; Roy, Sushmita

    2015-01-01

    Regulation of gene expression is central to many biological processes. Although reconstruction of regulatory circuits from genomic data alone is therefore desirable, this remains a major computational challenge. Comparative approaches that examine the conservation and divergence of circuits and their components across strains and species can help reconstruct circuits as well as provide insights into the evolution of gene regulatory processes and their adaptive contribution. In recent years, advances in genomic and computational tools have led to a wealth of methods for such analysis at the sequence, expression, pathway, module, and entire network level. Here, we review computational methods developed to study transcriptional regulatory networks using comparative genomics, from sequence to functional data. We highlight how these methods use evolutionary conservation and divergence to reliably detect regulatory components as well as estimate the extent and rate of divergence. Finally, we discuss the promise and open challenges in linking regulatory divergence to phenotypic divergence and adaptation.

  20. Wind Turbine Contingency Control Through Generator De-Rating

    NASA Technical Reports Server (NTRS)

    Frost, Susan; Goebel, Kai; Balas, Mark

    2013-01-01

    Maximizing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. In that context, systems health management aims to assess the state-of-health of components within a wind turbine, to estimate remaining life, and to aid in autonomous decision-making that minimizes damage to the turbine. Advanced contingency control is one way to enable autonomous decision-making by providing the mechanism for safe and efficient turbine operation. The work reported herein explores the integration of condition monitoring of wind turbines with contingency control to balance the trade-offs between maintaining system health and energy capture. The contingency control involves de-rating the generator operating point to achieve reduced loads on the wind turbine. Results are demonstrated using a high-fidelity simulator of a utility-scale wind turbine.

  1. Evaluation of the robustness of estimating five components from a skin spectral image

    NASA Astrophysics Data System (ADS)

    Akaho, Rina; Hirose, Misa; Tsumura, Norimichi

    2018-04-01

    We evaluated the robustness of a method used to estimate five components (i.e., melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths against noise and a change in epidermis thickness. We also estimated the five components from recorded images of age spots and circles under the eyes using the method. We found that noise in the image must be no more than 0.1% to accurately estimate the five components and that the thickness of the epidermis affects the estimation. We acquired the distribution of major causes for age spots and circles under the eyes by applying the method to recorded spectral images.

  2. U.S. War Costs: Two Parts Temporary, One Part Permanent.

    PubMed

    Edwards, Ryan D

    2014-05-01

    Military spending, fatalities, and the destruction of capital, all of which are immediately felt and are often large, are the most overt costs of war. They are also relatively short-lived. But the costs of war borne by combatants and their caretakers, which includes families, communities, and the modern welfare state, tend instead to be lifelong. In this paper I show that a significant component of the budgetary costs associated with U.S. wars is long-lived. One third to one half of the total present value of historical war costs are benefits distributed over the remaining life spans of veterans and their dependents. Even thirty years after the end of hostilities, typically half of all benefits remain to be paid. Estimates of the costs of injuries and deaths suggest that the private burden of war borne by survivors, namely the uncompensated costs of service-related injuries, are also large and long-lived.

  3. U.S. War Costs: Two Parts Temporary, One Part Permanent

    PubMed Central

    Edwards, Ryan D.

    2014-01-01

    Military spending, fatalities, and the destruction of capital, all of which are immediately felt and are often large, are the most overt costs of war. They are also relatively short-lived. But the costs of war borne by combatants and their caretakers, which includes families, communities, and the modern welfare state, tend instead to be lifelong. In this paper I show that a significant component of the budgetary costs associated with U.S. wars is long-lived. One third to one half of the total present value of historical war costs are benefits distributed over the remaining life spans of veterans and their dependents. Even thirty years after the end of hostilities, typically half of all benefits remain to be paid. Estimates of the costs of injuries and deaths suggest that the private burden of war borne by survivors, namely the uncompensated costs of service-related injuries, are also large and long-lived. PMID:25221367

  4. Basic visual dysfunction allows classification of patients with schizophrenia with exceptional accuracy.

    PubMed

    González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M

    2014-10-01

    Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
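
    For reference, a plain LMS ADALINE harmonic estimator is sketched below; it is the textbook baseline rather than the modified algorithm of the article, and all parameter values are illustrative:

        import numpy as np

        def adaline_harmonics(signal, fs, f0, n_harmonics=5, mu=0.05):
            """LMS ADALINE estimate of harmonic amplitudes in a periodic signal.

            signal      : sampled waveform (1-D array)
            fs          : sampling frequency in Hz
            f0          : fundamental frequency in Hz
            n_harmonics : number of harmonics tracked
            mu          : LMS learning rate (illustrative value)
            """
            t = np.arange(len(signal)) / fs
            k = np.arange(1, n_harmonics + 1)
            w = np.zeros(2 * n_harmonics)                # sin then cos weights
            for n, y in enumerate(signal):
                basis = np.concatenate([np.sin(2 * np.pi * f0 * k * t[n]),
                                        np.cos(2 * np.pi * f0 * k * t[n])])
                err = y - w @ basis                      # instantaneous model error
                w += 2 * mu * err * basis                # LMS weight update
            # amplitude of the k-th harmonic from its sine/cosine weight pair
            return np.hypot(w[:n_harmonics], w[n_harmonics:])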

  6. Cycling operation of fossil plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, U.S.; Weiss, M.D.; White, W.H.

    1991-05-01

    This report presents a methodology for examining the economic feasibility of converting fossil power plants from baseload to cycling service. It employs this approach to examine a proposed change of Pepco's Potomac River units 3, 4, and 5 from baseload operation to two-shift cycling. The project team first reviewed all components and listed potential cycling effects involved in the conversion of Potomac River units 3, 4, and 5. They developed general cycling plant screening criteria including the number of hot, warm, or cold restarts per year and desired load ramp rates. In addition, they evaluated specific limitations on the boiler, turbine, and the balance of plant. They estimated the remaining life of the facility through component evaluation and boiler testing and also identified and prioritized potential component deficiencies by their impact on key operational factors: safety, heat rate, turndown, startup/shutdown time, and plant availability. They developed solutions to these problems; and, since many solutions mitigate more than one problem, they combined and reprioritized these synergistic solutions. Economic assessments were performed on all solutions. 13 figs., 20 tabs.

  7. Tuned by experience: How orientation probability modulates early perceptual processing.

    PubMed

    Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt

    2017-09-01

    Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive 'P300' component which might be related to either surprise or decision-making. However, the early 'C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Genetic Analysis of East Asian Grape Cultivars Suggests Hybridization with Wild Vitis.

    PubMed

    Goto-Yamamoto, Nami; Sawler, Jason; Myles, Sean

    2015-01-01

    Koshu is a grape cultivar native to Japan and is one of the country's most important cultivars for wine making. Koshu and other oriental grape cultivars are widely believed to belong to the European domesticated grape species Vitis vinifera. To verify the domesticated origin of Koshu and four other cultivars widely grown in China and Japan, we genotyped 48 ancestry informative single nucleotide polymorphisms (SNPs) and estimated wild and domesticated ancestry proportions. Our principal components analysis (PCA) based ancestry estimation revealed that Koshu is 70% V. vinifera, and that the remaining 30% of its ancestry is most likely derived from wild East Asian Vitis species. Partial sequencing of chloroplast DNA suggests that Koshu's maternal line is derived from the Chinese wild species V. davidii or a closely related species. Our results suggest that many traditional East Asian grape cultivars such as Koshu were generated from hybridization events with wild grape species.

  9. Melt damage simulation of W-macrobrush and divertor gaps after multiple transient events in ITER

    NASA Astrophysics Data System (ADS)

    Bazylev, B. N.; Janeschitz, G.; Landman, I. S.; Loarte, A.; Pestchanyi, S. E.

    2007-06-01

    Tungsten in the form of macrobrush structure is foreseen as one of two candidate materials for the ITER divertor and dome. In ITER, even for moderate and weak ELMs when a thin shielding layer does not protect the armour surface from the dumped plasma, the main mechanisms of metallic target damage remain surface melting and melt motion erosion, which determines the lifetime of the plasma facing components. The melt erosion of W-macrobrush targets with different geometry of brush surface under the heat loads caused by weak ELMs is numerically investigated using the modified code MEMOS. The optimal angle of brush surface inclination that provides a minimum of surface roughness is estimated for given inclination angles of impacting plasma stream and given parameters of the macrobrush target. For multiple disruptions the damage of the dome gaps and the gaps between divertor cassettes caused by the radiation impact is estimated.

  10. Ocean Data Assimilation in Support of Climate Applications: Status and Perspectives.

    PubMed

    Stammer, D; Balmaseda, M; Heimbach, P; Köhl, A; Weaver, A

    2016-01-01

    Ocean data assimilation brings together observations with known dynamics encapsulated in a circulation model to describe the time-varying ocean circulation. Its applications are manifold, ranging from marine and ecosystem forecasting to climate prediction and studies of the carbon cycle. Here, we address only climate applications, which range from improving our understanding of ocean circulation to estimating initial or boundary conditions and model parameters for ocean and climate forecasts. Because of differences in underlying methodologies, data assimilation products must be used judiciously and selected according to the specific purpose, as not all related inferences would be equally reliable. Further advances are expected from improved models and methods for estimating and representing error information in data assimilation systems. Ultimately, data assimilation into coupled climate system components is needed to support ocean and climate services. However, maintaining the infrastructure and expertise for sustained data assimilation remains challenging.

  11. Estimating the executive demands of a one-back choice reaction time task by means of the selective interference paradigm.

    PubMed

    Szmalec, Arnaud; Vandierendonck, André

    2007-08-01

    The present study proposes a new executive task, the one-back choice reaction time (RT) task, and implements the selective interference paradigm to estimate the executive demands of the processing components involved in this task. Based on the similarities between a one-back choice RT task and the n-back updating task, it was hypothesized that one-back delaying of a choice reaction involves executive control. In three experiments, framed within Baddeley's (1986) working-memory model, a one-back choice RT task, a choice RT task, articulatory suppression, and matrix tapping were performed concurrently with primary tasks involving verbal, visuospatial, and executive processing. The results demonstrate that one-back delaying of a choice reaction interferes with tasks requiring executive control, while the potential interference at the level of the verbal or visuospatial working memory slave systems remains minimal.

  12. Global distribution and surface activity of macromolecules in offline simulations of marine organic chemistry

    DOE PAGES

    Ogunro, Oluwaseun O.; Burrows, Susannah M.; Elliott, Scott; ...

    2015-10-13

    Here, organic macromolecules constitute high-percentage components of remote sea spray. They enter the atmosphere through adsorption onto bubbles followed by bursting at the ocean surface, and go on to influence the chemistry of the fine-mode aerosol. We present a global estimate of mixed-layer organic macromolecular distributions, driven by offline marine systems model output. The approach permits estimation of oceanic concentrations and bubble film surface coverages for several classes of organic compound. Mixed-layer levels are computed from the output of a global ocean biogeochemistry model by relating the macromolecules to standard biogeochemical tracers. Steady state is assumed for labile forms, and for longer-lived components we rely on ratios to existing transported variables. Adsorption is then represented through conventional Langmuir isotherms, with equilibria deduced from laboratory analogs. Open water concentrations locally exceed one micromolar carbon for the total of protein, polysaccharide and refractory heteropolycondensate. The shorter-lived lipids remain confined to regions of strong biological activity. Results are evaluated against available measurements for all compound types, and agreement is generally quite reasonable. Global distributions are further estimated for both fractional coverage of bubble films at the air-water interface and the two-dimensional concentration excess. Overall, we show that macromolecular mapping provides a novel tool for the comprehension of oceanic surfactant distributions. Results may prove useful in planning field experiments and assessing the potential response of surface chemical behaviors to global change.
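
    The Langmuir step maps a bulk concentration to a fractional bubble-film coverage; a one-line sketch (the equilibrium constant is a placeholder to be taken from the laboratory analogs):

        def langmuir_coverage(conc, k_eq):
            """Fractional bubble-film coverage from a Langmuir isotherm.

            conc : bulk macromolecule concentration (e.g., mol C per liter)
            k_eq : equilibrium constant deduced from laboratory analogs
                   (placeholder; compound-specific in the study)
            """
            return k_eq * conc / (1.0 + k_eq * conc)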

  13. Uncertainty in geocenter estimates in the context of ITRF2014

    NASA Astrophysics Data System (ADS)

    Riddell, Anna R.; King, Matt A.; Watson, Christopher S.; Sun, Yu; Riva, Riccardo E. M.; Rietbroek, Roelof

    2017-05-01

    Uncertainty in the geocenter position and its subsequent motion affects positioning estimates on the surface of the Earth and downstream products such as site velocities, particularly the vertical component. The current version of the International Terrestrial Reference Frame, ITRF2014, derives its origin as the long-term averaged center of mass as sensed by satellite laser ranging (SLR), and by definition, it adopts only linear motion of the origin with uncertainty determined using a white noise process. We compare weekly SLR translations relative to the ITRF2014 origin, with network translations estimated from station displacements from surface mass transport models. We find that the proportion of variance explained in SLR translations by the model-derived translations is on average less than 10%. Time-correlated noise and nonlinear rates, particularly evident in the Y and Z components of the SLR translations with respect to the ITRF2014 origin, are not fully replicated by the model-derived translations. This suggests that translation-related uncertainties are underestimated when a white noise model is adopted and that substantial systematic errors remain in the data defining the ITRF origin. When using a white noise model, we find uncertainties in the rate of SLR X, Y, and Z translations of ±0.03, ±0.03, and ±0.06, respectively, increasing to ±0.13, ±0.17, and ±0.33 (mm/yr, 1 sigma) when a power law and white noise model is adopted.

  14. Apparatus for determining past-service conditions and remaining life of thermal barrier coatings and components having such coatings

    DOEpatents

    Srivastava, Alok Mani; Setlur, Anant Achyut; Comanzo, Holly Ann; Devitt, John William; Ruud, James Anthony; Brewer, Luke Nathaniel

    2004-05-04

    An apparatus for determining past-service conditions and/or remaining useful life of a component of a combustion engine and/or a thermal barrier coating ("TBC") of the component comprises a radiation source that provides the exciting radiation to the TBC to excite a photoluminescent ("PL") material contained therein, a radiation detector for detecting radiation emitted by the PL material, and means for relating a characteristic of an emission spectrum of the PL material to the amount of a crystalline phase in the TBC, thereby inferring the past-service conditions or the remaining useful life of the component or the TBC.

  15. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
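
    A minimal sketch of this separation with scikit-learn's FastICA on synthetic stand-ins for the three mock maps (the mixing fractions and pixel distributions below are invented for illustration, not PLANCK or NANTEN data):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_pix = 10000
        cmb = rng.normal(size=n_pix)                     # Gaussian CMB stand-in
        dust = rng.lognormal(sigma=1.0, size=n_pix)      # non-Gaussian dust stand-in
        co = rng.exponential(size=n_pix) ** 2            # strongly non-Gaussian CO stand-in

        mixing = np.array([[1.0, 0.6, 0.9],              # 100 GHz (invented fractions)
                           [1.0, 0.8, 0.2],              # 143 GHz
                           [1.0, 2.0, 0.7]])             # 217 GHz
        maps = mixing @ np.vstack([cmb, dust, co])       # three mock sky maps

        ica = FastICA(n_components=3, random_state=0)
        sources = ica.fit_transform(maps.T).T            # recovered independent components
        # the highest-kurtosis source should track the CO-like map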

  16. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  17. Gender Differences of Occupational Stress Associated with Suicidal Ideation among South Korean Employees: The Kangbuk Samsung Health Study

    PubMed Central

    Kim, Sun-Young; Shin, Dong-Won; Oh, Kang-Seob; Kim, Eun-Jin; Park, Yang-Ri; Shin, Young-Chul; Lim, Se-Won

    2018-01-01

    Objective: In this study, the relationship between occupational stress and suicidal ideation was investigated, focusing on gender differences among Korean employees. Methods: Cross-sectional data for 53,969 workers were collected at Kangbuk Samsung Hospital health screening centers. Risk of suicidal ideation was assessed using a self-reported questionnaire examining suicidal ideation during the past year. Occupational stress was measured using 24 items of the Korean Occupational Stress Scale-Short Form (KOSS-SF). Logistic regression analysis was employed to estimate the odds ratios and 95% confidence intervals of the relationships between suicidal ideation and components of occupational stress. Results: In multivariable-adjusted models, all job stress subscales contributed to increased risk of suicidal ideation in males. Most subscales, except insufficient job control and organizational system, were risk factors of suicidal ideation in females. Further adjustments for depression markedly attenuated this relationship. However, the effects of insufficient job control and lack of reward on suicidal ideation remained significant in males, and interpersonal conflict remained significant in females. Conclusion: The results suggest that occupational stress plays a significant role in increasing risk of suicidal ideation through elevation of depressive symptoms. Gender differences in components of occupational stress associated with suicidal ideation were also observed. PMID:29475218

  18. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains.

    PubMed

    Woess, Claudia; Unterberger, Seraphin Hubert; Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm⁻¹ (an indicator for bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable at wavenumbers between 3000 cm⁻¹ and 2800 cm⁻¹. Raman spectra illustrated a similar picture with less ν2PO4³⁻ at 450 cm⁻¹ and ν4PO4³⁻ from 590 cm⁻¹ to 584 cm⁻¹, amide III at 1272 cm⁻¹ and protein CH2 deformation at 1446 cm⁻¹ in archaeological bone material. A semi-quantitative determination of various distributions of biomolecules by chemi-maps of reflection- and ATR- methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman-microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic-to-mineral ratio showed that the mineral content ratio increases, while the organic-to-mineral ratio decreases, with time. Cluster analyses of data from Raman microscopic imaging reconstructed histo-anatomical features in comparison to the light microscopic image and finally, by application of principal component analyses (PCA), it was possible to see a clear distinction between forensic and archaeological bone samples. Hence, the spectral characterization of inorganic and organic compounds by the aforementioned techniques, followed by analyses such as multivariate imaging analysis (MIA) and principal component analyses (PCA), appears to be suitable for the post-mortem interval (PMI) estimation of human skeletal remains.

  19. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains

    PubMed Central

    Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm⁻¹ (an indicator for bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable at wavenumbers between 3000 cm⁻¹ and 2800 cm⁻¹. Raman spectra illustrated a similar picture with less ν2PO4³⁻ at 450 cm⁻¹ and ν4PO4³⁻ from 590 cm⁻¹ to 584 cm⁻¹, amide III at 1272 cm⁻¹ and protein CH2 deformation at 1446 cm⁻¹ in archaeological bone material. A semi-quantitative determination of various distributions of biomolecules by chemi-maps of reflection- and ATR- methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman-microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic-to-mineral ratio showed that the mineral content ratio increases, while the organic-to-mineral ratio decreases, with time. Cluster analyses of data from Raman microscopic imaging reconstructed histo-anatomical features in comparison to the light microscopic image and finally, by application of principal component analyses (PCA), it was possible to see a clear distinction between forensic and archaeological bone samples. Hence, the spectral characterization of inorganic and organic compounds by the aforementioned techniques, followed by analyses such as multivariate imaging analysis (MIA) and principal component analyses (PCA), appears to be suitable for the post-mortem interval (PMI) estimation of human skeletal remains. PMID:28334006

  20. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
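
    The report's exact estimators are not reproduced here; as a sketch, under an assumed homogeneous Poisson failure model, a point estimate and two-sided confidence interval for a failure rate can be formed as follows (the failure count and test time are made up):

    ```python
    # Failure rate estimate and chi-square confidence interval for k failures
    # observed over T component-hours of thermal-vacuum testing (assumed model).
    from scipy.stats import chi2

    def failure_rate_ci(k, T, conf=0.90):
        alpha = 1.0 - conf
        lower = chi2.ppf(alpha / 2, 2 * k) / (2 * T) if k > 0 else 0.0
        upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / (2 * T)
        return k / T, (lower, upper)

    rate, (lo, hi) = failure_rate_ci(k=7, T=12_000.0)   # made-up test totals
    print(f"rate {rate:.2e}/h, 90% CI [{lo:.2e}, {hi:.2e}]")
    ```

    Comparing two groups of components then reduces to comparing their interval estimates or forming a rate ratio.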

  1. Methods for estimating aboveground biomass and its components for Douglas-fir and lodgepole pine trees

    Treesearch

    K.P. Poudel; H. Temesgen

    2016-01-01

    Estimating aboveground biomass and its components requires sound statistical formulation and evaluation. Using data collected from 55 destructively sampled trees in different parts of Oregon, we evaluated the performance of three groups of methods to estimate total aboveground biomass and (or) its components based on the bias and root mean squared error (RMSE) that...
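
    As a sketch, the two evaluation criteria named above reduce to a few lines for any candidate estimator; the measured and predicted weights here are invented:

    ```python
    # Bias and RMSE of predicted vs. destructively sampled biomass (toy values).
    import numpy as np

    measured  = np.array([212.0, 180.5, 95.2, 310.7])   # kg per tree, hypothetical
    predicted = np.array([205.3, 190.1, 99.8, 298.4])   # model output, hypothetical

    errors = predicted - measured
    print(f"bias = {errors.mean():.1f} kg, RMSE = {np.sqrt((errors**2).mean()):.1f} kg")
    ```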

  2. Comparison of Estimates from the 1995 National Household Education Survey (NHES:95). Working Paper Series.

    ERIC Educational Resources Information Center

    Kim, Kwang; Loomis, Laura S.; Collins, Mary A.

    This report compares selected estimates from the two components of the 1995 National Household Education Survey (NHES:95) with estimates from other survey data. The two different components of the NHES:95, the Adult Education (AE) and the Early Childhood Program Participation (ECPP) components, cover a variety of topics related to participation in…

  3. [Design Method Analysis and Performance Comparison of Wall Filter for Ultrasound Color Flow Imaging].

    PubMed

    Wang, Lutao; Xiao, Jun; Chai, Hua

    2015-08-01

    The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color flow imaging. Remaining clutter may bias the mean blood frequency estimation and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design processes of three classes of filters are reviewed and analyzed: infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of mean blood flow speed under steady clutter conditions, and the clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
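
    A minimal sketch of the Pol-Reg idea paired with the standard lag-one autocorrelation (Kasai) mean-frequency estimator; the ensemble length, clutter and blood Doppler frequencies below are invented for illustration:

    ```python
    # Illustrative sketch (not the paper's code): a polynomial-regression wall
    # filter plus the lag-one autocorrelation (Kasai) mean-frequency estimator.
    import numpy as np

    def polreg_filter(x, order=2):
        """Suppress clutter by subtracting a least-squares polynomial fit
        over the slow-time ensemble."""
        V = np.vander(np.arange(x.size), order + 1).astype(float)
        coef, *_ = np.linalg.lstsq(V, x, rcond=None)
        return x - V @ coef

    def mean_frequency(x):
        """Kasai estimator: mean Doppler frequency in cycles per sample."""
        return np.angle(np.sum(np.conj(x[:-1]) * x[1:])) / (2 * np.pi)

    rng = np.random.default_rng(1)
    n = np.arange(16)                                  # ensemble size of 16
    clutter = 50 * np.exp(2j * np.pi * 0.002 * n)      # strong, nearly stationary
    blood = np.exp(2j * np.pi * 0.15 * n)              # weak, faster flow
    x = clutter + blood + 0.1 * (rng.standard_normal(16)
                                 + 1j * rng.standard_normal(16))

    print(mean_frequency(x))                   # biased toward 0 by clutter
    print(mean_frequency(polreg_filter(x)))    # close to 0.15 after filtering
    ```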

  4. 2-D Myocardial Deformation Imaging Based on RF-Based Nonrigid Image Registration.

    PubMed

    Chakraborty, Bidisha; Liu, Zhi; Heyde, Brecht; Luo, Jianwen; D'hooge, Jan

    2018-06-01

    Myocardial deformation imaging is a well-established echocardiographic technique for the assessment of myocardial function. Although some solutions make use of speckle tracking of the reconstructed B-mode images, others apply block matching (BM) on the underlying radio frequency (RF) data in order to increase sensitivity to small interframe motion and deformation. However, for both approaches, lateral motion estimation remains a challenge due to the relatively poor lateral resolution of the ultrasound image in combination with the lack of phase information in this direction. To address this, nonrigid image registration (NRIR) of B-mode images has previously been proposed as an attractive solution; however, the advantages of RF-based tracking are thereby lost. The aim of this paper was, therefore, to develop an NRIR motion estimator adapted to RF data sets. The accuracy of this estimator was quantified using synthetic data and was contrasted against a state-of-the-art BM solution. The results show that RF-based NRIR outperforms BM in terms of tracking accuracy, particularly, as hypothesized, in the lateral direction. Finally, this RF-based NRIR algorithm was applied clinically, illustrating its ability to estimate both in-plane velocity components in vivo.

  5. Linking soil type and rainfall characteristics towards estimation of surface evaporative capacitance

    NASA Astrophysics Data System (ADS)

    Or, D.; Bickel, S.; Lehmann, P.

    2017-12-01

    Separation of evapotranspiration (ET) into evaporation (E) and transpiration (T) components for attribution of surface fluxes or for assessment of isotope fractionation in groundwater remains a challenge. Regional estimates of soil evaporation often rely on plant-based (Penman-Monteith) ET estimates where E is obtained as a residual or a fraction of potential evaporation. We propose a novel method for estimating E from soil-specific properties and regional rainfall characteristics, considering concurrent internal drainage that shelters soil water from evaporation. A soil-dependent evaporative characteristic length defines a depth below which soil water cannot be pulled to the surface by capillarity; this depth determines the maximal soil evaporative capacitance (SEC). The SEC is recharged by rainfall and subsequently emptied by competition between drainage and surface evaporation (considering canopy interception evaporation). We show that E is strongly dependent on rainfall characteristics (mean annual rainfall, number of storms) and soil textural type, with up to 50% of rainfall lost to evaporation in loamy soil. The SEC concept applied to different soil types and climatic regions offers direct bounds on regional surface evaporation independent of plant-based parameterization or energy balance calculations.
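
    A toy bucket sketch of the SEC idea as I read it; the capacitance, competing rates, and storm depths are illustrative assumptions, not the authors' parameterization:

    ```python
    # Each storm recharges a finite near-surface store up to the capacitance;
    # the store then empties through competing evaporation and drainage.
    def evaporated_fraction(storms_mm, sec_mm=20.0, e_rate=2.0, d_rate=3.0):
        evaporated = 0.0
        for storm in storms_mm:
            store = min(storm, sec_mm)   # recharge up to the capacitance;
                                         # the excess drains beyond capillary reach
            # stored water is split between evaporation and internal drainage
            # in proportion to their competing rates
            evaporated += store * e_rate / (e_rate + d_rate)
        return evaporated / sum(storms_mm)

    print(evaporated_fraction([5, 12, 30, 8]))   # fraction of rainfall evaporated
    ```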

  6. Assessing flood damage to agriculture using color infrared aerial photography

    USGS Publications Warehouse

    Anderson, William H.

    1977-01-01

    The rationale for using color-infrared (CIR) film to assist in assessing flood damage to agriculture is demonstrated using examples prepared from photographs acquired of the 1975 flood in the Red River Valley of North Dakota and Minnesota. Information concerning flood inundation boundaries, crop damage, soil erosion, sedimentation, and other similar general features and conditions was obtained through the interpretation of CIR aerial photographs. CIR aerial photographs can be used to help improve the estimates of potential remaining production on a field by field basis, owing to the increased accuracy obtained in determining the area component of crop production as compared to conventional ground sketching methods.

  7. Sexual function in elderly women: a review of current literature.

    PubMed

    Ambler, Dana R; Bieber, Eric J; Diamond, Michael P

    2012-01-01

    Although sexuality remains an important component of emotional and physical intimacy that most men and women desire to experience throughout their lives, sexual dysfunction in women is a problem that is not well studied. The prevalence of sexual dysfunction among all women is estimated to be between 25% and 63%; the prevalence in postmenopausal women is even higher, with rates between 68% and 86.5%. Increasing recognition of this common problem and future research in this field may alter perceptions about sexuality, dismiss taboo and incorrect thoughts on sexual dysfunction, and spark better management for patients, allowing them to live more enjoyable lives.

  8. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  9. Composition of Impact Melt Debris from the Eltanin Impact Strewn Field, Bellingshausen Sea

    NASA Technical Reports Server (NTRS)

    Kyte, Frank T.

    2002-01-01

    The impact of the km-sized Eltanin asteroid into the Bellingshausen Sea produced mm- to cm-sized vesicular impact melt-rock particles found in sediment cores across a large area of the ocean floor. These particles are composed mainly of olivine and glass with minor chromite and traces of NiFe-sulfides. Some particles have inclusions of unmelted mineral and rock fragments from the precursor asteroid. Although all samples of melt rock examined have experienced significant alteration since their deposition in the late Pliocene, a significant portion of these particles have interiors that remain pristine and can be used to estimate the bulk composition of the impact melt. The bulk composition of the melt-rock particles is similar to the composition of basaltic meteorites such as howardites or mesosiderite silicates, with a contribution from seawater salts and a siderophile-rich component. There is no evidence that the Eltanin impact melt contains a significant terrestrial silicate component that might have been incorporated by mixing of the projectile with oceanic crust. If terrestrial silicates were incorporated into the melt, then their contribution must be much less than 10 wt%. Since excess K, Na, and Cl are not present in seawater proportions, uptake of these elements into the melt must have been greatest for K and least for Cl, producing a K/Cl ratio about 4 times that in seawater. After correcting for the seawater component, the bulk composition of the Eltanin impact melt provides the best estimate of the bulk composition of the Eltanin asteroid. Excess Fe in the impact melt, relative to that in howardites, must be from a significant metal phase in the parent asteroid. Although the estimated Fe:Ni:Ir ratios (8:1:4 x 10(exp -5)) are similar to those in mesosiderite metal nodules (10:1:6 x 10(exp -5)), excesses of Co and Au by factors of about 2 and 10, respectively, imply a metal component distinct from that in typical mesosiderites. An alternative interpretation, that siderophiles have been highly fractionated from a mesosiderite source, would require loss of about 90% of the original metal from the impact melt and the sediments, and is unsupported by any observational data. More likely, the excess Fe in the melt rocks is representative of the amount of metal in the impacting asteroid, which is estimated to be 4 +/- 1 wt%.

  10. Chlorophyll induced fluorescence retrieved from GOME2 for improving gross primary productivity estimates of vegetation

    NASA Astrophysics Data System (ADS)

    van Leth, Thomas C.; Verstraeten, Willem W.; Sanders, Abram F. J.

    2014-05-01

    Mapping terrestrial chlorophyll fluorescence is a crucial activity to obtain information on the functional status of vegetation and to improve estimates of light-use efficiency (LUE) and gross primary productivity (GPP). GPP quantifies carbon fixation by plant ecosystems and is therefore an important parameter for budgeting terrestrial carbon cycles. Satellite remote sensing offers an excellent tool for investigating GPP in a spatially explicit fashion across different scales of observation. The GPP estimates, however, still remain largely uncertain due to biotic and abiotic factors that influence plant production. Sun-induced fluorescence has the ability to enhance our knowledge of how environmentally induced changes affect the LUE. This can be linked to optically derived remote sensing parameters, thereby reducing the uncertainty in GPP estimates. Satellite measurements provide a relatively new perspective on global sun-induced fluorescence, enabling us to quantify spatial distributions and changes over time. Techniques have recently been developed to retrieve fluorescence emissions from hyperspectral satellite measurements. We use data from the Global Ozone Monitoring Experiment 2 (GOME2) to infer terrestrial fluorescence. The spectral signatures of three basic components (atmospheric absorption, surface reflectance, and fluorescence radiance) are separated using reference measurements of non-fluorescent surfaces (deserts, deep oceans and ice) to solve for the atmospheric absorption. An empirically based principal component analysis (PCA) approach is applied, similar to that of Joiner et al. (2013, ACP). Here we show our first global maps of the GOME2 retrievals of chlorophyll fluorescence. First results indicate fluorescence distributions similar to those obtained by GOSAT and GOME2 as reported by Joiner et al. (2013, ACP), although we find slightly higher values. In view of optimizing the fluorescence retrieval, we will show the effect of the reference selection procedure on the retrieval product.

  11. TWO DISTINCT-ABSORPTION X-RAY COMPONENTS FROM TYPE IIn SUPERNOVAE: EVIDENCE FOR ASPHERICITY IN THE CIRCUMSTELLAR MEDIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuda, Satoru; Tsuboi, Yohko; Maeda, Keiichi

    2016-12-01

    We present multi-epoch X-ray spectral observations of three Type IIn supernovae (SNe), SN 2005kd, SN 2006jd, and SN 2010jl, acquired with Chandra, XMM-Newton, Suzaku, and Swift. Previous extensive X-ray studies of SN 2010jl have revealed that X-ray spectra are dominated by thermal emission, which likely arises from a hot plasma heated by a forward shock propagating into a massive circumstellar medium (CSM). Interestingly, an additional soft X-ray component was required to reproduce the spectra at a period of ∼1–2 years after the SN explosion. Although this component is likely associated with the SN, its origin remained an open question. We find a similar, additional soft X-ray component from the other two SNe IIn as well. Given this finding, we present a new interpretation for the origin of this component; it is thermal emission from a forward shock essentially identical to the hard X-ray component, but directly reaches us from a void of the dense CSM. Namely, the hard and soft components are responsible for the heavily and moderately absorbed components, respectively. The co-existence of the two components with distinct absorptions as well as the delayed emergence of the moderately absorbed X-ray component could be evidence for asphericity of the CSM. We show that the X-ray spectral evolution can be qualitatively explained by considering a torus-like geometry for the dense CSM. Based on our X-ray spectral analyses, we estimate the radius of the torus-like CSM to be on the order of ∼5 × 10^16 cm.

  12. Voltage-Sensitive Fluorescence of Indocyanine Green in the Heart

    PubMed Central

    Martišienė, Irma; Mačianskienė, Regina; Treinys, Rimantas; Navalinskas, Antanas; Almanaitytė, Mantė; Karčiauskas, Dainius; Kučinskas, Audrius; Grigalevičiūtė, Ramunė; Zigmantaitė, Vilma; Benetis, Rimantas; Jurevičius, Jonas

    2016-01-01

    So far, the optical mapping of cardiac electrical signals using voltage-sensitive fluorescent dyes has only been performed in experimental studies because these dyes are not yet approved for clinical use. It was recently reported that the well-known and widely used fluorescent dye indocyanine green (ICG), which has FDA approval, exhibits voltage sensitivity in various tissues, thus raising hopes that electrical activity could be optically mapped in the clinic. The aim of this study was to explore the possibility of using ICG to monitor cardiac electrical activity. Optical mapping experiments were performed on Langendorff rabbit hearts stained with ICG and perfused with electromechanical uncouplers. The residual contraction force and electrical action potentials were recorded simultaneously. Our research confirms that ICG is a voltage-sensitive dye with a dual-component (fast and slow) response to membrane potential changes. The fast component of the optical signal (OS) can have opposite polarities in different parts of the fluorescence spectrum. In contrast, the polarity of the slow component remains the same throughout the entire spectrum. Separating the OS into these components revealed two different voltage-sensitivity mechanisms for ICG. The fast component of the OS appears to be electrochromic in nature, whereas the slow component may arise from the redistribution of the dye molecules within or around the membrane. Both components quite accurately track the time of electrical signal propagation, but only the fast component is suitable for estimating the shape and duration of action potentials. Because ICG has voltage-sensitive properties in the entire heart, we suggest that it can be used to monitor cardiac electrical behavior in the clinic. PMID:26840736

  13. Metabolic factors and genetic risk mediate familial type 2 diabetes risk in the Framingham Heart Study

    PubMed Central

    Raghavan, Sridharan; Porneala, Bianca; McKeown, Nicola; Fox, Caroline S.; Dupuis, Josée; Meigs, James B.

    2015-01-01

    Aims/hypothesis Type 2 diabetes mellitus in parents is a strong determinant of diabetes risk in their offspring. We hypothesise that offspring diabetes risk associated with parental diabetes is mediated by metabolic risk factors. Methods We studied initially non-diabetic participants of the Framingham Offspring Study. Metabolic risk was estimated using beta cell corrected insulin response (CIR), HOMA-IR or a count of metabolic syndrome components (metabolic syndrome score [MSS]). Dietary risk and physical activity were estimated using questionnaire responses. Genetic risk score (GRS) was estimated as the count of 62 type 2 diabetes risk alleles. The outcome of incident diabetes in offspring was examined across levels of parental diabetes exposure, accounting for sibling correlation and adjusting for age, sex and putative mediators. The proportion mediated was estimated by comparing regression coefficients for parental diabetes with (βadj) and without (βunadj) adjustments for CIR, HOMA-IR, MSS and GRS (percentage mediated = 1 – βadj / βunadj). Results Metabolic factors mediated 11% of offspring diabetes risk associated with parental diabetes, corresponding to a reduction in OR per diabetic parent from 2.13 to 1.96. GRS mediated 9% of risk, corresponding to a reduction in OR per diabetic parent from 2.13 to 1.99. Conclusions/interpretation Metabolic risk factors partially mediated offspring type 2 diabetes risk conferred by parental diabetes to a similar magnitude as genetic risk. However, a substantial proportion of offspring diabetes risk associated with parental diabetes remains unexplained by metabolic factors, genetic risk, diet and physical activity, suggesting that important familial influences on diabetes risk remain undiscovered. PMID:25619168
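
    The quoted mediation formula is straightforward to apply; a sketch using the odds ratios reported above, with logs taken so the betas sit on the regression (log-odds) scale:

    ```python
    # Proportion mediated = 1 - beta_adj / beta_unadj, on the log-odds scale.
    import math

    or_unadj, or_adj = 2.13, 1.96          # OR per diabetic parent, before and
    beta_unadj = math.log(or_unadj)        # after adjusting for metabolic factors
    beta_adj = math.log(or_adj)
    print(f"proportion mediated = {1 - beta_adj / beta_unadj:.0%}")   # ~11%
    ```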

  14. Soil Moisture or Groundwater?

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2017-12-01

    Partitioning the vertically integrated water storage variations estimated from GRACE satellite data into the components of which it is comprised requires independent information. Land surface models, which simulate the transfer and storage of moisture and energy at the land surface, are often used to estimate water storage variability of snow, surface water, and soil moisture. To obtain an estimate of changes in groundwater, the estimates of these storage components are removed from GRACE data. Biases in the modeled water storage components are therefore present in the residual groundwater estimate. In this study, we examine how soil moisture variability, estimated using the Community Land Model (CLM), depends on the vertical structure of the model. We then explore the implications of this uncertainty in the context of estimating groundwater variations using GRACE data.
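
    A sketch of the residual bookkeeping described above; all anomaly values (in cm of equivalent water height) are hypothetical placeholders, not GRACE or CLM output:

    ```python
    # Groundwater as the GRACE residual after removing modeled storage terms.
    import numpy as np

    grace_tws  = np.array([3.1, 2.4, -0.5, -2.2])   # total water storage anomaly
    snow       = np.array([1.0, 0.2, -0.3, -0.6])   # modeled components (e.g. CLM)
    surface_w  = np.array([0.4, 0.3,  0.1, -0.2])
    soil_moist = np.array([1.2, 1.1, -0.2, -0.9])

    groundwater = grace_tws - (snow + surface_w + soil_moist)
    print(groundwater)   # any soil-moisture bias propagates into this residual
    ```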

  15. Managing American Oystercatcher (Haematopus palliatus) population growth by targeting nesting season vital rates

    USGS Publications Warehouse

    Felton, Shilo K.; Hostetter, Nathan J.; Pollock, Kenneth H.; Simons, Theodore R.

    2017-01-01

    In populations of long-lived species, adult survival typically has a relatively high influence on population growth. From a management perspective, however, adult survival can be difficult to increase in some instances, so other component rates must be considered to reverse population declines. In North Carolina, USA, management to conserve the American Oystercatcher (Haematopus palliatus) targets component vital rates related to fecundity, specifically nest and chick survival. The effectiveness of such a management approach in North Carolina was assessed by creating a three-stage female-based deterministic matrix model. Isoclines were produced from the matrix model to evaluate the minimum nest and chick survival rates necessary to reverse population decline, assuming all other vital rates remained stable at mean values. Assuming accurate vital rates, breeding populations within North Carolina appear to be declining. To reverse this decline, combined nest and chick survival would need to increase from 0.14 to at least 0.27, a rate that appears to be attainable based on historical estimates. Results are heavily dependent on assumptions about other vital rates, most notably adult survival, revealing the need for accurate estimates of all vital rates to inform management actions. This approach provides valuable insights for evaluating conservation goals for species of concern.
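
    A sketch of this kind of three-stage, female-based matrix model; every vital rate here is a hypothetical placeholder rather than the study's estimate, with fecundity driven by the product of nest and chick survival:

    ```python
    # Population growth rate as the dominant eigenvalue of a stage matrix.
    import numpy as np

    def growth_rate(nest_surv, chick_surv, fledged_per_nest=1.5,
                    juv_surv=0.6, subad_surv=0.8, ad_surv=0.92):
        f = 0.5 * fledged_per_nest * nest_surv * chick_surv   # daughters per adult
        A = np.array([[0.0,      0.0,        f * ad_surv],    # stage 1: juveniles
                      [juv_surv, 0.0,        0.0        ],    # stage 2: subadults
                      [0.0,      subad_surv, ad_surv    ]])   # stage 3: adults
        return max(abs(np.linalg.eigvals(A)))                 # dominant eigenvalue

    # Scan combined nest*chick survival to locate the lambda = 1 isocline.
    for s in (0.14, 0.20, 0.27):
        print(f"combined survival {s:.2f} -> lambda = "
              f"{growth_rate(nest_surv=s**0.5, chick_surv=s**0.5):.3f}")
    ```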

  16. Translating Benzodiazepine Utilization Data into Meaningful Population Exposure: Integration of Two Metrics for Improved Reporting.

    PubMed

    Brandt, Jaden; Alkabanni, Wajd; Alessi-Severini, Silvia; Leong, Christine

    2018-04-04

    Drug utilization research on benzodiazepines remains important for measuring trends in consumption within and across borders over time, for the sake of monitoring prescribing patterns and identifying potential population safety concerns. The defined daily dose (DDD) system of the World Health Organization (WHO) remains the internationally accepted standard for measuring drug consumption; however, beyond consumption, DDD-based results are difficult to interpret when individual agents are compared with one another or are pooled into a total class-based estimate. The diazepam milligram equivalent (DME) system provides approximate conversions between benzodiazepines and Z-drugs (i.e. zopiclone, zolpidem, zaleplon) based on their pharmacologic potency. Despite this, conversion of total dispensed benzodiazepine quantities into DME values retains diazepam milligrams as the unit of measurement, which is also impractical for population-level interpretation. In this paper, we propose the use of an integrated DME-DDD metric to obviate the limitations encountered when the component metrics are used in isolation. Through a case example, we demonstrate a significant change in results between the DDD and DME-DDD methods. Unlike the DDD method, the integrated DME-DDD metric offers estimation of population pharmacologic exposure, and enables superior interpretation of drug utilization results, especially for drug-class summary reporting.
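
    A sketch of the integrated metric as described: dispensed quantities are first converted to diazepam milligram equivalents (DME), and the total is then expressed in diazepam DDDs. The potency factors and dispensed quantities below are illustrative assumptions, not values from the paper or the WHO ATC/DDD index:

    ```python
    # Integrated DME-DDD metric: milligrams -> diazepam mg equivalents -> DDDs.
    DME_PER_MG = {"diazepam": 1.0, "lorazepam": 10.0, "zopiclone": 0.67}  # assumed
    DIAZEPAM_DDD_MG = 10.0          # WHO DDD for oral diazepam

    dispensed_mg = {"diazepam": 50_000, "lorazepam": 8_000, "zopiclone": 90_000}

    total_dme = sum(mg * DME_PER_MG[drug] for drug, mg in dispensed_mg.items())
    print(f"class-level consumption: {total_dme / DIAZEPAM_DDD_MG:,.0f} DME-DDDs")
    ```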

  17. Estimation of fatigue life using electromechanical impedance technique

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Soh, Chee Kiong

    2010-04-01

    Fatigue-induced damage is often progressive and gradual in nature. Structures subjected to a large number of fatigue load cycles will encounter the process of progressive crack initiation, propagation and finally fracture. Monitoring of structural health, especially for critical components, is therefore essential for early detection of potentially harmful cracks. The recent advent of smart materials such as piezo-impedance transducers adopting the electromechanical impedance (EMI) technique and the wave propagation technique has proven effective for incipient damage detection and characterization. Exceptional advantages such as autonomous, real-time, online and remote monitoring may provide a cost-effective alternative to conventional structural health monitoring (SHM) techniques. In this study, the main focus is to investigate the feasibility of characterizing a propagating fatigue crack in a structure using the EMI technique, as well as estimating its remaining fatigue life using the linear elastic fracture mechanics (LEFM) approach. Uniaxial cyclic tensile load is applied on a lab-sized aluminum beam up to failure. The progressive shift in admittance signatures measured by the piezo-impedance transducer (PZT patch) with increasing loading cycles reflects the effectiveness of the EMI technique in tracing the process of fatigue damage progression. With the use of LEFM, prediction of the remaining life of the structure at different cycles of loading is possible.
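
    A sketch of the LEFM step, assuming a Paris-law crack growth model for an edge crack under constant-amplitude loading; the material constants, geometry factor, and stress range are illustrative, not the study's values:

    ```python
    # Remaining fatigue life by integrating the Paris law (illustrative values).
    import numpy as np

    def paris_cycles(a0, ac, C=1e-11, m=3.0, dS=80.0, Y=1.12):
        """Integrate da/dN = C*(dK)^m from current crack depth a0 to critical ac.
        a in metres, dS in MPa, dK in MPa*sqrt(m); C is for those units."""
        a = np.linspace(a0, ac, 20_000)
        dK = Y * dS * np.sqrt(np.pi * a)          # stress intensity factor range
        return np.trapz(1.0 / (C * dK**m), a)     # cycles to grow from a0 to ac

    print(f"remaining life ~ {paris_cycles(a0=1e-3, ac=12e-3):.3g} cycles")
    ```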

  18. Metropolitan migration and population growth in selected developing countries.

    PubMed

    1983-01-01

    The purpose of this article is to estimate the components of metropolitan population growth in selected developing countries during the 1960-1970 period. The study examines population growth in 26 cities: 5 are in Africa, 8 in Asia, and 13 in Latin America, using data from national census publications. These cities in general are the political capitals of their countries, but some additional large cities were selected in Brazil, Mexico, and South Africa. All cities, at the beginning of the 1960-1970 decade, had over 500,000 population; Accra, the only exception, reached this population level during the 1960s. Some cities had over 4 million residents in 1970. Net migration contributed about 37% to total metropolitan population growth; the remainder of the growth is attributable to natural increase. Migration has a much stronger impact on metropolitan growth than suggested by the above figure: 1) several metropolitan areas, for various reasons, are unlikely to receive many migrants; without those cities, the share of metropolitan growth from net migration is 44%. 2) Estimates of the natural increase of migrants after their arrival in the metropolitan areas, when added to migration itself, change the total contribution of migration to 49% in some metropolitan areas. 3) Even where net migration contributes a smaller proportion to metropolitan growth than natural increase, the rates of net migration are generally high and should be viewed in the context of rapid metropolitan population growth from natural increase alone. Finally, the paper also compares the components of metropolitan growth with the components of growth in the remaining urban areas. The results show that the metropolitan areas, in general, grow faster than the remaining urban areas, and that this more rapid growth is mostly due to a higher rate of net migration. Given the significance of migration for metropolitan growth, further investigations of the effects of these migration streams, particularly with respect to in-migration and out-migration, would greatly benefit understanding of the detailed and interconnected process of population growth, migration, employment and social welfare of city residents.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

    Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data, from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow each matched subsequence and combining them, with the assigned weighting factors, to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data was collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
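
    A sketch of the three steps on a one-dimensional signal; the window lengths follow the abstract, while the matching criterion (Euclidean distance), the number of retained matches, and the synthetic respiratory trace are my assumptions:

    ```python
    # Best-match subsequence forecasting of a 1-D surrogate signal (synthetic).
    import numpy as np

    def predict_ahead(signal, window=100, horizon=50, n_match=5):
        history, query = signal[:-window], signal[-window:]
        # step 1: distance from the query to every candidate training subsequence
        cands = np.array([history[i:i + window]
                          for i in range(len(history) - window - horizon)])
        d = np.linalg.norm(cands - query, axis=1)
        best = np.argsort(d)[:n_match]
        # step 2: relative (similarity-based) weighting of the best matches
        w = 1.0 / (d[best] + 1e-12)
        w /= w.sum()
        # step 3: combine the parts that follow each matched subsequence
        futures = np.array([signal[i + window:i + window + horizon] for i in best])
        return w @ futures

    rng = np.random.default_rng(5)
    t = np.arange(2000)
    resp = np.sin(2 * np.pi * t / 120) + 0.05 * rng.standard_normal(t.size)
    print(predict_ahead(resp)[:5])     # first samples of the 50-step forecast
    ```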

  20. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    PubMed

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval-analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  1. Local Regularity Analysis with Wavelet Transform in Gear Tooth Failure Detection

    NASA Astrophysics Data System (ADS)

    Nissilä, Juhani

    2017-09-01

    Diagnosing gear tooth and bearing failures in industrial power transmission situations has been studied extensively, but challenges still remain. This study aims to look at the problem from a more theoretical perspective. Our goal is to find out if the local regularity, i.e. smoothness, of the measured signal can be estimated from the vibrations of epicyclic gearboxes and if the regularity can be linked to the meshing events of the gear teeth. Previously it has been shown that the decreasing local regularity of measured acceleration signals can reveal inner race faults in slowly rotating bearings. The local regularity is estimated from the modulus maxima ridges of the signal's wavelet transform. In this study, the measurements come from the epicyclic gearboxes of the Kelukoski water power station (WPS). The very stable rotational speed of the WPS makes it possible to deduce that the gear mesh frequencies of the WPS and a frequency related to the rotation of the turbine blades are the most significant components in the spectra of the estimated local regularity signals.

  2. Analysis of identification of digital images from a map of cosmic microwaves

    NASA Astrophysics Data System (ADS)

    Skeivalas, J.; Turla, V.; Jurevicius, M.; Viselga, G.

    2018-04-01

    This paper discusses identification of digital images from the cosmic microwave background radiation map formed from the data of the European Space Agency "Planck" telescope, by applying covariance functions and wavelet theory. The estimates of covariance functions of two digital images or single images are calculated according to the random functions formed from the digital images in the form of pixel vectors. The estimates of pixel vectors are formed by expanding the pixel arrays of the digital images into a single vector. When the scale of a digital image is varied, the frequencies of single-pixel color waves remain constant and the procedure for calculation of covariance functions is not affected. For identification of the images, the RGB format spectrum has been applied. The impact of RGB spectrum components and the color tensor on the estimates of covariance functions was analyzed. The identity of digital images is assessed according to the changes in the values of the correlation coefficients in a certain range of values by applying the developed computer program.
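
    A sketch of the underlying identity measure, assuming two single-channel images flattened into pixel vectors; the arrays are random stand-ins for real image data:

    ```python
    # Correlation coefficient of two equal-size images treated as pixel vectors.
    import numpy as np

    def image_correlation(img_a, img_b):
        a, b = img_a.ravel().astype(float), img_b.ravel().astype(float)
        a -= a.mean()
        b -= b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    rng = np.random.default_rng(6)
    base = rng.random((64, 64))
    noisy = base + 0.1 * rng.standard_normal((64, 64))
    print(image_correlation(base, noisy))   # near 1 for near-identical images
    ```

    In an RGB setting the same measure would be applied per channel or to the concatenated channel vectors.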

  3. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    PubMed Central

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  4. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    PubMed

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  6. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
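
    A sketch of the discretized case, where curves observed on a common time grid make functional PCA reduce to ordinary PCA of the curve matrix; the synthetic eGFR trajectories below stand in for real recipient data:

    ```python
    # Discretized FPCA of synthetic eGFR curves (stand-ins for clinical data).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    months = np.linspace(1, 36, 24)                 # common follow-up grid
    baselines = rng.normal(0.0, 8.0, 50)            # per-patient offsets
    declines = rng.gamma(2.0, 0.15, 50)             # per-patient slopes
    curves = np.array([60 + b - r * months + 3 * rng.standard_normal(months.size)
                       for b, r in zip(baselines, declines)])

    fpca = PCA(n_components=2).fit(curves)
    scores = fpca.transform(curves)                 # per-patient FPC scores
    print(fpca.explained_variance_ratio_)           # major sources of variation
    # Clustering or rank-ordering these scores flags abnormal trajectories, and
    # a truncated reconstruction (mean + leading components) imputes missing values.
    ```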

  7. A regression-adjusted approach can estimate competing biomass

    Treesearch

    James H. Miller

    1983-01-01

    A method is presented for estimating above-ground herbaceous and woody biomass on competition research plots. On a set of destructively-sampled plots, an ocular estimate of biomass by vegetative component is first made, after which vegetation is clipped, dried, and weighed. Linear regressions are then calculated for each component between estimated and actual weights...
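
    A sketch of the adjustment step described above, with toy numbers; a separate calibration would be fitted for each vegetative component:

    ```python
    # Calibrate ocular estimates against clipped-and-weighed values, then
    # adjust new ocular estimates with the fitted line.
    import numpy as np

    ocular = np.array([120.0, 85.0, 160.0, 60.0, 140.0])   # estimated, g/plot
    actual = np.array([110.2, 90.5, 150.8, 70.1, 130.9])   # clipped and dried

    slope, intercept = np.polyfit(ocular, actual, 1)       # one fit per component
    new_ocular = np.array([100.0, 75.0])
    print(intercept + slope * new_ocular)                  # adjusted biomass
    ```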

  8. Ethiopian Genetic Diversity Reveals Linguistic Stratification and Complex Influences on the Ethiopian Gene Pool

    PubMed Central

    Pagani, Luca; Kivisild, Toomas; Tarekegn, Ayele; Ekong, Rosemary; Plaster, Chris; Gallego Romero, Irene; Ayub, Qasim; Mehdi, S. Qasim; Thomas, Mark G.; Luiselli, Donata; Bekele, Endashaw; Bradman, Neil; Balding, David J.; Tyler-Smith, Chris

    2012-01-01

    Humans and their ancestors have traversed the Ethiopian landscape for millions of years, and present-day Ethiopians show great cultural, linguistic, and historical diversity, which makes them essential for understanding African variability and human origins. We genotyped 235 individuals from ten Ethiopian and two neighboring (South Sudanese and Somali) populations on an Illumina Omni 1M chip. Genotypes were compared with published data from several African and non-African populations. Principal-component and STRUCTURE-like analyses confirmed substantial genetic diversity both within and between populations, and revealed a match between genetic data and linguistic affiliation. Using comparisons with African and non-African reference samples in 40-SNP genomic windows, we identified “African” and “non-African” haplotypic components for each Ethiopian individual. The non-African component, which includes the SLC24A5 allele associated with light skin pigmentation in Europeans, may represent gene flow into Africa, which we estimate to have occurred ∼3 thousand years ago (kya). The non-African component was found to be more similar to populations inhabiting the Levant rather than the Arabian Peninsula, but the principal route for the expansion out of Africa ∼60 kya remains unresolved. Linkage-disequilibrium decay with genomic distance was less rapid in both the whole genome and the African component than in southern African samples, suggesting a less ancient history for Ethiopian populations. PMID:22726845

  9. Predicting the remaining service life of concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, J.F.

    1991-11-01

    Nuclear power plants are currently providing about 17 percent of U.S. electricity, and many of these plants are approaching their licensed life of 40 years. The U.S. Nuclear Regulatory Commission and the Department of Energy's Oak Ridge National Laboratory are carrying out a program to develop a methodology for assessing the remaining safe life of the concrete components and structures in nuclear power plants. This program has the overall objective of identifying potential structural safety issues, as well as acceptance criteria, for use in evaluations of nuclear power plants for continued service. The National Institute of Standards and Technology (NIST) is contributing to this program by identifying and analyzing methods for predicting the remaining life of in-service concrete materials. This report examines the basis for predicting the remaining service lives of concrete materials of nuclear power facilities. Methods for predicting the service life of new and in-service concrete materials are analyzed. These methods include (1) estimates based on experience, (2) comparison of performance, (3) accelerated testing, (4) stochastic methods, and (5) mathematical modeling. New approaches for predicting the remaining service lives of concrete materials are proposed and recommendations for their further development given. Degradation processes are discussed based on considerations of their mechanisms, likelihood of occurrence, manifestations, and detection. They include corrosion, sulfate attack, alkali-aggregate reactions, frost attack, leaching, radiation, salt crystallization, and microbiological attack.

  10. Short-term associations of cause-specific emergency hospitalizations and particulate matter chemical components in Hong Kong.

    PubMed

    Pun, Vivian Chit; Yu, Ignatius Tak-Sun; Qiu, Hong; Ho, Kin-Fai; Sun, Zhiwei; Louie, Peter K K; Wong, Tze Wai; Tian, Linwei

    2014-05-01

    Despite an increasing number of recent studies, the overall epidemiologic evidence associating specific particulate matter chemical components with health outcomes has been mixed. The links between components and hospitalizations have rarely been examined in Asia. We estimated associations between exposures to 18 chemical components of particulate matter with aerodynamic diameter less than 10 μm (PM10) and daily emergency cardiorespiratory hospitalizations in Hong Kong, China, between 2001 and 2007. Carbonaceous particulate matter, sulfate, nitrate, and ammonium accounted for two-thirds of the PM10 mass. After adjustment for time-varying confounders, a 3.4-μg/m(3) increment in 2-day moving average of same-day and previous-day nitrate concentrations was associated with the largest increase of 1.32% (95% confidence interval: 0.73, 1.92) in cardiovascular hospitalizations; elevation in manganese level (0.02 μg/m(3)) was linked to a 0.91% (95% confidence interval: 0.19, 1.64) increase in respiratory hospitalizations. Upon further adjustment for gaseous copollutants, nitrate, sodium ion, chloride ion, magnesium, and nickel remained significantly associated with cardiovascular hospitalizations, whereas sodium ion, aluminum, and magnesium, components abundantly found in coarser PM10, were associated with respiratory hospitalizations. Most positive links were seen during the cold season. These findings lend support to the growing body of literature concerning the health associations of particulate matter composition and provide important insight into the differential health risks of components found in fine and coarse modes of PM10.

  11. Quantitative Analysis of Hepatitis C NS5A Viral Protein Dynamics on the ER Surface.

    PubMed

    Knodel, Markus M; Nägel, Arne; Reiter, Sebastian; Vogel, Andreas; Targett-Adams, Paul; McLauchlan, John; Herrmann, Eva; Wittum, Gabriel

    2018-01-08

    Exploring biophysical properties of virus-encoded components and their requirement for virus replication is an exciting new area of interdisciplinary virological research. To date, spatial resolution has only rarely been analyzed in computational/biophysical descriptions of virus replication dynamics. However, it is widely acknowledged that intracellular spatial dependence is a crucial component of virus life cycles. The hepatitis C virus-encoded NS5A protein is an endoplasmic reticulum (ER)-anchored viral protein and an essential component of the virus replication machinery. Therefore, we simulate NS5A dynamics on realistic reconstructed, curved ER surfaces by means of surface partial differential equations (sPDE) upon unstructured grids. We match the in silico NS5A diffusion constant such that the NS5A sPDE simulation data reproduce experimental NS5A fluorescence recovery after photobleaching (FRAP) time series data. This parameter estimation yields the NS5A diffusion constant. Such parameters are needed for spatial models of HCV dynamics, which we are developing in parallel but which remain qualitative at this stage. Thus, our present study likely provides the first quantitative biophysical description of the movement of a viral component. Our spatio-temporally resolved ansatz paves new ways for understanding intricate spatially defined processes central to specific aspects of virus life cycles.

  12. Gas inflows towards the nucleus of NGC 1358

    NASA Astrophysics Data System (ADS)

    Schnorr-Müller, Allan; Storchi-Bergmann, Thaisa; Nagar, Neil M.; Robinson, Andrew; Lena, Davide

    2017-11-01

    We use optical spectra from the inner 1.8 × 2.5 kpc^2 of the Seyfert 2 galaxy NGC 1358, obtained with the GMOS integral field spectrograph on the Gemini South telescope at a spatial resolution of ≈ 165 pc, to assess the feeding and feedback processes in this nearby active galaxy. Five gaseous kinematical components are observed in the emission line profiles. One of the components is present in the entire field-of-view and we interpret it as due to gas rotating in the disc of the galaxy. Three of the remaining components we interpret as associated with active galactic nucleus (AGN) feedback: a compact unresolved outflow in the inner 1 arcsec and two gas clouds observed at opposite sides of the nucleus, which we propose have been ejected in a previous AGN burst. The disc component velocity field is strongly disturbed by a large-scale bar. The subtraction of a velocity model combining both rotation and bar flows reveals three kinematic nuclear spiral arms: two in inflow and one in outflow. We estimate the mass inflow rate in the inner 180 pc, obtaining Ṁ_in ≈ 1.5 × 10^-2 M⊙ yr^-1, about 160 times larger than the accretion rate necessary to power this AGN.

  13. Quantitative Analysis of Hepatitis C NS5A Viral Protein Dynamics on the ER Surface

    PubMed Central

    Nägel, Arne; Reiter, Sebastian; Vogel, Andreas; McLauchlan, John; Herrmann, Eva; Wittum, Gabriel

    2018-01-01

    Exploring biophysical properties of virus-encoded components and their requirement for virus replication is an exciting new area of interdisciplinary virological research. To date, spatial resolution has only rarely been analyzed in computational/biophysical descriptions of virus replication dynamics. However, it is widely acknowledged that intracellular spatial dependence is a crucial component of virus life cycles. The hepatitis C virus-encoded NS5A protein is an endoplasmic reticulum (ER)-anchored viral protein and an essential component of the virus replication machinery. Therefore, we simulate NS5A dynamics on realistic reconstructed, curved ER surfaces by means of surface partial differential equations (sPDE) upon unstructured grids. We match the in silico NS5A diffusion constant such that the NS5A sPDE simulation data reproduce experimental NS5A fluorescence recovery after photobleaching (FRAP) time series data. This parameter estimation yields the NS5A diffusion constant. Such parameters are needed for spatial models of HCV dynamics, which we are developing in parallel but which remain qualitative at this stage. Thus, our present study likely provides the first quantitative biophysical description of the movement of a viral component. Our spatio-temporally resolved ansatz paves new ways for understanding intricate spatially defined processes central to specific aspects of virus life cycles. PMID:29316722

  14. Structural damage diagnostics via wave propagation-based filtering techniques

    NASA Astrophysics Data System (ADS)

    Ayers, James T., III

    Structural health monitoring (SHM) of aerospace components is a rapidly emerging field, due in part to commercial and military transport vehicles remaining in operation beyond their designed life cycles. Damage detection strategies are sought that provide real-time information about the structure's integrity. One approach that has shown promise to accurately identify and quantify structural defects is based on guided ultrasonic wave (GUW) inspections, where low amplitude attenuation properties allow for long range and large specimen evaluation. One drawback to GUWs is that they exhibit a complex multi-modal response, such that each frequency corresponds to at least two excited modes, and thus intelligent signal processing is required for even the simplest of structures. In addition, GUWs are dispersive: the wave velocity is a function of frequency, and the shape of the wave packet changes over the spatial domain, requiring sophisticated detection algorithms. Moreover, existing damage quantification measures are typically formulated as a comparison of the damaged to the undamaged response, which has proven to be highly sensitive to changes in environment, and therefore often unreliable. In response to these challenges inherent to GUW inspections, this research develops techniques to locate and estimate the severity of damage. Specifically, a phase-gradient-based localization algorithm is introduced to identify the defect position independent of excitation frequency and damage size. Mode separation through the filtering technique is central to isolating and extracting single-mode components, such as reflected, converted, and transmitted modes that may arise from the incident wave impinging on a damage site. Spatially integrated single and multiple component mode coefficients are also formulated with the intent to better characterize wave reflections and conversions and to increase the signal-to-noise ratios. The techniques are applied to damaged isotropic finite element plate models and experimental data obtained from Scanning Laser Doppler Vibrometry tests. Numerical and experimental parametric studies are conducted, and the current strengths and weaknesses of the proposed approaches are discussed. In particular, limitations to the damage profiling characterization are shown for low ultrasonic frequency regimes, whereas the multiple component mode conversion coefficients provide excellent noise mitigation. Multiple component estimation relies on an experimental technique developed for the estimation of Lamb wave polarization using a 1D Laser Vibrometer. Lastly, suggestions are made to apply the techniques to more structurally complex geometries.

  15. Aerosol Direct Radiative Effects and Heating in the New Era of Active Satellite Observations

    NASA Astrophysics Data System (ADS)

    Matus, Alexander V.

    Atmospheric aerosols impact the global energy budget by scattering and absorbing solar radiation. Despite their impacts, aerosols remain a significant source of uncertainty in our ability to predict future climate. Multi-sensor observations from the A-Train satellite constellation provide valuable observational constraints necessary to reduce uncertainties in model simulations of aerosol direct effects. This study will discuss recent efforts to quantify aerosol direct effects globally and regionally using CloudSat's radiative fluxes and heating rates product. Improving upon previous techniques, this approach leverages the capability of CloudSat and CALIPSO to retrieve vertically resolved estimates of cloud and aerosol properties critical for accurately evaluating the radiative impacts of aerosols. We estimate the global annual mean aerosol direct effect to be -1.9 +/- 0.6 W/m2, which is in better agreement with previously published estimates from global models than previous satellite-based estimates. Detailed comparisons against a fully coupled simulation of the Community Earth System Model, however, reveal that this agreement on the global annual mean masks large regional discrepancies between modeled and observed estimates of aerosol direct effects related to model biases in cloud cover. A low bias in stratocumulus cloud cover over the southeastern Pacific Ocean, for example, leads to an overestimate of the radiative effects of marine aerosols. Stratocumulus clouds over the southeastern Atlantic Ocean can enhance aerosol absorption by 50%, allowing aerosol layers to remain self-lofted in an area of subsidence. Aerosol heating is found to peak at 0.6 +/- 0.3 K/day at an altitude of 4 km in September, when biomass burning reaches a maximum. Finally, the contributions of observed aerosol components are evaluated to estimate the direct radiative forcing of anthropogenic aerosols. Aerosol forcing is computed using satellite-based radiative kernels that describe the sensitivity of shortwave fluxes in response to aerosol optical depth. The direct radiative forcing is estimated to be -0.21 W/m2, with the largest contributions from pollution partially offset by a positive forcing from smoke aerosols. The results from these analyses provide new benchmarks on the global radiative effects of aerosols and offer new insights for improving future assessments.

  16. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressively Type-II censored sample. In the system, there are N subsystems consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
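
    A Monte Carlo sketch of the reliability being estimated, simplified to a single common stress draw per trial and exponential strength/stress laws in place of the generalized half-logistic distribution; it is a plausibility check, not the paper's estimator:

    ```python
    # P(system survives the stress) for N subsystems of M components each.
    import numpy as np

    def system_reliability(N=3, M=2, n_sim=100_000, seed=3):
        rng = np.random.default_rng(seed)
        strengths = rng.exponential(2.0, size=(n_sim, N, M))  # component strengths
        stress = rng.exponential(1.0, size=(n_sim, 1, 1))     # common stress draw
        # a subsystem survives if every one of its M components beats the stress;
        # the system survives if at least one (working or standby) subsystem does
        subsystem_ok = (strengths > stress).all(axis=2)
        return subsystem_ok.any(axis=1).mean()

    print(system_reliability())
    ```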

  17. Econometric estimation of country-specific hospital costs.

    PubMed

    Adam, Taghreed; Evans, David B; Murray, Christopher JL

    2003-02-26

    Information on the unit cost of inpatient and outpatient care is an essential element for costing, budgeting and economic-evaluation exercises. Many countries lack reliable estimates, however. WHO has recently undertaken an extensive effort to collect and collate data on the unit cost of hospitals and health centres from as many countries as possible; so far, data have been assembled from 49 countries, for various years during the period 1973-2000. The database covers a total of 2173 country-years of observations. Large gaps remain, however, particularly for developing countries. Although the long-term solution is that all countries perform their own costing studies, the question arises whether it is possible to predict unit costs for different countries in a standardized way for short-term use. The purpose of the work described in this paper, a modelling exercise, was to use the data collected across countries to predict unit costs in countries for which data are not yet available, with the appropriate uncertainty intervals. The model presented here forms part of a series of models used to estimate unit costs for the WHO-CHOICE project. The methods and the results of the model, however, may be used to predict a number of different types of country-specific unit costs, depending on the purpose of the exercise. They may be used, for instance, to estimate the costs per bed-day at different capacity levels; the "hotel" component of cost per bed-day; or unit costs net of particular components such as drugs. In addition to reporting estimates for selected countries, the paper shows that unit costs of hospitals vary within countries, sometimes by an order of magnitude. Basing cost-effectiveness studies or budgeting exercises on the results of a study of a single facility, or even a small group of facilities, is likely to be misleading.
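
    The prediction task reduces to a regression of unit costs on country-level covariates, with uncertainty intervals derived from the residual spread. The sketch below uses synthetic data and hypothetical covariates (log GDP per capita and log occupancy); the WHO-CHOICE models use their own specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the cross-country dataset (covariates are
    # hypothetical; the WHO-CHOICE models use their own specification).
    n = 200
    log_gdp = rng.normal(8.5, 1.0, n)            # log GDP per capita
    log_occupancy = rng.normal(4.0, 0.3, n)      # log bed-days per year
    log_cost = 0.8 * log_gdp + 0.2 * log_occupancy + rng.normal(0, 0.4, n)

    # OLS in log space: log(cost) = b0 + b1*log(gdp) + b2*log(occupancy).
    X = np.column_stack([np.ones(n), log_gdp, log_occupancy])
    beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)

    # Predict unit cost for a country with no data, with a crude interval
    # from the residual standard error.
    resid = log_cost - X @ beta
    se = resid.std(ddof=X.shape[1])
    x_new = np.array([1.0, 7.9, 4.1])
    pred = x_new @ beta
    print(np.exp(pred), np.exp(pred - 1.96 * se), np.exp(pred + 1.96 * se))
    ```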

  18. Automatic cardiac cycle determination directly from EEG-fMRI data by multi-scale peak detection method.

    PubMed

    Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy

    2018-03-31

    In simultaneous EEG-fMRI, identification of the period of the cardioballistic artifact (BCG) in EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each of the estimated cycles. The algorithm is shown to achieve high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust its threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The achieved high cycle detection accuracy of our algorithm without using the ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal.
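
    A compact sketch of the three-step detection idea on a synthetic BCG-like component, using standard SciPy tools (this is not the authors' implementation; the sampling rate, band edges, and minimum peak spacing are assumptions).

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks, periodogram

    rng = np.random.default_rng(12)
    fs = 250.0                                  # EEG sampling rate (assumed)
    t = np.arange(0, 60, 1 / fs)
    # Toy BCG-like independent component: periodic, non-sinusoidal, noisy.
    bcg = np.sin(2 * np.pi * 1.2 * t) ** 3 + 0.3 * rng.standard_normal(t.size)

    # 1) Fundamental frequency from the energy spectral density.
    f, pxx = periodogram(bcg, fs)
    mask = (f > 0.5) & (f < 3.0)                # plausible heart-rate range
    f0 = f[mask][np.argmax(pxx[mask])]

    # 2) Band-pass around the fundamental to isolate the cycle.
    b, a = butter(2, [0.5 * f0 / (fs / 2), 1.5 * f0 / (fs / 2)], btype="band")
    smooth = filtfilt(b, a, bcg)

    # 3) One peak per estimated cycle, enforcing a minimum spacing.
    peaks, _ = find_peaks(smooth, distance=int(0.6 * fs / f0))
    print(f"f0 = {f0:.2f} Hz, {peaks.size} cycles detected")
    ```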

  19. Variability and change of sea level and its components in the Indo-Pacific region during the altimetry era

    NASA Astrophysics Data System (ADS)

    Wu, Quran; Zhang, Xuebin; Church, John A.; Hu, Jianyu

    2017-03-01

    Previous studies have shown that regional sea level exhibits interannual and decadal variations associated with the modes of climate variability. A better understanding of those low-frequency sea level variations benefits the detection and attribution of climate change signals. Nonetheless, the contributions of thermosteric, halosteric, and mass sea level components to sea level variability and trend patterns remain unclear. By focusing on signals associated with dominant climate modes in the Indo-Pacific region, we estimate the interannual and decadal fingerprints and trend of each sea level component utilizing a multivariate linear regression of two adjoint-based ocean reanalyses. Sea level interannual, decadal, and trend patterns primarily come from thermosteric sea level (TSSL). Halosteric sea level (HSSL) is of regional importance in the Pacific Ocean on decadal time scales and dominates sea level trends in the northeast subtropical Pacific. Compensation between TSSL and HSSL is identified in their decadal variability and trends. The interannual and decadal variability of temperature generally peaks at subsurface depths around 100 m, but that of salinity tends to be surface-intensified. Decadal temperature and salinity signals extend deeper into the ocean in some regions than their interannual equivalents. Mass sea level (MassSL) is critical for the interannual and decadal variability of sea level over shelf seas. Inconsistencies exist in MassSL trend patterns among various estimates. This study highlights regions where multiple processes work together to control sea level variability and change. Further work is required to better understand the interaction of different processes in those regions.
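
    The fingerprint-and-trend estimation is, at each grid point, a multivariate linear regression of a sea level component onto climate-mode indices plus a trend term. The sketch below does this for one synthetic time series; the indices and coefficients are placeholders, not the reanalysis fields.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    nt = 276                                   # monthly samples (illustrative)
    t = np.arange(nt) / 12.0

    # Synthetic stand-ins for an interannual index, a decadal index, and one
    # grid point of thermosteric sea level (mm).
    enso = rng.standard_normal(nt)
    pdo = np.convolve(rng.standard_normal(nt), np.ones(25) / 25, "same")
    tssl = 3.0 * t + 8.0 * enso + 5.0 * pdo + rng.normal(0, 2, nt)

    # Multivariate linear regression: trend + interannual + decadal fingerprints.
    X = np.column_stack([np.ones(nt), t, enso, pdo])
    coef, *_ = np.linalg.lstsq(X, tssl, rcond=None)
    print(f"trend {coef[1]:.2f} mm/yr, ENSO {coef[2]:.2f}, PDO {coef[3]:.2f} mm/unit")
    ```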

  20. An approach for characterizing the distribution of shrubland ecosystem components as continuous fields as part of NLCD

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.; Meyer, Debbie; Granneman, Brian J.

    2013-01-01

    Characterizing and quantifying distributions of shrubland ecosystem components is one of the major challenges for monitoring shrubland vegetation cover change across the United States. A new approach has been developed to quantify shrubland components as fractional products within the National Land Cover Database (NLCD). This approach uses remote sensing data and regression tree models to estimate the fractional cover of shrubland ecosystem components. The approach consists of three major steps: field data collection, high resolution estimates of shrubland ecosystem components using WorldView-2 imagery, and coarse resolution estimates of these components across larger areas using Landsat imagery. This research seeks to explore this method to quantify shrubland ecosystem components as continuous fields in regions that contain wide-ranging shrubland ecosystems. Fractional cover of four shrubland ecosystem components, including bare ground, herbaceous, litter, and shrub, as well as shrub heights, were delineated in three ecological regions in Arizona, Florida, and Texas. Results show that estimates for most components have relatively small normalized root mean square errors and significant correlations with validation data in both Arizona and Texas. The distribution patterns of shrub height also show relatively high accuracies in these two areas. The fractional cover estimates of shrubland components, except for litter, are not well represented in the Florida site. The results suggest that this method has good potential to characterize shrubland ecosystem conditions effectively over perennial shrubland, although it is less effective in transitional shrubland. The fractional cover of shrub components as continuous fields could offer valuable information to quantify biomass and help improve thematic land cover classification in arid and semiarid areas.
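
    The coarse-resolution step amounts to fitting a tree-based regression from Landsat band reflectances to the WorldView-2-derived fractional cover used as training data. The sketch below substitutes a random forest for the regression tree models named above, on synthetic reflectances; the band count and the toy spectral relationship are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)

    # Synthetic training set: Landsat-like band reflectances as predictors and
    # high-resolution-derived shrub fractional cover (0-100%) as the response.
    bands = rng.uniform(0.05, 0.45, (500, 6))          # 6 spectral bands (toy values)
    shrub_pct = 100 * np.clip(1.2 * bands[:, 3] - 0.8 * bands[:, 2]
                              + rng.normal(0, 0.03, 500), 0, 1)

    # Random forest as a stand-in for the regression tree models in the paper.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(bands, shrub_pct)

    print(np.round(model.predict(bands[:5]), 1))
    ```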

  1. Estimation of Soil Moisture with L-band Multi-polarization Radar

    NASA Technical Reports Server (NTRS)

    Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.

    2004-01-01

    Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor (L-band multi-polarization and 40° incidence) configuration. This technique includes two steps. First, it decomposes the total backscattering signals into two components - the surface scattering components (the bare surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.

  2. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions $p_1, p_2, \ldots, p_m$ in the mixture density $f(x) = \sum_{i=1}^{m} p_i f_i(x)$ is often encountered in agricultural remote sensing problems, in which case the $p_i$'s usually represent crop proportions. In these remote sensing applications, the component densities $f_i(x)$ have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
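
    A minimal sketch of the MD idea: choose the proportion that minimizes a Cramer-von Mises-type distance between the empirical CDF and the mixture CDF. For brevity the two normal component densities are taken as known, which is a simplification of the estimation problem studied in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(4)

    # Two-component normal mixture with true crop proportion p = 0.3.
    x = np.where(rng.random(400) < 0.3, rng.normal(0, 1, 400), rng.normal(3, 1, 400))
    x.sort()
    ecdf = (np.arange(1, x.size + 1) - 0.5) / x.size

    def cvm_distance(p):
        """Cramer-von Mises distance between the empirical CDF and the
        mixture CDF p*F1 + (1-p)*F2 with known normal components."""
        F = p * norm.cdf(x, 0, 1) + (1 - p) * norm.cdf(x, 3, 1)
        return np.sum((F - ecdf) ** 2)

    res = minimize_scalar(cvm_distance, bounds=(0, 1), method="bounded")
    print(f"MD estimate of p: {res.x:.3f}")
    ```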

  3. Space Station Furnace Facility. Volume 3: Program cost estimate

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist, the number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous data (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.

  4. Sequential recruitment of study participants may inflate genetic heritability estimates.

    PubMed

    Noce, Damia; Gögele, Martin; Schwienbacher, Christine; Caprioli, Giulia; De Grandi, Alessandro; Foco, Luisa; Platzgummer, Stefan; Pramstaller, Peter P; Pattaro, Cristian

    2017-06-01

    After the success of genome-wide association studies to uncover complex trait loci, attempts to explain the remaining genetic heritability (h²) are mainly focused on unraveling rare variant associations and gene-gene or gene-environment interactions. Little attention is paid to the possibility that h² estimates are inflated as a consequence of the epidemiological study design. We studied the time series of 54 biochemical traits in 4373 individuals from the Cooperative Health Research In South Tyrol (CHRIS) study, a pedigree-based study enrolling ten participants/day over several years, with close relatives preferentially invited within the same day. We observed distributional changes of measured traits over time. We hypothesized that the combination of such changes with the pedigree structure might generate a shared-environment component with consequent h² inflation. We performed variance components (VC) h² estimation for all traits after accounting for the enrollment period in a linear mixed model (two-stage approach). Accounting for the enrollment period caused a median h² reduction of 4%. For 9 traits, the reduction was >20%. Results were confirmed by a Bayesian Markov chain Monte Carlo analysis with all VCs included at the same time (one-stage approach). The electrolytes were the traits most affected by the enrollment period. The h² inflation was independent of the h² magnitude, laboratory protocol changes, and the length of the enrollment period. The enrollment process may induce shared-environment effects even under very stringent and standardized operating procedures, causing h² inflation. Including the day of participation as a random effect is a sensible way to avoid overestimation.
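
    The proposed correction is simply a random intercept for the enrollment day in a linear mixed model, so that the day-level variance is absorbed before heritability is estimated. A sketch with synthetic data and statsmodels (the CHRIS analysis itself used pedigree-based VC models):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)

    # Synthetic data: 100 enrollment days, 10 participants/day, with a
    # day-level (shared-environment) effect plus individual noise.
    days = np.repeat(np.arange(100), 10)
    y = 0.5 * rng.standard_normal(100)[days] + rng.standard_normal(1000)
    df = pd.DataFrame({"trait": y, "day": days})

    # Random intercept for enrollment day; its variance component captures
    # the shared-environment effect that would otherwise inflate h2.
    fit = smf.mixedlm("trait ~ 1", df, groups=df["day"]).fit()
    print(fit.cov_re)        # day-level variance (should be near 0.25 here)
    print(fit.scale)         # residual variance (near 1)
    ```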

  5. Standing crop and aboveground biomass partitioning of a dwarf mangrove forest in Taylor River Slough, Florida

    USGS Publications Warehouse

    Coronado-Molina, C.; Day, J.W.; Reyes, E.; Perez, B.C.

    2004-01-01

    The structure and standing crop biomass of a dwarf mangrove forest, located in the salinity transition zone of Taylor River Slough in the Everglades National Park, were studied. Although the four mangrove species reported for Florida occurred at the study site, dwarf Rhizophora mangle trees dominated the forest. The structural characteristics of the mangrove forest were relatively simple: tree height varied from 0.9 to 1.2 meters, and tree density ranged from 7062 to 23 778 stems ha⁻¹. An allometric relationship was developed to estimate leaf, branch, prop root, and total aboveground biomass of dwarf Rhizophora mangle trees. Total aboveground biomass and its components were best estimated as a power function of the crown area times the number of prop roots as an independent variable (Y = B × X^0.5083). The allometric equation for each tree component was highly significant (p < 0.0001), with all r² values greater than 0.90. The allometric relationship was used to estimate total aboveground biomass, which ranged from 7.9 to 23.2 ton ha⁻¹. Rhizophora mangle contributed 85% of total standing crop biomass. Conocarpus erectus, Laguncularia racemosa, and Avicennia germinans contributed the remaining biomass. Average aboveground biomass allocation was 69% for prop roots, 25% for stem and branches, and 6% for leaves. This aboveground biomass partitioning pattern, which gives a major role to prop roots that have the potential to produce an extensive root system, may be an important biological strategy in response to low phosphorus availability and the relatively reduced soils that characterize mangrove forests in South Florida.
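
    Allometric power functions of this kind are conventionally fitted by ordinary least squares in log-log space, since ln(Y) = ln(B) + a ln(X) is linear. A sketch with synthetic data (the coefficient values below are placeholders, not the paper's fit):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic predictor X (crown area x number of prop roots) and biomass Y
    # following a power law Y = B * X^a with multiplicative error.
    X = rng.uniform(0.5, 10.0, 60)
    Y = 0.9 * X ** 0.51 * np.exp(rng.normal(0, 0.1, 60))

    # Fit the power function by ordinary least squares in log-log space:
    # ln(Y) = ln(B) + a*ln(X).
    a, lnB = np.polyfit(np.log(X), np.log(Y), 1)
    print(f"Y = {np.exp(lnB):.3f} * X^{a:.4f}")
    ```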

  6. EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology

    NASA Astrophysics Data System (ADS)

    Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt

    2017-04-01

    The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics, and the state itself. Model structural errors in the description of the dynamics are especially difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and identify the initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption made by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of the estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
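
    A minimal sketch of the closed-eye idea in one EnKF analysis step: the ensemble carries an augmented state [parameter, state], and during closed-eye periods the gain on the parameter is zeroed so only the state is guided by the measurement. The linear observation operator and toy numbers are assumptions for illustration, not the paper's Richards-equation setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def enkf_step(ens, obs, obs_var, closed_eye=False):
        """One EnKF analysis step on an augmented ensemble whose rows are
        [parameter, state]; only the state (column 1) is observed.
        During a closed-eye period the parameter update is suppressed and
        only the state is guided toward the measurement."""
        H = np.array([[0.0, 1.0]])                       # observe state only
        P = np.cov(ens.T)                                # ensemble covariance
        K = P @ H.T / (H @ P @ H.T + obs_var)            # Kalman gain (2x1)
        if closed_eye:
            K[0] = 0.0                                   # freeze the parameter
        perturbed = obs + rng.normal(0, np.sqrt(obs_var), ens.shape[0])
        return ens + (perturbed[:, None] - ens[:, [1]]) @ K.T

    # Toy usage: 50 members; the state is pulled to the measurement while the
    # parameter distribution is left untouched.
    ens = np.column_stack([rng.normal(0.5, 0.2, 50), rng.normal(0.0, 0.5, 50)])
    ens = enkf_step(ens, obs=1.0, obs_var=0.01, closed_eye=True)
    print(ens.mean(axis=0))
    ```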

  7. Dual-component video image analysis system (VIASCAN) as a predictor of beef carcass red meat yield percentage and for augmenting application of USDA yield grades.

    PubMed

    Cannell, R C; Tatum, J D; Belk, K E; Wise, J W; Clayton, R P; Smith, G C

    1999-11-01

    An improved ability to quantify differences in the fabrication yields of beef carcasses would facilitate the application of value-based marketing. This study was conducted to evaluate the ability of the Dual-Component Australian VIASCAN to 1) predict fabricated beef subprimal yields as a percentage of carcass weight at each of three fat-trim levels and 2) augment USDA yield grading, thereby improving accuracy of grade placement. Steer and heifer carcasses (n = 240) were evaluated using VIASCAN, as well as by USDA expert and online graders, before fabrication of carcasses to each of three fat-trim levels. Expert yield grade (YG), online YG, VIASCAN estimates, and VIASCAN estimated ribeye area used to augment actual and expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, and hot carcass weight), respectively, 1) accounted for 51, 37, 46, and 55% of the variation in fabricated yields of commodity-trimmed subprimals, 2) accounted for 74, 54, 66, and 75% of the variation in fabricated yields of closely trimmed subprimals, and 3) accounted for 74, 54, 71, and 75% of the variation in fabricated yields of very closely trimmed subprimals. The VIASCAN system predicted fabrication yields more accurately than current online yield grading and, when certain VIASCAN-measured traits were combined with some USDA yield grade factors in an augmentation system, the accuracy of cutability prediction was improved, at packing plant line speeds, to a level matching that of expert graders applying grades at a comfortable rate.

  8. Compatible estimators of the components of change for a rotating panel forest inventory design

    Treesearch

    Francis A. Roesch

    2007-01-01

    This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, William Scott

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  10. GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data

    NASA Astrophysics Data System (ADS)

    Ibrahim, Noor Akma; Suliadi

    2010-11-01

    In this paper we propose a GEE-smoothing spline approach for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.

  11. Quantifying the Complete Mineral Assemblages in Rocks of GUSEV Crater, Mars

    NASA Technical Reports Server (NTRS)

    McSween, H. Y.; Ruff, S. W.; Morris, R. V.; Gellert, R.

    2007-01-01

    Determining the complete mineralogy of Mars rocks by remote sensing has remained a challenge, because of inherent limitations in the minerals that can be detected and uncertainties in spectral modeling. A subset of the igneous rocks of Gusev crater provides a unique opportunity to determine modal mineralogy, because of limited alteration and the analytical capabilities of the Athena instrument package. Here we estimate the absolute (wt. %) abundances of Fe-bearing minerals from Moessbauer spectra (previously reported only as "areas for component subspectra"), and compare these results to the normative mineralogy calculated from APXS elemental analyses. We also test our preferred mineralogy by comparison of Mini-TES spectra with synthetic thermal emission spectra.

  12. ξTauri: a unique laboratory to study the dynamic interaction in a compact hierarchical quadruple system

    NASA Astrophysics Data System (ADS)

    Nemravová, J. A.; Harmanec, P.; Brož, M.; Vokrouhlický, D.; Mourard, D.; Hummel, C. A.; Cameron, C.; Matthews, J. M.; Bolton, C. T.; Božić, H.; Chini, R.; Dembsky, T.; Engle, S.; Farrington, C.; Grunhut, J. H.; Guenther, D. B.; Guinan, E. F.; Korčáková, D.; Koubský, P.; Kříček, R.; Kuschnig, R.; Mayer, P.; McCook, G. P.; Moffat, A. F. J.; Nardetto, N.; Prša, A.; Ribeiro, J.; Rowe, J.; Rucinski, S.; Škoda, P.; Šlechta, M.; Tallon-Bosc, I.; Votruba, V.; Weiss, W. W.; Wolf, M.; Zasche, P.; Zavala, R. T.

    2016-10-01

    Context. Compact hierarchical systems are important because the effects caused by the dynamical interaction among their members occur on a human timescale. These interactions play a role in the formation of close binaries through Kozai cycles with tides. One such system is ξ Tauri: it has three hierarchical orbits: 7.14 d (eclipsing components Aa, Ab), 145 d (components Aa+Ab, B), and 51 yr (components Aa+Ab+B, C). Aims: We aim to obtain physical properties of the system and to study the dynamical interaction between its components. Methods: Our analysis is based on a large series of spectroscopic and photometric (including space-borne) observations and long-baseline optical and infrared spectro-interferometric observations. We used two approaches to infer the system properties: a set of observation-specific models, where all components have elliptical trajectories, and an N-body model, which computes the trajectory of each component by integrating Newton's equations of motion. Results: The triple subsystem exhibits clear signs of dynamical interaction. The most pronounced are the advance of the apsidal line and eclipse-timing variations. We determined the geometry of all three orbits using both observation-specific and N-body models. The latter correctly accounted for observed effects of the dynamical interaction, predicted cyclic variations of orbital inclinations, and determined the sense of motion of all orbits. Using perturbation theory, we demonstrate that prominent secular and periodic dynamical effects are explainable with a quadrupole interaction. We constrained the basic properties of all components, especially of members of the inner triple subsystem, and detected rapid low-amplitude light variations that we attribute to co-rotating surface structures of component B. We also estimated the radius of component B. Properties of component C remain uncertain because of its low relative luminosity. We provide an independent estimate of the distance to the system. Conclusions: The accuracy and consistency of our results make ξ Tau an excellent test bed for models of formation and evolution of hierarchical systems. Full Tables D.1-D.7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/594/A55. Based on data from the MOST satellite, a former Canadian Space Agency mission, jointly operated by Microsatellite Systems Canada Inc. (MSCI; formerly Dynacon Inc.), the University of Toronto Institute for Aerospace Studies and the University of British Columbia, with the assistance of the University of Vienna.

  13. A Principal Component Analysis of the Diffuse Interstellar Bands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ensor, T.; Cami, J.; Bhatt, N. H.

    2017-02-20

    We present a principal component (PC) analysis of 23 line-of-sight parameters (including the strengths of 16 diffuse interstellar bands, DIBs) for a well-chosen sample of single-cloud sightlines representing a broad range of environmental conditions. Our analysis indicates that the majority (∼93%) of the variations in the measurements can be captured by only four parameters. The main driver (i.e., the first PC) is the amount of DIB-producing material in the line of sight, a quantity that is extremely well traced by the equivalent width of the λ5797 DIB. The second PC is the amount of UV radiation, which correlates well with the λ5797/λ5780 DIB strength ratio. The remaining two PCs are more difficult to interpret, but are likely related to the properties of dust in the line of sight (e.g., the gas-to-dust ratio). With our PCA results, the DIBs can then be used to estimate these line-of-sight parameters.
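
    The analysis itself is standard PCA on standardized parameters; a sketch with a synthetic stand-in for the 23-parameter sightline table (the dimensions mimic the paper, the data do not):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(8)

    # Synthetic stand-in for 23 line-of-sight parameters over 30 sightlines:
    # two latent drivers plus noise, loosely mimicking the paper's setup.
    n, p = 30, 23
    latent = rng.standard_normal((n, 2))
    loadings = rng.standard_normal((2, p))
    X = latent @ loadings + 0.3 * rng.standard_normal((n, p))

    # Standardize, then see how much variance the first four PCs capture.
    pca = PCA().fit(StandardScaler().fit_transform(X))
    print("first 4 PCs explain:", pca.explained_variance_ratio_[:4].sum())
    ```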

  14. Prevalence of obesity and metabolic syndrome components in Mexican adults without type 2 diabetes or hypertension.

    PubMed

    Rojas-Martínez, Rosalba; Aguilar-Salinas, Carlos A; Jiménez-Corona, Aída; Gómez-Pérez, Francisco J; Barquera, Simón; Lazcano-Ponce, Eduardo

    2012-01-01

    To describe the number of Mexican adults with undiagnosed diabetes and arterial hypertension and their association with obesity. The study included a sub-sample of 6 613 subjects aged 20 years or more who participated in the 2006 National Health and Nutrition Survey (ENSANUT 2006). Subjects with a previous diagnosis of diabetes or hypertension (n=1 861) were excluded. Prevalences and standard errors were estimated, taking into account the complex sample design. 6.4 million adults have obesity and undiagnosed impaired fasting glucose. Almost two million more have fasting glucose levels diagnostic for diabetes. As for arterial blood pressure, 5.4 million adults had prehypertension. Another 5.4 million adults had blood pressure levels suggestive of probable hypertension. A total of 21.4 million Mexican adults with obesity had at least one further component of the metabolic syndrome. A large proportion of adults with obesity-related metabolic comorbidities remains undiagnosed in Mexico.

  15. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss causes an increase in the level of the sound that is just audible to the person, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds using the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.

  16. Retrieving marine inherent optical properties from satellites using temperature and salinity-dependent backscattering by seawater.

    PubMed

    Werdell, P Jeremy; Franz, Bryan A; Lefler, Jason T; Robinson, Wayne D; Boss, Emmanuel

    2013-12-30

    Time-series of marine inherent optical properties (IOPs) from ocean color satellite instruments provide valuable data records for studying long-term time changes in ocean ecosystems. Semi-analytical algorithms (SAAs) provide a common method for estimating IOPs from radiometric measurements of the marine light field. Most SAAs assign constant spectral values for seawater absorption and backscattering, assume spectral shape functions of the remaining constituent absorption and scattering components (e.g., phytoplankton, non-algal particles, and colored dissolved organic matter), and retrieve the magnitudes of each remaining constituent required to match the spectral distribution of measured radiances. Here, we explore the use of temperature- and salinity-dependent values for seawater backscattering in lieu of the constant spectrum currently employed by most SAAs. Our results suggest that use of temperature- and salinity-dependent seawater spectra elevate the SAA-derived particle backscattering, reduce the non-algal particles plus colored dissolved organic matter absorption, and leave the derived absorption by phytoplankton unchanged.
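
    The magnitude-retrieval step of an SAA can be illustrated as a non-negative least-squares decomposition onto assumed constituent shape functions; in the approach described above, the temperature- and salinity-dependent seawater term would be removed beforehand. The shapes and values below are illustrative, not an operational SAA.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    wl = np.arange(400, 701, 25).astype(float)     # wavelengths (nm)

    # Assumed spectral shape functions, normalized at 443 nm: a toy Gaussian
    # for phytoplankton and an exponential decay for CDOM plus NAP.
    a_ph_shape = np.exp(-0.5 * ((wl - 443) / 40.0) ** 2)
    a_dg_shape = np.exp(-0.018 * (wl - 443))

    # Toy "measured" non-water absorption spectrum built from the shapes.
    a_total = 0.05 * a_ph_shape + 0.02 * a_dg_shape

    # Retrieve the constituent magnitudes by non-negative least squares,
    # the linear analogue of the magnitude-retrieval step in an SAA.
    A = np.column_stack([a_ph_shape, a_dg_shape])
    mags, _ = nnls(A, a_total)
    print(f"aph(443) = {mags[0]:.3f}, adg(443) = {mags[1]:.3f}")
    ```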

  17. A Question of Mass: Accounting for All the Dust in the Crab Nebula with the Deepest Far Infrared Maps

    NASA Astrophysics Data System (ADS)

    Matar, J.; Nehmé, C.; Sauvage, M.

    2017-12-01

    Supernovae represent significant sources of dust in the interstellar medium. In this work, deep far-infrared (FIR) observations of the Crab Nebula are studied to provide a new and reliable constraint on the amount of dust present in this supernova remnant. Deep exposures between 70 and 500 μm taken by the PACS and SPIRE instruments on board the Herschel Space Observatory, compiling all observations of the nebula including PACS observing-mode calibration data, are refined using advanced processing techniques, thus providing the most accurate data ever generated by Herschel on the object. We estimate the intrinsic flux of each image by masking the source and fitting a 2D polynomial to deduce the background emission. After subtracting the estimated non-thermal synchrotron component, two modified blackbodies were found to best fit the remaining infrared continuum: the cold component with T_c = 8.3 ± 3.0 K and M_d = 0.27 ± 0.05 M_{⊙}, and the warmer component with T_w = 27.2 ± 1.3 K and M_d = (1.3 ± 0.4) ×10^{-3} M_{⊙}.
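
    The continuum fit is a two-component modified blackbody, A ν^β B_ν(T) per component. The sketch below fits synthetic fluxes at the PACS/SPIRE wavelengths with SciPy; the amplitudes, the fixed β = 1.5, and the toy fluxes are assumptions, not the paper's photometry.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h, k, c = 6.626e-34, 1.381e-23, 2.998e8

    def two_mbb(nu, A_c, T_c, A_w, T_w, beta=1.5):
        """Sum of two modified blackbodies ~ nu^beta * B_nu(T), written with a
        dimensionless frequency so the fitted amplitudes stay near unity."""
        x = nu / 1e12
        mbb = lambda A, T: A * x ** (3 + beta) / np.expm1(h * nu / (k * T))
        return mbb(A_c, T_c) + mbb(A_w, T_w)

    # Toy fluxes at the PACS/SPIRE wavelengths (70-500 um), generated from
    # known inputs so the fit can be checked; not the actual Crab photometry.
    wl_um = np.array([70.0, 100.0, 160.0, 250.0, 350.0, 500.0])
    nu = c / (wl_um * 1e-6)
    flux = two_mbb(nu, 50.0, 8.3, 1.0, 27.2)

    p, _ = curve_fit(two_mbb, nu, flux, p0=[10.0, 10.0, 1.0, 25.0], maxfev=20000)
    print(f"T_cold = {p[1]:.1f} K, T_warm = {p[3]:.1f} K")
    ```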

  18. Relationship between body mass, lean mass, fat mass, and limb bone cross-sectional geometry: Implications for estimating body mass and physique from the skeleton.

    PubMed

    Pomeroy, Emma; Macintosh, Alison; Wells, Jonathan C K; Cole, Tim J; Stock, Jay T

    2018-05-01

    Estimating body mass from skeletal dimensions is widely practiced, but methods for estimating its components (lean and fat mass) are poorly developed. The ability to estimate these characteristics would offer new insights into the evolution of body composition and its variation relative to past and present health. This study investigates the potential of long bone cross-sectional properties as predictors of body, lean, and fat mass. Humerus, femur and tibia midshaft cross-sectional properties were measured by peripheral quantitative computed tomography in a sample of young adult women (n = 105) characterized by a range of activity levels. Body composition was estimated from bioimpedance analysis. Lean mass correlated most strongly with both upper and lower limb bone properties (r values up to 0.74), while fat mass showed weak correlations (r ≤ 0.29). Estimation equations generated from tibial midshaft properties indicated that lean mass could be estimated relatively reliably, with some improvement using logged data and including bone length in the models (minimum standard error of estimate = 8.9%). Body mass prediction was less reliable and fat mass only poorly predicted (standard errors of estimate ≥11.9% and >33%, respectively). Lean mass can be predicted more reliably than body mass from limb bone cross-sectional properties. The results highlight the potential for studying evolutionary trends in lean mass from skeletal remains, and have implications for understanding the relationship between bone morphology and body mass or composition.

  19. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  1. 36 CFR 223.50 - Periodic payments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Total contract value is the product of the estimated volume of the sale multiplied by the rates bid by..., estimated remaining unscaled volume or, in a tree measurement sale, the estimated remaining quantities by... Section 223.50 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE SALE AND...

  2. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

    We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey's formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas.
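
    For orientation, the one-component ellipsoidal formula is V = π/6 · d1 · d2 · d3. The paper's exact ice cream cone formula is not reproduced here; the cone-plus-hemisphere function below is only a hypothetical geometric reading of the two-component idea.

    ```python
    import math

    def ellipsoid_volume(d1, d2, d3):
        """One-component ellipsoidal formula: V = pi/6 * d1 * d2 * d3
        (diameters in cm give volume in mL)."""
        return math.pi / 6 * d1 * d2 * d3

    def cone_plus_hemisphere(r, h):
        """Geometric 'ice cream cone' stand-in for a two-component model:
        an intracanalicular cone plus a cisternal hemisphere of radius r.
        This is an illustrative reading, not the paper's exact formula."""
        return math.pi * r**2 * h / 3 + 2 * math.pi * r**3 / 3

    print(ellipsoid_volume(2.0, 1.8, 1.6))      # ~3.0 mL
    print(cone_plus_hemisphere(0.9, 1.2))       # ~2.5 mL
    ```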

  3. Updated Estimates of the Remaining Market Potential of the U.S. ESCO Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Peter H.; Carvallo Bodelon, Juan Pablo; Goldman, Charles A.

    The energy service company (ESCO) industry has a well-established track record of delivering energy and economic savings in the public and institutional buildings sector, primarily through the use of performance-based contracts. The ESCO industry often provides (or helps arrange) private sector financing to complete public infrastructure projects with little or no up-front cost to taxpayers. In 2014, total U.S. ESCO industry revenue was estimated at $5.3 billion. ESCOs expect total industry revenue to grow to $7.6 billion in 2017, a 13% annual growth rate from 2015-2017. Researchers at Lawrence Berkeley National Laboratory (LBNL) were asked by the U.S. Department of Energy Federal Energy Management Program (FEMP) to update and expand our estimates of the remaining market potential of the U.S. ESCO industry. We define remaining market potential as the aggregate amount of project investment by ESCOs that is technically possible based on the types of projects that ESCOs have historically implemented in the institutional, commercial, and industrial sectors, using ESCO estimates of current market penetration in those sectors. In this analysis, we report U.S. ESCO industry remaining market potential under two scenarios: (1) a base case and (2) a case “unfettered” by market, bureaucratic, and regulatory barriers. We find that there is significant remaining market potential for the U.S. ESCO industry under both the base and unfettered cases. For the base case, we estimate a remaining market potential of $92-$201 billion ($2016). We estimate a remaining market potential of $190-$333 billion for the unfettered case. It is important to note, however, that there is considerable uncertainty surrounding the estimates for both the base and unfettered cases.

  4. H1N1pdm in the Americas

    PubMed Central

    Lessler, Justin; Santos, Thais dos; Aguilera, Ximena; Brookmeyer, Ron; Cummings, Derek AT

    2010-01-01

    In late April 2009 the emergence of 2009 pandemic influenza A (H1N1pdm) virus was detected in humans. From its detection through July 18th, 2009, confirmed cases of H1N1pdm in the Americas were periodically reported to the Pan-American Health Organization (PAHO) by member states. Because the Americas span much of the world’s latitudes, these data provide an excellent opportunity to examine variation in H1N1pdm transmission by season. Using reports from PAHO member states from April 26th, 2009 through July 18th, 2009, we characterize the early spread of the H1N1 pandemic in the Americas. For a geographically representative sample of member states we estimate the reproductive number (R) of H1N1pdm over the reporting period. The association between these estimates and latitude, temperature, humidity and population age structure was estimated. Estimates of the peak reproductive number of H1N1pdm ranged from 1.3 (for Panama, Colombia) to 2.1 (for Chile). We found that reproductive number estimates were most associated with latitude in both univariate and multivariate analyses. To the extent that latitude is a proxy for seasonal changes in climate and behavior, this association suggests a strong seasonal component to H1N1pdm transmission. However, the reasons for this seasonality remain unclear. PMID:20847900
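
    A common way to estimate R from early case reports is to fit the exponential growth rate r and apply a moment approximation such as R ≈ 1 + rTg. The weekly counts and the generation time Tg = 2.8 days below are assumed values, not the PAHO data.

    ```python
    import numpy as np

    # Weekly confirmed-case counts during early exponential growth
    # (illustrative numbers, not the PAHO reports).
    cases = np.array([12, 25, 60, 130, 280, 610])
    t_weeks = np.arange(cases.size)

    # Exponential growth rate r (per day) from a log-linear fit.
    r_per_week, _ = np.polyfit(t_weeks, np.log(cases), 1)
    r = r_per_week / 7.0

    # Moment approximation R ~ 1 + r*Tg for an assumed generation time Tg.
    Tg = 2.8
    print(f"r = {r:.3f}/day, R ~ {1 + r * Tg:.2f}")
    ```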

  5. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 3: Temperature uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are thoroughly estimated, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is typical of the NDACC temperature lidars transmitting at 355 nm. The combined temperature uncertainty ranges between 0.1 and 1 K below 60 km, with detection noise, saturation correction, and molecular extinction correction being the three dominant sources of uncertainty. Above 60 km and up to 10 km below the top of the profile, the total uncertainty increases exponentially from 1 to 10 K due to the combined effect of random noise and temperature tie-on. In the top 10 km of the profile, the accuracy of the profile mainly depends on that of the tie-on temperature. All other uncertainty components remain below 0.1 K throughout the entire profile (15-90 km), except the background noise correction uncertainty, which peaks around 0.3-0.5 K. It should be kept in mind that these quantitative estimates may be very different for other lidar instruments, depending on their altitude range and the wavelengths used.
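
    The key design point, carrying independent uncertainty components in parallel and combining them only at the last stage, reduces to a quadrature sum under independence. A toy profile illustration (all magnitudes assumed, not the JPL lidar budget, and covariance terms from vertical filtering are neglected):

    ```python
    import numpy as np

    # Toy altitude grid and four independent uncertainty components, each
    # propagated in parallel through the processing chain (values in K).
    alt_km = np.linspace(15, 90, 76)
    u_detection = 0.05 * np.exp((alt_km - 15) / 25)     # random noise grows aloft
    u_saturation = np.full_like(alt_km, 0.05)
    u_background = np.where(alt_km > 70, 0.3, 0.05)
    u_tie_on = 10.0 * np.exp((alt_km - 90) / 4)         # top-of-profile tie-on

    # Combined standard uncertainty formed only at the very last stage,
    # assuming independence: quadrature sum of the components.
    u_combined = np.sqrt(u_detection**2 + u_saturation**2
                         + u_background**2 + u_tie_on**2)
    for a, u in zip(alt_km[::25], u_combined[::25]):
        print(f"{a:5.1f} km: {u:5.2f} K")
    ```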

  6. Least-dependent-component analysis based on mutual information

    NASA Astrophysics Data System (ADS)

    Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter

    2004-12-01

    We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k-nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
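
    The residual-dependency check described in (i) can be sketched with off-the-shelf tools: unmix with FastICA, then estimate the MI between the outputs with a kNN-based estimator. MILCA itself minimizes such an MI estimate directly, which FastICA does not; this is only an illustration of the diagnostic.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(9)

    # Linear mixture of two independent sources (a toy stand-in for the
    # source separation problems addressed by MILCA).
    s = np.column_stack([np.sign(rng.standard_normal(2000)),
                         rng.laplace(size=2000)])
    x = s @ np.array([[1.0, 0.6], [0.4, 1.0]])

    # Unmix with FastICA, then check residual dependence between outputs
    # via a kNN-based MI estimate; values near zero indicate success.
    y = FastICA(n_components=2, random_state=0).fit_transform(x)
    mi = mutual_info_regression(y[:, [0]], y[:, 1], random_state=0)
    print(f"residual MI between outputs: {mi[0]:.4f} nats")
    ```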

  7. Biomass relations for components of five Minnesota shrubs.

    Treesearch

    Richard R. Buech; David J. Rugg

    1995-01-01

    Presents equations for estimating biomass of six components on five species of shrubs common to northeastern Minnesota. Regression analysis is used to compare the performance of three estimators of biomass.

  8. Hydrologic budgets for the Madison and Minnelusa aquifers, Black Hills of South Dakota and Wyoming, water years 1987-96

    USGS Publications Warehouse

    Carter, Janet M.; Driscoll, Daniel G.; Hamade, Ghaith R.; Jarrell, Gregory J.

    2001-01-01

    The Madison and Minnelusa aquifers are two of the most important aquifers in the Black Hills area of South Dakota and Wyoming. Quantification and evaluation of various hydrologic budget components are important for managing and understanding these aquifers. Hydrologic budgets are developed for two scenarios, including an overall budget for the entire study area and more detailed budgets for subareas. Budgets generally are combined for the Madison and Minnelusa aquifers because most budget components cannot be quantified individually for the aquifers. An average hydrologic budget for the entire study area is computed for water years 1987-96, for which change in storage is approximately equal to zero. Annual estimates of budget components are included in detailed budgets for nine subareas, which consider periods of decreasing storage (1987-92) and increasing storage (1993-96). Inflow components include recharge, leakage from adjacent aquifers, and ground-water inflows across the study area boundary. Outflows include springflow (headwater and artesian), well withdrawals, leakage to adjacent aquifers, and ground-water outflow across the study area boundary. Leakage, ground-water inflows, and ground-water outflows are difficult to quantify and cannot be distinguished from one another. Thus, net ground-water flow, which includes these components, is calculated as a residual, using estimates for the other budget components. For the overall budget for water years 1987-96, net ground-water outflow from the study area is computed as 100 ft3/s (cubic feet per second). Estimates of average combined budget components for the Madison and Minnelusa aquifers are: 395 ft3/s for recharge, 78 ft3/s for headwater springflow, 189 ft3/s for artesian springflow, and 28 ft3/s for well withdrawals. Hydrologic budgets also are quantified for nine subareas for periods of decreasing storage (1987-92) and increasing storage (1993-96), with changes in storage assumed equal but opposite. Common subareas are identified for the Madison and Minnelusa aquifers, and previous components from the overall budget generally are distributed over the subareas. Estimates of net ground-water flow for the two aquifers are computed, with net ground-water outflow exceeding inflow for most subareas. Outflows range from 5.9 ft3/s in the area east of Rapid City to 48.6 ft3/s along the southwestern flanks of the Black Hills. Net ground-water inflow exceeds outflow for two subareas where the discharge of large artesian springs exceeds estimated recharge within the subareas. More detailed subarea budgets also are developed, which include estimates of flow components for the individual aquifers at specific flow zones. The net outflows and inflows from the preliminary subarea budgets are used to estimate transmissivity of flow across specific flow zones based on Darcy's Law. For estimation purposes, it is assumed that transmissivities of the Madison and Minnelusa aquifers are equal in any particular flow zone. The resulting transmissivity estimates range from 90 ft2/d to about 7,400 ft2/d, which is similar to values reported by previous investigators. The highest transmissivity estimates are for areas in the northern and southwestern parts of the study area, and the lowest transmissivity estimates are along the eastern study area boundary. Evaluation of subarea budgets provides confidence in budget components developed for the overall budget, especially regarding precipitation recharge, which is particularly difficult to estimate. Recharge estimates are consistently compatible with other budget components, including artesian springflow, which is a dominant component in many subareas. Calculated storage changes for subareas also are consistent with other budget components, specifically artesian springflow and net ground-water flow, and also are consistent with water-level fluctuations for observation wells. Ground-water budgets and flowpaths are especially complex ...
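
    The residual calculation for the overall budget can be verified directly from the quoted components; with storage change near zero, inflows must balance outflows:

    ```python
    # Residual net ground-water flow for the overall 1987-96 budget, using
    # the component estimates quoted above (cubic feet per second).
    recharge = 395.0
    headwater_springflow = 78.0
    artesian_springflow = 189.0
    well_withdrawals = 28.0

    # With storage change ~0, inflow = outflow, so the unmeasured net
    # ground-water outflow is what balances the budget.
    net_outflow = recharge - (headwater_springflow + artesian_springflow
                              + well_withdrawals)
    print(net_outflow)   # -> 100.0, matching the reported 100 ft3/s
    ```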

  9. Direct process estimation from tomographic data using artificial neural systems

    NASA Astrophysics Data System (ADS)

    Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.

    2001-07-01

    The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusiveness, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned ECT simulated and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
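
    The processing chain (PCA to decorrelate the raw capacitance measurements, then an MLFFNN to regress component fractions) can be sketched with scikit-learn on synthetic data; the measurement count, mixing model, and network size below are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(10)

    # Synthetic stand-in for simulated ECT data: 66 inter-electrode capacitance
    # measurements per frame and the corresponding gas/oil/water fractions.
    n = 2000
    fractions = rng.dirichlet(np.ones(3), size=n)          # sum to 1 per frame
    mixing = rng.standard_normal((3, 66))
    capacitances = fractions @ mixing + 0.01 * rng.standard_normal((n, 66))

    # PCA reduces the raw measurements to a mutually independent set, then a
    # multilayer feed-forward network maps them to component fractions.
    model = make_pipeline(PCA(n_components=10),
                          MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                       random_state=0))
    model.fit(capacitances, fractions)
    print(model.predict(capacitances[:2]).round(3))
    ```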

  10. Total Motion Across the East African Rift Viewed From the Southwest Indian Ridge

    NASA Astrophysics Data System (ADS)

    Royer, J.; Gordon, R. G.

    2005-05-01

    The Nubian plate is known to have been separating from the Somalian plate along the East African Rift since Oligocene time. Recent work has shown that the spreading rates and spreading directions since 11 Ma along the Southwest Indian Ridge (SWIR) record Nubia-Antarctica motion west of the Andrew Bain Fracture Zone complex (ABFZ; between 25E and 35E) and Somalia-Antarctica motion east of it. Nubia-Somalia motion can be determined by differencing Nubia-Antarctica and Somalia-Antarctica motion. To estimate the total motion across the East African Rift, we estimated and differenced Nubia-Antarctica motion and Somalia-Antarctica motion for times that preceded the initiation of Nubia-Somalia motion. We analyze anomalies 24n.3o (53 Ma), 21o (48 Ma), 18o (40 Ma) and 13o (34 Ma). Preliminary results show that the poles of the finite rotations that describe the Nubia-Somalia motions cluster near 30E, 42S. Angles of rotation range from 2.7 to 4.0 degrees. The uncertainty regions are large. The lower estimate predicts a total extension of 245 km at the latitude of the Ethiopian rift (41E, 9N) in a direction N104, perpendicular to the mean trend of the rift. Assuming an age of 34 Ma for the initiation of rifting, the average rate of motion would be 7 mm/a, near the 9 mm/a deduced from present-day geodetic measurements [e.g. synthesis of Fernandes et al., 2004]. Although these results require further analysis, particularly of the causes of the large uncertainties, they represent the first independent estimate of the total extension across the rift. Among other remaining questions are the following: How significant are the differences between these estimates and those for younger chrons (5 or 6; 11 and 20 Ma, respectively), i.e., is the start of extension datable? Is the region east of the ABFZ part of the Somalian plate, or does it form a distinct component plate of Somalia, as postulated by Hartnady (2004)? How has motion between two or more component plates within the African composite plate affected estimates of India-Eurasia motion and of Pacific-North America motion?

  11. Panel data models with spatial correlation: Estimation theory and an empirical investigation of the United States wholesale gasoline industry

    NASA Astrophysics Data System (ADS)

    Kapoor, Mudit

    The first part of my dissertation considers the estimation of a panel data model with error components that are both spatially and time-wise correlated. The dissertation combines a widely used model for spatial correlation (Cliff and Ord (1973, 1981)) with the classical error component panel data model. I introduce generalizations of the generalized moments (GM) procedure suggested in Kelejian and Prucha (1999) for estimating the spatial autoregressive parameter in the case of a single cross section. I then use those estimators to define feasible generalized least squares (GLS) procedures for the regression parameters. I give formal large sample results concerning the consistency of the proposed GM procedures, as well as the consistency and asymptotic normality of the proposed feasible GLS procedures. The new estimators remain computationally feasible even in large samples. The second part of my dissertation employs a Cliff-Ord-type model to empirically estimate the nature and extent of price competition in the US wholesale gasoline industry. I use data on average weekly wholesale gasoline prices for 289 terminals (distribution facilities) in the US. Data on demand factors, cost factors, and market structure that affect price are also used. I consider two time periods, a high demand period (August 1999) and a low demand period (January 2000). I find a high level of competition in prices between neighboring terminals. In particular, the price at one terminal is significantly and positively correlated with the price at its neighboring terminal. Moreover, I find this correlation to be much higher during the low demand period than during the high demand period. In contrast to previous work, I include for each terminal the characteristics of the marginal customer by controlling for demand factors in the neighboring location. I find these demand factors to be important during the period of high demand and insignificant during the low demand period. Furthermore, I have also considered spatial correlation in unobserved factors that affect price. I find it to be high and significant only during the low demand period. Not correcting for it leads to incorrect inferences regarding exogenous explanatory variables.

  12. Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations and, in turn, render the remaining useful life prediction uncertain. It is equally challenging to identify and characterize the various sources of uncertainty in prognosis, to understand how each of these sources affects the uncertainty in the remaining useful life prediction, and thereby to compute the overall uncertainty in that prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods available in the literature are reviewed, and their applicability to prognostics and health monitoring is discussed.
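
    One concrete way to treat remaining useful life estimation as uncertainty propagation is Monte Carlo sampling: draw from the distributions of the uncertain state and model parameters and propagate each draw to the failure threshold. The degradation model and all numbers below are hypothetical illustrations, not the paper's method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 10_000

    # Hypothetical linear degradation: health h(t) = h0 - r*t, failure at h = threshold.
    h0 = rng.normal(1.0, 0.05, n_samples)               # uncertain current health state
    r = rng.lognormal(np.log(0.01), 0.3, n_samples)     # uncertain degradation rate per hour
    threshold = 0.2

    # Propagate each sample to its threshold-crossing time to get an RUL distribution.
    rul = (h0 - threshold) / r
    print(f"median RUL {np.median(rul):.0f} h, 90% interval "
          f"[{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}] h")
    ```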

  13. Temperature acclimation of photosynthesis and respiration: A key uncertainty in the carbon cycle-climate feedback

    NASA Astrophysics Data System (ADS)

    Lombardozzi, Danica L.; Bonan, Gordon B.; Smith, Nicholas G.; Dukes, Jeffrey S.; Fisher, Rosie A.

    2015-10-01

    Earth System Models typically use static responses to temperature to calculate photosynthesis and respiration, but experimental evidence suggests that many plants acclimate to prevailing temperatures. We incorporated representations of photosynthetic and leaf respiratory temperature acclimation into the Community Land Model, the terrestrial component of the Community Earth System Model. These processes increased terrestrial carbon pools by 20 Pg C (22%) at the end of the 21st century under a business-as-usual (Representative Concentration Pathway 8.5) climate scenario. Including the less certain estimates of stem and root respiration acclimation increased terrestrial carbon pools by an additional 17 Pg C (~40% overall increase). High latitudes gained the most carbon with acclimation, and tropical carbon pools increased least. However, results from both of these regions remain uncertain; few relevant data exist for tropical and boreal plants or for extreme temperatures. Constraining these uncertainties will produce more realistic estimates of land carbon feedbacks throughout the 21st century.

  14. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 two-digit numbers, serially and rapidly (2 numerals/second), and were instructed to convey the sequence average. As predicted by a dual-component, but not a single-component, account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in reaction time (RT) as set-size increased, and an RT-accuracy tradeoff in the 4-number but not in the 16-number condition. These results indicate that, in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/second), we find that, while performance remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population coding underlying intuitive averaging and working-memory-mediated symbolic procedures underlying analytical averaging, with flexible allocation between the two.

  15. Gas-rich galaxy pair unveiled in the lensed quasar 0957+561

    PubMed

    Planesas; Martin-Pintado; Neri; Colina

    1999-12-24

    Molecular gas in the host galaxy of the lensed quasar 0957+561 (QSO 0957+561) at the redshift of 1.41 has been detected in the carbon monoxide (CO) line. This detection shows the extended nature of the molecular gas distribution in the host galaxy and the pronounced lensing effects due to the differentially magnified CO luminosity at different velocities. The estimated mass of molecular gas is about 4 × 10^9 solar masses, a molecular gas mass typical of a spiral galaxy like the Milky Way. A second, weaker component of CO is interpreted as arising from a close companion galaxy that is rich in molecular gas and has remained undetected so far. Its estimated molecular gas mass is 1.4 × 10^9 solar masses, and its velocity relative to the main galaxy is 660 kilometers per second. The ability to probe the molecular gas distribution and kinematics of galaxies associated with high-redshift lensed quasars can be used to improve the determination of the Hubble constant H0.

  16. A collinearity diagnosis of the GNSS geocenter determination

    NASA Astrophysics Data System (ADS)

    Rebischung, Paul; Altamimi, Zuheir; Springer, Tim

    2014-01-01

    The problem of observing geocenter motion from global navigation satellite system (GNSS) solutions through the network shift approach is addressed from the perspective of collinearity (or multicollinearity) among the parameters of a least-squares regression. A collinearity diagnosis, based on the notion of variance inflation factor, is therefore developed that can handle several peculiarities of the GNSS geocenter determination problem. Its application reveals that the determination of all three components of geocenter motion with GNSS suffers from serious collinearity issues, at a level comparable to that of determining the terrestrial scale simultaneously with the GNSS satellite phase center offsets. The inability of current GNSS, as opposed to satellite laser ranging, to properly sense geocenter motion is mostly explained by the estimation, in the GNSS case, of epoch-wise station and satellite clock offsets simultaneously with tropospheric parameters. The empirical satellite accelerations, as estimated by most Analysis Centers of the International GNSS Service, slightly amplify the collinearity of the geocenter coordinates, but their role remains secondary.
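
    Variance inflation factors can be computed directly from a design matrix by regressing each column on the others. A minimal sketch of that textbook computation (the paper develops a generalized diagnosis adapted to GNSS peculiarities, which this does not reproduce):

    ```python
    import numpy as np

    def vif(X):
        # VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on the others.
        X = (X - X.mean(0)) / X.std(0)   # center and scale, so no intercept is needed
        n, p = X.shape
        out = np.empty(p)
        for j in range(p):
            others = np.delete(X, j, axis=1)
            coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ coef
            r2 = 1 - resid @ resid / (X[:, j] @ X[:, j])
            out[j] = 1 / (1 - r2)
        return out

    # Columns of a random matrix are nearly orthogonal, so VIFs come out near 1.
    print(vif(np.random.default_rng(0).normal(size=(200, 4))))
    ```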

  17. Texas hospitals with higher health information technology expenditures have higher revenue: A longitudinal data analysis using a generalized estimating equation model.

    PubMed

    Lee, Jinhyung; Choi, Jae-Young

    2016-04-05

    The benefits of health information technology (IT) adoption have been reported in the literature, but whether health IT investment increases revenue generation remains an important research question. Texas hospital data obtained from the American Hospital Association (AHA) for 2007-2010 were used to investigate the association between health IT expenses and hospital revenue. A generalized estimating equation (GEE) with an independent error component was used to model the data, controlling for clustered errors within hospitals. We found that health IT expenses were significantly and positively associated with hospital revenue. Our model predicted that a 100% increase in health IT expenditure would result in an 8% increase in total revenue. The effect of health IT was more strongly associated with gross outpatient revenue than with gross inpatient revenue. Increased health IT expenses were associated with greater hospital revenue. Future research needs to confirm our findings with a national sample of hospitals.
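
    A GEE with an independent working correlation, clustered by hospital, can be fit with statsmodels. The sketch below mimics the log-log setup implied by the elasticity interpretation (a 100% expenditure increase mapping to roughly 8% more revenue); all data are synthetic and the variable names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    hospitals, years = 50, 4
    df = pd.DataFrame({
        "hospital": np.repeat(np.arange(hospitals), years),
        "log_it_expense": rng.normal(10, 1, hospitals * years),
    })
    # Hospital-level random effect induces within-cluster correlation.
    hosp_effect = rng.normal(0, 0.5, hospitals)[df["hospital"]]
    df["log_revenue"] = (12 + 0.08 * df["log_it_expense"]
                         + hosp_effect + rng.normal(0, 0.2, len(df)))

    model = sm.GEE.from_formula(
        "log_revenue ~ log_it_expense", groups="hospital", data=df,
        cov_struct=sm.cov_struct.Independence(), family=sm.families.Gaussian())
    print(model.fit().summary())   # coefficient on log_it_expense near 0.08
    ```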

  18. An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures

    NASA Technical Reports Server (NTRS)

    Sun, Joy Z.; Joshi, Suresh M.

    2009-01-01

    The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
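
    The fault-detection approach described, a bank of Kalman filters based on multiple models, scores each fault hypothesis by the likelihood of its filter's innovations. A minimal scalar sketch under assumed dynamics (the hypothesis set, noise levels, and effectiveness factors are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Scalar system x_{k+1} = x_k + b*u_k + w; hypotheses differ in actuator effectiveness b.
    hypotheses = {"healthy": 1.0, "50% effective": 0.5}
    b_true, q, r, T = 0.5, 0.01, 0.04, 100
    u = np.ones(T)

    x = 0.0
    log_lik = {h: 0.0 for h in hypotheses}
    est = {h: (0.0, 1.0) for h in hypotheses}        # (mean, variance) per filter
    for k in range(T):
        x = x + b_true * u[k] + rng.normal(0, np.sqrt(q))
        z = x + rng.normal(0, np.sqrt(r))
        for h, b in hypotheses.items():
            m, P = est[h]
            m_pred, P_pred = m + b * u[k], P + q     # predict under hypothesis h
            S = P_pred + r                           # innovation variance
            nu = z - m_pred                          # innovation
            log_lik[h] += -0.5 * (np.log(2 * np.pi * S) + nu**2 / S)
            K = P_pred / S
            est[h] = (m_pred + K * nu, (1 - K) * P_pred)
    print(max(log_lik, key=log_lik.get))             # most likely fault hypothesis
    ```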

  19. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account the intake and growth changes of the animal.

  20. WNDCOM: estimating surface winds in mountainous terrain

    Treesearch

    Bill C. Ryan

    1983-01-01

    WNDCOM is a mathematical model for estimating surface winds in mountainous terrain. By following the procedures described, the sheltering and diverting effect of terrain, the individual components of the windflow, and the surface wind in remote mountainous areas can be estimated. Components include the contribution from the synoptic scale pressure gradient, the sea...

  1. Implications of allometric model selection for county-level biomass mapping.

    PubMed

    Duncanson, Laura; Huang, Wenli; Johnson, Kristofer; Swatantran, Anu; McRoberts, Ronald E; Dubayah, Ralph

    2017-10-18

    Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12-35, 2003), Chojnacky et al. (Forestry 87(1): 129-151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high-biomass areas with high canopy cover and relatively moderate heights (25-45 m). The CRM models, although on average ~20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (>60 m), while Jenkins generally produces higher estimates of biomass in forests <50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that inclusion of height in allometric models is not the primary driver of the discrepancies. Biomass maps developed using all three allometric models underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more appropriate for biomass mapping with lidar. These results confirm that allometric model selection considerably impacts biomass maps and estimates, and that allometric model errors remain poorly understood. Our finding that allometric model discrepancies are not explained by lidar heights suggests that allometric model form does not drive these discrepancies. A better understanding of the sources of allometric model errors, particularly in high-biomass systems, is essential for improved forest biomass mapping.
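
    Jenkins-type national-scale allometries take the log-linear form ln(biomass) = b0 + b1*ln(dbh). A minimal sketch with placeholder coefficients; the real values are species-group specific and must be taken from the cited papers, not from this example.

    ```python
    import numpy as np

    def jenkins_type_biomass_kg(dbh_cm, b0=-2.48, b1=2.48):
        # Log-linear allometry: ln(biomass, kg) = b0 + b1 * ln(dbh, cm).
        # b0 and b1 here are illustrative placeholders only.
        return np.exp(b0 + b1 * np.log(dbh_cm))

    # Biomass grows steeply with diameter, which is why model choice matters
    # most in tall, high-biomass stands.
    print(jenkins_type_biomass_kg(np.array([10.0, 30.0, 60.0])))
    ```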

  2. [Assessment of nutritional quality in healthy pregnant women of the Canary Islands, Spain].

    PubMed

    Ortiz-Andrellucchi, Adriana; Sánchez-Villegas, Almudena; Ramírez-García, Octavio; Serra-Majem, Lluís

    2009-10-31

    To describe the composition of the diet of healthy pregnant women of the Canary Islands and to estimate its nutritional quality using the Healthy Eating Index (HEI). Cross-sectional study based on 103 women aged 18-40 years, who gave birth at the University Hospital Materno-Infantil of Gran Canaria. Food consumption and macro- and micronutrient intake were estimated using a food frequency questionnaire from the Canary Islands Nutrition Survey (ENCA), and the HEI was calculated. This index includes 10 components, and the maximum possible score is 100 points. The score obtained was 54.9, below the optimum of ≥80 that is considered to indicate a good-quality diet for the pregnant women in our study population. The average scores of the first 5 components of the index showed that cereal consumption was below the daily portions recommended for pregnant women, whereas vegetable, fruit, milk and meat consumption surpassed the recommendations. A significant number of pregnant women did not reach 50% of the recommendations for iron, folate and vitamin D intake (36.9, 26.2 and 38.8%, respectively). At least 30% of the population exceeded 200% of the recommendations for proteins, thiamine, niacin, riboflavin, vitamin C and vitamin A. Dietary advice for improving diet quality during pregnancy, and supplementation mainly of iron and folate, are necessary.

  3. Oscillations in land surface hydrological cycle

    NASA Astrophysics Data System (ADS)

    Labat, D.

    2006-02-01

    The hydrological cycle is the perpetual movement of water throughout the various components of the global Earth system. Focusing on the land surface component of this cycle, determining the succession of dry and humid periods is of high importance with respect to water resources management, but also with respect to global geochemical cycles. This knowledge requires a specific estimation of recent fluctuations of the land surface cycle at continental and global scales. Our approach relies on a new estimate of freshwater discharge to the oceans from 1875 to 1994, as recently proposed by Labat et al. [Labat, D., Goddéris, Y., Probst, JL, Guyot, JL, 2004. Evidence for global runoff increase related to climate warming. Advances in Water Resources, 631-642]. Wavelet analyses of the annual freshwater discharge time series reveal an intermittent multiannual variability (4- to 8-y, 14- to 16-y and 20- to 25-y fluctuations) and a persistent multidecadal 30- to 40-y variability. Continent by continent, reasonable relationships between land-water cycle oscillations and climate forcing (such as ENSO, NAO or sea surface temperature) are proposed, even though such relationships or correlations remain very complex. The high intermittency of interannual oscillations and the existence of persistent multidecadal fluctuations make prediction of medium-term variability in droughts and high flows difficult, but lead to a more optimistic diagnosis for predicting long-term fluctuations.

  4. Water balance at a low-level radioactive-waste disposal site

    USGS Publications Warehouse

    Healy, R.W.; Gray, J.R.; De Vries, G. M.; Mills, P.C.

    1989-01-01

    The water balance at a low-level radioactive-waste disposal site in northwestern Illinois was studied from July 1982 through June 1984. Continuous data collection allowed estimates to be made for each component of the water-balance equation independent of other components. The average annual precipitation was 948 millimeters. Average annual evapotranspiration was estimated at 637 millimeters, runoff was 160 millimeters, change in water storage in a waste-trench cover was 24 millimeters, and deep percolation was 208 millimeters. The magnitude of the difference between precipitation and all other components (81 millimeters per year) indicates that, in a similar environment, the water-budget method would be useful in estimating evapotranspiration, but questionable for estimation of other components. Precipitation depth and temporal distribution had a very strong effect on all other components of the water-balance equation. Due to the variability of precipitation from year to year, it appears that two years of data are inadequate for characterization of the long-term average water balance at the site.
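
    The reported imbalance follows directly from the water-balance equation; a quick check with the paper's numbers, all in mm/a:

    \[ P - (ET + R + \Delta S + D) = 948 - (637 + 160 + 24 + 208) = -81, \]

    i.e. the independently estimated components over-account for precipitation by 81 mm/a, the residual cited above.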

  5. Floodplain ecosystem processes

    NASA Astrophysics Data System (ADS)

    Melack, John M.; Novo, Evlyn M. L. M.; Forsberg, Bruce R.; Piedade, Maria T. F.; Maurice, Laurence

    Floodplains represent a major component of the central Amazon Basin and influence its hydrology, ecology, and biogeochemistry. Hess et al. (2003) used a classification of synthetic aperture radar data with 100 m resolution for a 1.77 million km2 quadrat in central Amazonia and identified 17% as wetland, most of which was inundated for a portion of each year. Total net production attributed to flooded forests (excluding wood increments), aquatic macrophytes, phytoplankton, and periphyton for the 1.77 million km2 quadrat was estimated to be about 300 Tg C a-1. Flooded forests accounted for 62% of the total, aquatic macrophytes accounted for 34%, and the remaining 4% was associated with periphyton and phytoplankton. Approximately 10% of the total is the amount of organic carbon exported annually by the Amazon River according to Richey et al. (1990), methane emission is about 2.5% according to Melack et al. (2004), and a similar percentage is estimated to be buried in sediments. The remaining portion is close to being sufficient to fuel the respiration that results in the degassing of 210 ± 60 Tg C a-1 as carbon dioxide from the rivers and floodplains according to Richey et al. (2002). Variations in the distribution and inundation of floodplain habitats play a key role in the ecology and production of many commercially important freshwater fish. A significant relationship exists between maximum inundated area lagged by 5 years and annual yield of omnivores.

  6. Global Drought Monitoring and Forecasting based on Satellite Data and Land Surface Modeling

    NASA Astrophysics Data System (ADS)

    Sheffield, J.; Lobell, D. B.; Wood, E. F.

    2010-12-01

    Monitoring drought globally is challenging because of the lack of dense in-situ hydrologic data in many regions. In particular, soil moisture measurements are absent in many regions and in real time. This is especially problematic for developing regions such as Africa, where water information is arguably most needed but virtually non-existent on the ground. With the emergence of remote sensing estimates of all components of the water cycle, there is now the potential to monitor the full terrestrial water cycle from space, giving global coverage and providing the basis for drought monitoring. These estimates include microwave-infrared merged precipitation retrievals, evapotranspiration based on satellite radiation, temperature and vegetation data, gravity recovery measurements of changes in water storage, microwave-based retrievals of soil moisture, and altimetry-based estimates of lake levels and river flows. However, many challenges remain in using these data, especially due to biases in individual satellite-retrieved components, their incomplete sampling in time and space, and their failure to provide budget closure in concert. A potential way forward is to use modeling as a framework to merge these disparate sources of information into physically consistent and spatially and temporally continuous estimates of the water cycle and drought. Here we present results from our experimental global water cycle monitor and its African drought monitor counterpart (http://hydrology.princeton.edu/monitor). The system relies heavily on satellite data to drive the Variable Infiltration Capacity (VIC) land surface model, providing near real-time estimates of precipitation, evapotranspiration, soil moisture, snow pack and streamflow. Drought is defined in terms of anomalies of soil moisture and other hydrologic variables relative to a long-term (1950-2000) climatology. We present some examples of recent droughts and how they are identified by the system, including objective quantification and tracking of their spatial-temporal characteristics. Further, we present strategies for merging various sources of information, including bias correction of satellite precipitation and assimilation of remotely sensed soil moisture, which can augment the monitoring in regions where satellite precipitation is most uncertain. Ongoing work is adding a drought forecast component, based on a successful implementation over the U.S., and agricultural productivity estimates based on output from crop yield models. The forecast component uses seasonal global climate forecasts from the NCEP Climate Forecast System (CFS). These are merged with observed climatology in a Bayesian framework to produce ensemble atmospheric forcings that better capture the uncertainties. At the same time, the system bias-corrects and downscales the monthly CFS data. We show some initial seasonal (up to 6-month lead) hydrologic forecast results for the African system. Agricultural monitoring uses the precipitation, temperature and soil moisture from the system to force statistical and process-based crop yield models. We demonstrate the feasibility of monitoring major crop types across the world and show a strategy for providing predictions of yields within our drought forecast framework.

  7. Evaluation of concentrations of pharmaceuticals detected in sewage influents in Japan by using annual shipping and sales data.

    PubMed

    Azuma, Takashi; Nakada, Norihide; Yamashita, Naoyuki; Tanaka, Hiroaki

    2015-11-01

    A year-round monitoring survey of sewage flowing into sewage treatment plants located in urban Japan was conducted by targeting seven representative pharmaceutical components detected in the river environment: atenolol (ATL), ciprofloxacin (CFX), clarithromycin (CTM), diclofenac (DCF), diltiazem (DTZ), disopyramide (DSP), and sulpiride (SPR). For each of these components, two types of predicted concentration were estimated on the basis of two types of data (the shipping volume and sales volume of each component). The measured concentration of each component obtained through the survey and the two types of estimated predicted concentration of each component were then compared. The correspondence ratio between the predicted concentration estimated from the shipping volume of the component and the measured concentration (predicted concentration/measured concentration) was, for ATL, 3.1; CFX, 1.4; CTM, 1.4; DCF, 0.2; DTZ, 0.9; DSP, 11.6; and SPR, 1.1. The correspondence ratio between the predicted concentration estimated from the sales volume of the component and the measured concentration was, for ATL, 0.5; CFX, 1.1; CTM, 0.8; DCF, 0.1; DTZ, 0.2; DSP, 0.7; and SPR, 0.8. Although a generally corresponding trend was seen regardless of whether the prediction was based on shipping volume or sales volume, the predicted concentrations estimated from the shipping volumes of all components except DSP were found, to our knowledge for the first time in Japan, to correspond better than those based on sales volumes to the measured concentrations. These findings should help to improve the prediction accuracy of concentrations of pharmaceutical components in river waters. Copyright © 2015 Elsevier Ltd. All rights reserved.
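
    A typical way to predict an influent concentration from shipment or sales volumes divides the excreted mass load by the wastewater volume it is diluted into. The paper's exact procedure is not detailed here, so the sketch below is a generic, hypothetical version with illustrative numbers only.

    ```python
    def predicted_influent_conc_ng_per_l(annual_use_kg, excretion_frac,
                                         population, wastewater_l_per_cap_day):
        # Daily mass load reaching the sewer (1 kg = 1e12 ng), diluted into
        # the daily wastewater volume of the catchment population.
        daily_load_ng = annual_use_kg * 1e12 * excretion_frac / 365
        daily_flow_l = population * wastewater_l_per_cap_day
        return daily_load_ng / daily_flow_l

    # Illustrative inputs, not values from the paper.
    print(predicted_influent_conc_ng_per_l(
        annual_use_kg=1000, excretion_frac=0.9,
        population=1_000_000, wastewater_l_per_cap_day=250))
    ```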

  8. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
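
    The Gibbs sampler alternates between sampling component memberships and sampling parameters given those memberships. A minimal sketch for a two-component normal mixture, deliberately simplified relative to the paper's model (known equal variances, no genetic or permanent environmental effects):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Simulated somatic cell scores: a "healthy" and a "diseased" component.
    n, p_true = 2000, 0.3
    z_true = rng.random(n) < p_true
    y = np.where(z_true, rng.normal(6.0, 1.0, n), rng.normal(3.0, 1.0, n))

    sigma2, iters = 1.0, 2000
    mu = np.array([2.0, 7.0])   # ordered initial means, to limit label switching
    p = 0.5
    for it in range(iters):
        # 1) Sample memberships given parameters.
        d0 = (1 - p) * np.exp(-0.5 * (y - mu[0]) ** 2 / sigma2)
        d1 = p * np.exp(-0.5 * (y - mu[1]) ** 2 / sigma2)
        z = rng.random(n) < d1 / (d0 + d1)
        # 2) Sample the mixing proportion (Beta posterior, uniform prior).
        p = rng.beta(1 + z.sum(), 1 + (~z).sum())
        # 3) Sample component means (flat prior gives a normal posterior).
        for c, mask in enumerate([~z, z]):
            m = mask.sum()
            if m > 0:
                mu[c] = rng.normal(y[mask].mean(), np.sqrt(sigma2 / m))
    print(p, mu)   # final draws near the simulated 0.3 and (3.0, 6.0)
    ```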

  9. Unmasking the component-general and component-specific aspects of primary and secondary memory in the immediate free recall task.

    PubMed

    Gibson, Bradley S; Gondoli, Dawn M

    2018-04-01

    The immediate free recall (IFR) task has been commonly used to estimate the capacities of the primary memory (PM) and secondary memory (SM) components of working memory (WM). Using this method, the correlation between estimates of the PM and SM components has hovered around zero, suggesting that PM and SM represent fully distinct and dissociable components of WM. However, this conclusion has conflicted with more recent studies that have observed moderately strong, positive correlations between PM and SM when separate attention and retrieval tasks are used to estimate these capacities, suggesting that PM and SM represent at least some related capacities. The present study attempted to resolve this empirical discrepancy by investigating the extent to which the relation between estimates of PM and SM might be suppressed by a third variable that operates during the recall portion of the IFR task. This third variable was termed "strength of recency" (SOR) in the present study as it reflected differences in the extent to which individuals used the same experimentally-induced recency recall initiation strategy. As predicted, the present findings showed that the positive correlation between estimates of PM and SM grew from small to medium when the indirect effect of SOR was controlled across two separate sets of studies. This finding is important because it provides stronger support for the distinction between "component-general" and "component-specific" aspects of PM and SM; furthermore, a proof is presented that demonstrates a limitation of using regression techniques to differentiate general and specific aspects of these components.

  10. Large Scale Evapotranspiration Estimates: An Important Component in Regional Water Balances to Assess Water Availability

    NASA Astrophysics Data System (ADS)

    Garatuza-Payan, J.; Yepez, E. A.; Watts, C.; Rodriguez, J. C.; Valdez-Torres, L. C.; Robles-Morua, A.

    2013-05-01

    Water security can be defined as the reliable supply, in quantity and quality, of water to help sustain future populations and maintain ecosystem health and productivity. Water security is rapidly declining in many parts of the world due to population growth, drought, climate change, salinity, pollution, land use change, over-allocation and over-utilization, among other issues. Governmental offices (such as the Comision Nacional del Agua in Mexico, CONAGUA) require and conduct studies to estimate reliable water balances at regional or continental scales in order to provide reasonable assessments of the amount of water that can be provided (from surface or ground water sources) to supply all human needs while maintaining natural vegetation, on an operational basis and, more importantly, under disturbances such as droughts. Large-scale estimates of evapotranspiration (ET), a critical component of the water cycle, are needed for a better comprehension of the hydrological cycle at large scales; in most water balances, ET is left as the residual. For operational purposes, such water balance estimates cannot rely on ET measurements, since these do not exist; the estimates should be simple and require the least ground information possible, information that is often scarce or does not exist at all. Given this limitation, the use of remotely sensed data to estimate ET can compensate for the lack of ground information, particularly in remote regions. In this study, a simple method based on the Makkink equation is used to estimate ET for large areas at high spatial resolution (1 km). The Makkink model used here is forced with three remotely sensed datasets. First, the model uses solar radiation estimates obtained from the Geostationary Operational Environmental Satellite (GOES). Second, the model uses an Enhanced Vegetation Index (EVI) obtained from the Moderate-resolution Imaging Spectroradiometer (MODIS), normalized to provide an estimate of vegetation amount and land use, which was applied in a crop-factor-like manner to all vegetation types (including agricultural fields). Finally, the model uses air temperature and humidity, both extracted from the North American Land Data Assimilation System (NLDAS) database. ET estimates were then compared to ground truth data from four sites where long-term eddy covariance (EC) measurements of ET were conducted. This approach was developed and applied in Northern Mexico. Emphasis was placed on minimizing the large uncertainties that remain in the temporal evolution and spatial distribution of ET. Results show good agreement with ground data (r² greater than 0.7 for daily ET estimates) from the four sites, which span different vegetation types, hence reducing the spatial uncertainties. Estimates of total annual ET were used in a water balance assessing groundwater availability for eleven aquifers in the state of Chihuahua. Annual ET over a four-year analysis period ranged from 200 to 280 mm/year, representing 63 to 83% of total annual precipitation, which reflects the importance of this component in the water balance. A GIS tool kit is under development to support decision makers at CONAGUA.
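
    The Makkink equation estimates reference ET from solar radiation and the slope of the saturation vapour pressure curve. A minimal sketch using the common coefficient of about 0.65 and standard FAO-style constants; the study's specific calibration and EVI-based crop-factor adjustment are not reproduced here.

    ```python
    import numpy as np

    def makkink_et_mm_day(rs_mj_m2_day, t_air_c):
        """Makkink reference ET (mm/day) from solar radiation and air temperature."""
        # Slope of the saturation vapour pressure curve (kPa/degC, Tetens form).
        delta = (4098 * 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))
                 / (t_air_c + 237.3) ** 2)
        gamma = 0.066   # psychrometric constant near sea level, kPa/degC
        lam = 2.45      # latent heat of vaporization, MJ/kg
        return 0.65 * delta / (delta + gamma) * rs_mj_m2_day / lam

    # A warm, sunny day yields a few mm/day of reference ET.
    print(makkink_et_mm_day(rs_mj_m2_day=22.0, t_air_c=28.0))
    ```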

  11. Additivity in tree biomass components of Pyrenean oak (Quercus pyrenaica Willd.)

    Treesearch

    Joao P. Carvalho; Bernard R. Parresol

    2003-01-01

    In tree biomass estimations, it is important to consider the property of additivity, i.e., the total tree biomass should equal the sum of the components. This work presents functions that allow estimation of the stem and crown dry weight components of Pyrenean oak (Quercus pyrenaica Willd.) trees. A procedure that considers additivity of tree biomass...

  12. Valuing Groundwater Resources in Arid Watersheds under Climate Change: A Framework and Estimates for the Upper Rio Grande

    NASA Astrophysics Data System (ADS)

    Hurd, B. H.; Coonrod, J.

    2008-12-01

    Climate change is expected to alter surface hydrology throughout the arid Western United States, in most cases compressing the period of peak snowmelt and runoff, and in some cases, for example the Rio Grande, limiting total runoff. As such, climate change is widely expected to further stress arid watersheds, particularly in regions where trends in population growth, economic development and environmental regulation are current challenges. Strategies to adapt to such changes are evolving at various institutional levels, including conjunctive management of surface and ground waters. Groundwater resources remain one of the key components of water management strategies aimed at accommodating continued population growth and mitigating the potential for water supply disruptions under climate change. By developing a framework for valuing these resources and for valuing improvements in the information pertaining to their characteristics, this research can assist in prioritizing infrastructure and investment to change and enhance water resource management. The key objectives of this paper are to 1) develop a framework for estimating the value of groundwater resources and improved information, and 2) provide some preliminary estimates of this value and how it responds to plausible scenarios of climate change.

  13. Estimates of Dietary Sodium Consumption in Patients With Chronic Heart Failure.

    PubMed

    Colin-Ramirez, Eloisa; Arcand, JoAnne; Ezekowitz, Justin A

    2015-12-01

    Estimating dietary sodium intake is a key component of dietary assessment in the clinical setting of heart failure (HF), needed to effectively implement appropriate dietary interventions for sodium reduction and to monitor adherence to the dietary treatment. In a research setting, assessment of sodium intake is an essential part of the methodology used to evaluate outcomes after a dietary or behavioral intervention. Currently available sodium intake assessment methods include 24-hour urine collection, spot urine collections, multiple-day food records, food recalls, and food frequency questionnaires. However, these methods have inherent limitations that make assessment of sodium intake challenging, and the utility of traditional methods may be questionable for estimating sodium intake in patients with HF. Thus, questions remain about how best to assess dietary sodium intake in this patient population, and there is a need to identify a reliable method to assess and monitor sodium intake in the research and clinical settings of HF. This paper provides a comprehensive review of the current methods for sodium intake assessment, addresses the challenges to its accurate evaluation, and highlights the relevance of applying the highest-quality measurement methods in the research setting to minimize the risk of biased data. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes, in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated by image processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound on the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
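
    With the eye approximated as a sphere, a horizontal gaze angle can be read off from the midpupil's offset along the inter-corner axis relative to the sphere's radius. The sketch below is a simplified geometric reading of the model described; the function name, pixel coordinates, and radius are hypothetical, and the paper's exact formulation may differ.

    ```python
    import numpy as np

    def gaze_angle_deg(corner_left, corner_right, midpupil, eye_radius):
        # Eye-sphere center approximated by the midpoint of the two corners;
        # the pupil's offset along the inter-corner axis maps to a rotation angle.
        center = (np.asarray(corner_left) + np.asarray(corner_right)) / 2
        axis = np.asarray(corner_right) - np.asarray(corner_left)
        axis = axis / np.linalg.norm(axis)
        offset = np.dot(np.asarray(midpupil) - center, axis)
        return np.degrees(np.arcsin(np.clip(offset / eye_radius, -1, 1)))

    # Illustrative pixel coordinates and radius (e.g., from anthropometric data).
    print(gaze_angle_deg((100, 120), (140, 120), (125, 119), eye_radius=24))
    ```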

  15. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally applied at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce the computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
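
    For the beef cattle application with k = 8 traits, the parameter saving is concrete: retaining, say, m = 3 genetic principal components (an illustrative choice) reduces the count from

    \[ \frac{k(k+1)}{2} = 36 \quad \text{to} \quad \frac{m(2k - m + 1)}{2} = \frac{3(16 - 3 + 1)}{2} = 21. \]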

  16. A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.

    PubMed

    Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping

    2017-01-30

    Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State-of-the-art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations for samples. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach which we call unambiguous components. We use this to estimate the image component with a constrained variability that is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as the sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single-trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
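
    One plausible rendering of the signal model described, in which each component has a fixed waveshape but trial-varying amplitude and latency (the notation is ours, not necessarily the paper's): for trial r,

    \[ x_r(t) = \sum_{m=1}^{M} \alpha_{rm}\, s_m(t - \tau_{rm}) + \eta_r(t), \]

    where s_m is the stereotypic waveshape of component m, α_{rm} and τ_{rm} are its single-trial amplitude and onset latency, and η_r is ongoing activity treated as noise.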

  18. Apparatus and process for the refrigeration, liquefaction and separation of gases with varying levels of purity

    DOEpatents

    Bingham, Dennis N.; Wilding, Bruce M.; McKellar, Michael G.

    2002-01-01

    A process for the separation and liquefaction of component gases from a pressurized mixed gas stream is disclosed. The process involves cooling the pressurized mixed gas stream in a heat exchanger so as to condense one or more of the gas components having the highest condensation point; separating the condensed components from the remaining mixed gas stream in a gas-liquid separator; cooling the separated condensed component stream by passing it through an expander; and passing the cooled component stream back through the heat exchanger such that the cooled component stream functions as the refrigerant for the heat exchanger. The cycle is then repeated for the remaining mixed gas stream so as to draw off the next component gas and further cool the remaining mixed gas stream. The process continues until all of the component gases are separated from the desired gas stream. The final gas stream is then passed through a final heat exchanger and expander. The expander decreases the pressure on the gas stream, thereby cooling the stream and causing a portion of the gas stream to liquefy within a tank. The portion of the gas which is not liquefied is passed back through each of the heat exchangers, where it functions as a refrigerant.

  19. Apparatus and process for the refrigeration, liquefaction and separation of gases with varying levels of purity

    DOEpatents

    Bingham, Dennis N.; Wilding, Bruce M.; McKellar, Michael G.

    2000-01-01

    A process for the separation and liquefaction of component gases from a pressurized mixed gas stream is disclosed. The process involves cooling the pressurized mixed gas stream in a heat exchanger so as to condense one or more of the gas components having the highest condensation point; separating the condensed components from the remaining mixed gas stream in a gas-liquid separator; cooling the separated condensed component stream by passing it through an expander; and passing the cooled component stream back through the heat exchanger such that the cooled component stream functions as the refrigerant for the heat exchanger. The cycle is then repeated for the remaining mixed gas stream so as to draw off the next component gas and further cool the remaining mixed gas stream. The process continues until all of the component gases are separated from the desired gas stream. The final gas stream is then passed through a final heat exchanger and expander. The expander decreases the pressure on the gas stream, thereby cooling the stream and causing a portion of the gas stream to liquefy within a tank. The portion of the gas which is not liquefied is passed back through each of the heat exchangers, where it functions as a refrigerant.

  20. Trends in the prevalence of metabolic syndrome and its components in South Korea: Findings from the Korean National Health Insurance Service Database (2009–2013)

    PubMed Central

    Lee, Seung Eun; Han, Kyungdo; Kang, Yu Mi; Kim, Seon-Ok; Cho, Yun Kyung; Ko, Kyung Soo; Park, Joong-Yeol; Lee, Ki-Up

    2018-01-01

    Background The prevalence of metabolic syndrome has markedly increased worldwide. However, studies in the United States show that it has remained stable or slightly declined in recent years. Whether this applies to other countries is presently unclear. Objectives We examined the trends in the prevalence of metabolic syndrome and its components in Korea. Methods The prevalence of metabolic syndrome and its components was estimated in adults aged >30 years from the Korean National Health Insurance Service data from 2009 to 2013. The revised National Cholesterol Education Program criteria were used to define metabolic syndrome. Results Approximately 10 million individuals were analyzed annually. The age-adjusted prevalence of metabolic syndrome increased from 28.84% to 30.52%, and the increasing trend was more prominent in men. The prevalence of hypertriglyceridemia, low HDL-cholesterol, and impaired fasting plasma glucose significantly increased. However, the prevalence of hypertension decreased in both genders. The prevalence of abdominal obesity decreased in women over 50 years of age but significantly increased in younger women and in men (<50 years). Conclusions The prevalence of metabolic syndrome is still increasing in Korea. Trends in each component of metabolic syndrome are disparate according to gender and age group. Notably, abdominal obesity among young adults increased significantly; thus, interventional strategies should be implemented particularly for this age group. PMID:29566051

  1. Estimation of daily stream flow of southeastern coastal plain watersheds by combining estimated magnitude and sequence

    Treesearch

    Herbert Ssegane; Devendra M. Amatya; E.W. Tollner; Zhaohua Dai; Jami E. Nettles

    2013-01-01

    Commonly used methods to predict streamflow at ungauged watersheds implicitly predict streamflow magnitude and temporal sequence concurrently. An alternative approach that has not been fully explored is the conceptualization of streamflow as a composite of two separable components of magnitude and sequence, where each component is estimated separately and then combined...

  2. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
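
    The expected number of replacements over an overhaul horizon follows from renewal theory; when the Weibull renewal function is inconvenient analytically, simulation gives it directly. A minimal sketch with hypothetical shape and scale parameters, not values from the paper's transmission example:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    shape, scale = 2.5, 4000.0       # two-parameter Weibull life (hours), illustrative
    horizon, n_sim = 10_000.0, 20_000

    # Simulate renewal processes: each failure is immediately replaced by a new
    # component with an independent Weibull life; count failures before the horizon.
    renewals = np.zeros(n_sim, dtype=int)
    for i in range(n_sim):
        t = rng.weibull(shape) * scale
        while t < horizon:
            renewals[i] += 1
            t += rng.weibull(shape) * scale
    print(f"expected replacements per slot over {horizon:.0f} h: {renewals.mean():.2f}")
    ```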

  3. Comparing different methods for determining forest evapotranspiration and its components at multiple temporal scales.

    PubMed

    Tie, Qiang; Hu, Hongchang; Tian, Fuqiang; Holbrook, N Michele

    2018-08-15

    Accurately estimating forest evapotranspiration and its components is of great importance for hydrology, ecology, and meteorology. In this study, a comparison of methods for determining forest evapotranspiration and its components at annual, monthly, daily, and diurnal scales was conducted based on in situ measurements in a subhumid mountainous forest of North China. The goal of the study was to evaluate the accuracies and reliabilities of the different methods. The results indicate the following: (1) The sap flow upscaling procedure, taking into account diversity in forest types and tree species, produced component-based forest evapotranspiration estimates that agreed with eddy covariance-based estimates at the annual, monthly, and daily scales, while soil water budget-based forest evapotranspiration estimates were also qualitatively consistent with eddy covariance-based estimates at the daily scale; (2) At the annual scale, the catchment water balance-based forest evapotranspiration estimate was significantly higher than the eddy covariance-based estimate, probably as a result of non-negligible subsurface runoff caused by the widely distributed regolith and fractured bedrock under the ground; (3) At the sub-daily scale, the diurnal course of the sap flow-based canopy transpiration estimate lagged significantly behind the eddy covariance-based forest evapotranspiration estimate, which might physiologically be due to stem water storage and stem hydraulic conductivity. These results may serve as a useful reference for forest evapotranspiration estimation and method evaluation in regions with similar environmental conditions. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. High-resolution estimates of Nubia-Somalia plate motion since 20 Ma from reconstructions of the Southwest Indian Ridge, Red Sea and Gulf of Aden

    NASA Astrophysics Data System (ADS)

    DeMets, C.; Merkouriev, S.

    2016-10-01

    Large gaps and inconsistencies remain in published estimates of Nubia-Somalia plate motion based on reconstructions of seafloor spreading data around Africa. Herein, we use newly available reconstructions of the Southwest Indian Ridge at ~1-Myr intervals since 20 Ma to estimate Nubia-Somalia plate motion farther back in time than previously achieved and with an unprecedented degree of temporal resolution. At the northern end of the East African rift, our new estimates of Nubia-Somalia motion for six times from 0.78 Ma to 5.2 Ma differ by only 2 per cent from the rift-normal component of motion that is extrapolated from a recently estimated GPS angular velocity. The rate of rift-normal extension thus appears to have remained steady since at least 5.2 Ma. Our new rotations indicate that the two plates have moved relative to each other since at least 16 Ma and possibly longer. Motion has either been steady since at least 16 Ma or accelerated modestly between 6 and 5.2 Ma. Our Nubia-Somalia rotations predict 42.5 ± 3.8 km of rift-normal extension since 10.6 Ma across the well-studied, northern segment of the Main Ethiopian Rift, consistent with 40-50 km estimates for extension since 10.6 Myr based on seismological surveys of this narrow part of the plate boundary. Nubia-Somalia rotations are also derived by combining newly estimated Somalia-Arabia rotations that reconstruct the post-20-Ma opening of the Gulf of Aden with Nubia-Arabia rotations estimated via a probabilistic analysis of plausible opening scenarios for the Red Sea. These rotations predict Nubia-Somalia motion since 5.2 Myr that is consistent with that determined from Southwest Indian Ridge data and also predict 40 ± 3 km of rift-normal extension since 10.6 Ma across the Main Ethiopian Rift, consistent with our 42.5 ± 3.8 km Southwest Indian Ridge estimate. Our new rotations exclude at high confidence level previous estimates of 12 ± 13 and 123 ± 14 km for rift-normal extensions across the Main Ethiopian Rift since 10.6 Ma based on reconstructions of Chron 5n.2 along the Southwest Indian Ridge. Sparse coverage of magnetic reversals older than 16 Ma along the western third of the Southwest Indian Ridge precludes reliable determinations of Nubia-Somalia plate motion before 16 Ma, leaving unanswered the key question of when the motion between the two plates began.

  5. Global seasonal strain and stress models derived from GRACE loading, and their impact on seismicity

    NASA Astrophysics Data System (ADS)

    Chanard, K.; Fleitout, L.; Calais, E.; Craig, T. J.; Rebischung, P.; Avouac, J. P.

    2017-12-01

    Loading by continental water, the atmosphere and the oceans deforms the Earth at various spatio-temporal scales, inducing crustal and mantle stress perturbations that may play a role in earthquake triggering. Deformation of the Earth by this surface loading is observed in GNSS position time series. While various models predict the vertical observations well, explaining horizontal displacements remains challenging. We model the elastic deformation induced by loading derived from GRACE for degree-2 and higher coefficients. We estimate the degree-1 deformation field by comparing predictions of our model with IGS-repro2 solutions at a globally distributed network of 700 GNSS sites, treating the horizontal and vertical components separately to avoid biases between components. The misfit between model and data is reduced compared to previous studies, particularly on the horizontal component. The associated geocenter motion time series are consistent with results derived from other datasets. We also discuss the impact on our results of systematic errors in GNSS geodetic products, in particular of the draconitic error. We then compute stress tensor time series induced by GRACE loads and discuss the potential link between large-scale seasonal mass redistributions and seismicity. Within the crust, we estimate hydrologically induced stresses in the intraplate New Madrid Seismic Zone, where secular stressing rates are unmeasurably low. We show that a significant variation in the rate of micro-earthquakes at annual and multi-annual timescales coincides with stresses induced by hydrological loading in the upper Mississippi embayment, with no significant phase lag, directly modulating regional seismicity. We also investigate pressure variations in the mantle transition zone and discuss potential correlations between the statistically significant observed seasonality of deep-focus earthquakes, most likely due to mineralogical transformations, and surface hydrological loading.

  6. Gender, position of authority, and the risk of depression and post-traumatic stress disorder among a national sample of U.S. Reserve Component Personnel

    PubMed Central

    Cohen, Gregory H.; Sampson, Laura A.; Fink, David S.; Wang, Jing; Russell, Dale; Gifford, Robert; Fullerton, Carol; Ursano, Robert; Galea, Sandro

    2016-01-01

    BACKGROUND Recent United States military operations in Iraq and Afghanistan have seen dramatic increases in the proportion of women serving, and the breadth of their occupational roles. General population studies suggest that women, compared to men, and persons with lower, as compared to higher, social position may be at greater risk of post-traumatic stress disorder (PTSD) and depression. However, these relations remain unclear in military populations. Accordingly, we aimed to estimate the effects of (1) gender, (2) military authority (i.e., rank) and (3) the interaction of gender and military authority upon: (a) risk of most-recent-deployment-related PTSD, and (b) risk of depression since most-recent-deployment. METHODS Using a nationally representative sample of 1024 previously deployed Reserve Component personnel surveyed in 2010, we constructed multivariable logistic regression models to estimate effects of interest. RESULTS Weighted multivariable logistic regression models demonstrated no statistically significant associations between gender or authority, and either PTSD or depression. Interaction models demonstrated multiplicative statistical interaction between gender and authority for PTSD (beta = −2.37; p = 0.01) and depression (beta = −1.21; p = 0.057). Predicted probabilities of PTSD and depression, respectively, were lowest in male officers (0.06, 0.09), followed by male enlisted (0.07, 0.14), female enlisted (0.07, 0.15), and female officers (0.30, 0.25). CONCLUSIONS Female officers in the Reserve Component may be at greatest risk for PTSD and depression following deployment, relative to their male and enlisted counterparts, and this relation is not explained by deployment trauma exposure. Future studies may fruitfully examine whether social support, family responsibilities peri-deployment, or contradictory class status may explain these findings. PMID:26899583
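
    The modelling strategy described in METHODS can be sketched in a few lines. The sketch below fits an (unweighted, for simplicity) logistic regression with a gender-by-authority interaction on simulated data and reports the four cell probabilities; variable names and effect sizes are illustrative, not the study's.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1024
    df = pd.DataFrame({"female": rng.integers(0, 2, n),
                       "officer": rng.integers(0, 2, n)})

    # Simulate outcomes with a positive interaction for female officers
    logit = -2.5 + 0.1 * df.female + 0.2 * df.officer + 1.5 * df.female * df.officer
    df["ptsd"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = smf.logit("ptsd ~ female * officer", data=df).fit(disp=False)
    print(model.params)

    # Predicted probabilities for the four gender-by-authority cells
    cells = pd.DataFrame({"female": [0, 0, 1, 1], "officer": [0, 1, 0, 1]})
    print(cells.assign(p=model.predict(cells)))
    ```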

  7. Accuracy enhancement of a multivariate calibration for lead determination in soils by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Zaytsev, Sergey M.; Krylov, Ivan N.; Popov, Andrey M.; Zorov, Nikita B.; Labutin, Timur A.

    2018-02-01

    We have investigated matrix effects and spectral interferences using the example of lead determination in different types of soils by laser induced breakdown spectroscopy (LIBS). A comparison of the analytical performance of univariate and multivariate calibrations using different laser wavelengths for ablation (532, 355 and 266 nm) is reported. A set of 17 soil samples (Ca-rich, Fe-rich, lean soils etc., 8.5-280 ppm of Pb) was used to construct the calibration models. Spectral interferences from main components (Ca, Fe, Ti, Mg) and trace components (Mn, Nb, Zr) were estimated by spectra modeling, and they account for the significant differences between the univariate calibration models obtained separately for three different soil types (black, red, gray). Use of the third harmonic of the Nd:YAG laser in combination with a multivariate calibration model based on PCR with 3 principal components provided the best analytical results: the RMSEC was lowered to 8 ppm. A substantial improvement in the relative uncertainty (to 5-10%) in comparison with univariate calibration was observed at Pb concentrations > 50 ppm, while the problem of accuracy still remains for some samples with Pb concentrations at the 20 ppm level. We have also discussed a few possible ways to estimate the LOD without a blank sample. The most rigorous criterion resulted in an LOD of Pb in soils of 13 ppm. Finally, a good agreement between the values of lead content predicted by LIBS (46 ± 5 ppm) and XRF (42.1 ± 3.3 ppm) in an unknown soil sample from the Lomonosov Moscow State University area was demonstrated.
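
    A minimal sketch of the principal component regression (PCR) calibration named above, assuming spectra as rows of X and reference Pb concentrations in y; the synthetic data below stand in for real LIBS spectra.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(17, 500))        # 17 soil spectra (synthetic stand-in)
    y = rng.uniform(8.5, 280.0, size=17)  # reference Pb content, ppm

    # PCR = standardize, project onto 3 principal components, regress on scores
    pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
    pcr.fit(X, y)
    rmsec = np.sqrt(np.mean((pcr.predict(X) - y) ** 2))
    print(f"RMSEC on the calibration set: {rmsec:.1f} ppm")
    ```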

  8. Estimated long-term outdoor air pollution concentrations in a cohort study

    NASA Astrophysics Data System (ADS)

    Beelen, Rob; Hoek, Gerard; Fischer, Paul; Brandt, Piet A. van den; Brunekreef, Bert

    Several recent studies associated long-term exposure to air pollution with increased mortality. An ongoing cohort study, the Netherlands Cohort Study on Diet and Cancer (NLCS), was used to study the association between long-term exposure to traffic-related air pollution and mortality. Following a previous exposure assessment study in the NLCS, we improved the exposure assessment methods. Long-term exposure to nitrogen dioxide (NO2), nitrogen oxide (NO), black smoke (BS), and sulphur dioxide (SO2) was estimated. Exposure at each home address (N = 21 868) was considered as a function of a regional, an urban and a local component. The regional component was estimated using inverse distance weighted interpolation of measurement data from regional background sites in a national monitoring network. Regression models with urban concentrations as dependent variables, and number of inhabitants in different buffers and land use variables, derived with a Geographic Information System (GIS), as predictor variables were used to estimate the urban component. The local component was assessed using a GIS and a digital road network with linked traffic intensities. Traffic intensity on the nearest road and on the nearest major road, and the sum of traffic intensity in a buffer of 100 m around each home address were assessed. Further, a quantitative estimate of the local component was derived. The regression models to estimate the urban component explained 67%, 46%, 49% and 35% of the variances of NO2, NO, BS, and SO2 concentrations, respectively. Overall regression models which incorporated the regional, urban and local component explained 84%, 44%, 59% and 56% of the variability in concentrations for NO2, NO, BS and SO2, respectively. We were able to develop an exposure assessment model using GIS methods and traffic intensities that explained a large part of the variations in outdoor air pollution concentrations.
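
    A minimal sketch, assuming simple inverse-distance weighting, of how the regional background component might be interpolated from monitoring sites to a home address; the coordinates and concentrations below are made up.

    ```python
    import numpy as np

    def idw(x0, y0, xs, ys, values, power=2.0):
        """Inverse-distance-weighted estimate at (x0, y0) from monitoring sites."""
        d = np.hypot(xs - x0, ys - y0)
        if np.any(d == 0):                 # point coincides with a site
            return values[np.argmin(d)]
        w = 1.0 / d ** power
        return np.sum(w * values) / np.sum(w)

    # Three regional background sites measuring NO2 (ug/m3), one home address
    xs = np.array([0.0, 10.0, 5.0])
    ys = np.array([0.0, 0.0, 8.0])
    no2 = np.array([22.0, 18.0, 25.0])
    print(f"regional NO2 at home address: {idw(3.0, 2.0, xs, ys, no2):.1f} ug/m3")
    ```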

  9. Spatio-Temporal Field Estimation Using Kriged Kalman Filter (KKF) with Sparsity-Enforcing Sensor Placement.

    PubMed

    Roy, Venkat; Simonetto, Andrea; Leus, Geert

    2018-06-01

    We propose a sensor placement method for spatio-temporal field estimation based on a kriged Kalman filter (KKF) using a network of static or mobile sensors. The developed framework dynamically designs the optimal constellation to place the sensors. We combine the estimation error (for the stationary as well as non-stationary component of the field) minimization problem with a sparsity-enforcing penalty to design the optimal sensor constellation in an economic manner. The developed sensor placement method can be directly used for a general class of covariance matrices (ill-conditioned or well-conditioned) modelling the spatial variability of the stationary component of the field, which acts as a correlated observation noise, while estimating the non-stationary component of the field. Finally, a KKF estimator is used to estimate the field using the measurements from the selected sensing locations. Numerical results are provided to exhibit the feasibility of the proposed dynamic sensor placement followed by the KKF estimation method.

  10. A modified cluster-sampling method for post-disaster rapid assessment of needs.

    PubMed Central

    Malilay, J.; Flanders, W. D.; Brogan, D.

    1996-01-01

    The cluster-sampling method can be used to conduct rapid assessment of health and other needs in communities affected by natural disasters. It is modelled on WHO's Expanded Programme on Immunization method of estimating immunization coverage, but has been modified to provide (1) estimates of the population remaining in an area, and (2) estimates of the number of people in the post-disaster area with specific needs. This approach differs from that used previously in other disasters where rapid needs assessments only estimated the proportion of the population with specific needs. We propose a modified n x k survey design to estimate the remaining population, severity of damage, the proportion and number of people with specific needs, the number of damaged or destroyed and remaining housing units, and the changes in these estimates over a period of time as part of the survey. PMID:8823962

  11. A unified framework for group independent component analysis for multi-subject fMRI data

    PubMed Central

    Guo, Ying; Pagnoni, Giuseppe

    2008-01-01

    Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105
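
    The sketch below illustrates one common special case discussed above, spatial group ICA with temporal concatenation (the GIFT-style structure); the unified ML/EM framework proposed in the paper is more general than this, and the data are toy-sized.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    n_subjects, n_time, n_voxels, n_comp = 5, 100, 300, 4

    # Shared non-Gaussian spatial maps mixed by subject-specific time courses
    S = rng.laplace(size=(n_comp, n_voxels))
    data = [rng.normal(size=(n_time, n_comp)) @ S for _ in range(n_subjects)]

    X = np.vstack(data)                        # concatenate subjects over time
    ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
    spatial_maps = ica.fit_transform(X.T).T    # independence across voxels
    time_courses = ica.mixing_                 # (subjects*time, components)
    print(spatial_maps.shape, time_courses.shape)
    ```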

  12. Quantifying Sources and Fluxes of Aquatic Carbon in U.S. Streams and Reservoirs Using Spatially Referenced Regression Models

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Smith, R. A.; Alexander, R. B.; Schwarz, G. E.

    2004-12-01

    Organic carbon (OC) is a critical water quality characteristic in riverine systems that is an important component of the aquatic carbon cycle and energy balance. Examples of processes controlled by OC interactions are complexation of trace metals; enhancement of the solubility of hydrophobic organic contaminants; formation of trihalomethanes in drinking water; and absorption of visible and UV radiation. Organic carbon also can have indirect effects on water quality by influencing internal processes of aquatic ecosystems (e.g. photosynthesis and autotrophic and heterotrophic activity). The importance of organic matter dynamics on water quality has been recognized, but challenges remain in quantitatively addressing OC processes over broad spatial scales in a hydrological context. In this study, we apply spatially referenced watershed models (SPARROW) to statistically estimate long-term mean-annual rates of dissolved- and total- organic carbon export in streams and reservoirs across the conterminous United States. We make use of a GIS framework for the analysis, describing sources, transport, and transformations of organic matter from spatial databases providing characterizations of climate, land use, primary productivity, topography, soils, and geology. This approach is useful because it illustrates spatial patterns of organic carbon fluxes in streamflow, highlighting hot spots (e.g., organic-rich environments in the southeastern coastal plain). Further, our simulations provide estimates of the relative contributions to streams from allochthonous and autochthonous sources. We quantify surface water fluxes of OC with estimates of uncertainty in relation to the overall US carbon budget; our simulations highlight that aquatic sources and sinks of OC may be a more significant component of regional carbon cycling than was previously thought. Further, we are using our simulations to explore the potential role of climate and other changes in the terrestrial environment on OC fluxes in aquatic systems.

  13. Radioactive isotope analyses of skeletal materials in forensic science: a review of uses and potential uses.

    PubMed

    Cook, Gordon T; MacKenzie, Angus B

    2014-07-01

    A review of information that can be provided from measurements made on natural and anthropogenic radionuclide activities in human skeletal remains has been undertaken to establish what reliable information of forensic anthropological use can be obtained regarding years of birth and death (and hence post-mortem interval (PMI)). Of the anthropogenic radionuclides that have entered the environment, radiocarbon ((14)C) can currently be used to generate the most useful and reliable information. Measurements on single bones can indicate whether or not the person died during the nuclear era, while recent research suggests that measurements on trabecular bone may, depending on the chronological age of the remains, provide estimates of year of death and hence PMI. Additionally, (14)C measurements made on different components of single teeth or on teeth formed at different times can provide estimates of year of birth to within 1-2 years of the true year. Of the other anthropogenic radionuclides, (90)Sr shows some promise but there are problems of (1) variations in activities between individuals, (2) relatively large analytical uncertainties and (3) diagenetic contamination. With respect to natural series radionuclides, it is concluded that there is no convincing evidence that (210)Pb dating can be used in a rigorous, quantitative fashion to establish a PMI. Similarly, for daughter/parent pairs such as (210)Po/(210)Pb (from the (238)U decay series) and (228)Th/(228)Ra (from the (232)Th decay series), the combination of analytical uncertainty and uncertainty in activity ratios at the point of death inevitably results in major uncertainty in any estimate of PMI. However, observation of the disequilibrium between these two daughter/parent pairs could potentially be used in a qualitative way to support other forensic evidence.

  14. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
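
    A minimal sketch, assuming a cumulative-Gaussian psychometric function, of how bias (mean) and precision (spread) can be read off left/right heading judgments; the trial data below are simulated, not the subjects'.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(heading_deg, bias, sigma):
        """P(respond 'right') as a function of simulated heading angle."""
        return norm.cdf(heading_deg, loc=bias, scale=sigma)

    headings = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)
    rng = np.random.default_rng(3)
    true_bias, true_sigma, n_trials = 1.5, 2.0, 40
    k_right = rng.binomial(n_trials, psychometric(headings, true_bias, true_sigma))

    (bias, sigma), _ = curve_fit(psychometric, headings, k_right / n_trials,
                                 p0=(0.0, 1.0))
    # For a Gaussian, the semi-interquartile range is 0.6745 * sigma
    print(f"bias = {bias:.2f} deg, precision (sigma) = {sigma:.2f} deg")
    ```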

  15. Integrated population modeling reveals the impact of climate on the survival of juvenile emperor penguins.

    PubMed

    Abadi, Fitsum; Barbraud, Christophe; Gimenez, Olivier

    2017-03-01

    Early-life demographic traits are poorly known, impeding our understanding of population processes and sensitivity to climate change. Survival of immature individuals is a critical component of population dynamics and recruitment in particular. However, obtaining reliable estimates of juvenile survival (i.e., from independence to first year) remains challenging, as immatures are often difficult to observe and to monitor individually in the field. This is particularly acute for seabirds, in which juveniles stay at sea and remain undetectable for several years. In this work, we developed a Bayesian integrated population model to estimate the juvenile survival of emperor penguins (Aptenodytes forsteri), and other demographic parameters including adult survival and fecundity of the species. Using this statistical method, we simultaneously analyzed capture-recapture data of adults, the annual number of breeding females, and the number of fledglings of emperor penguins collected at Dumont d'Urville, Antarctica, for the period 1971-1998. We also assessed how climate covariates known to affect the species' foraging habitats and prey [southern annular mode (SAM), sea ice concentration (SIC)] affect juvenile survival. Our analyses revealed strong evidence for a positive effect of SAM during the rearing period (SAMR) on juvenile survival. Our findings suggest that this large-scale climate index affects juvenile emperor penguins' body condition and survival through its influence on wind patterns, fast ice extent, and distance to open water. Estimating the influence of environmental covariates on juvenile survival is of major importance to understand the impacts of climate variability and change on the population dynamics of emperor penguins and seabirds in general and to make robust predictions on the impact of climate change on marine predators. © 2016 John Wiley & Sons Ltd.

  16. Common genetic variation and novel loci associated with volumetric mammographic density.

    PubMed

    Brand, Judith S; Humphreys, Keith; Li, Jingmei; Karlsson, Robert; Hall, Per; Czene, Kamila

    2018-04-17

    Mammographic density (MD) is a strong and heritable intermediate phenotype of breast cancer, but much of its genetic variation remains unexplained. We conducted a genetic association study of volumetric MD in a Swedish mammography screening cohort (n = 9498) to identify novel MD loci. Associations with volumetric MD phenotypes (percent dense volume, absolute dense volume, and absolute nondense volume) were estimated using linear regression adjusting for age, body mass index, menopausal status, and six principal components. We also estimated the proportion of MD variance explained by additive contributions from single-nucleotide polymorphisms (SNP-based heritability [h2SNP]) in 4948 participants of the cohort. In total, three novel MD loci were identified (at P < 5 × 10−8): one for percent dense volume (HABP2) and two for the absolute dense volume (INHBB, LINC01483). INHBB is an established locus for ER-negative breast cancer, and HABP2 and LINC01483 represent putative new breast cancer susceptibility loci, because both loci were associated with breast cancer in available meta-analysis data including 122,977 breast cancer cases and 105,974 control subjects (P < 0.05). h2SNP (SE) estimates for percent dense, absolute dense, and nondense volume were 0.29 (0.07), 0.31 (0.07), and 0.25 (0.07), respectively. Corresponding ratios of h2SNP to previously observed narrow-sense h2 estimates in the same cohort were 0.46, 0.72, and 0.41, respectively. These findings provide new insights into the genetic basis of MD and biological mechanisms linking MD to breast cancer risk. Apart from identifying three novel loci, we demonstrate that at least 25% of the MD variance is explained by common genetic variation, with h2SNP/h2 ratios varying between dense and nondense MD components.

  17. Comparison of several satellite-derived Sun-Induced Fluorescence products

    NASA Astrophysics Data System (ADS)

    Bacour, C.; Maignan, F.; MacBean, N.; Köhler, P.; Vountas, M.; Khosravi, N.; Guanter, L.; Joiner, J.; Frankenberg, C.; Somkuti, P.; Peylin, P.

    2017-12-01

    Large uncertainties remain in our representation of the global carbon budget, in particular regarding the spatial and temporal dynamics of the net land surface CO2 fluxes along with their two constitutive components, photosynthesis and respiration. Bolstered by the demonstrated linear relationship between remotely sensed sun-induced fluorescence (SIF) and plant gross carbon uptake (GPP - gross primary productivity) at broad spatial and temporal scales, satellite SIF products are foreseen to provide a significant constraint on one of the key components of the terrestrial carbon cycle, and ultimately to help reduce the uncertainties in the projections of the fate of carbon sinks and sources under a changing climate. Global SIF estimates are now "routinely" produced from observations of space-borne spectrometers having sufficient spectral resolution/sampling in solar Fraunhofer lines or atmospheric absorption bands in the visible - near-infrared domain. Differences between SIF products derived from different instruments are expected depending on the evaluated wavelengths (SIF has a spectral signature with maxima around 685 and 740 nm) and their own observation characteristics (time of satellite overpass, spatial resolution, revisit frequency, spectral resolution, etc.). For instance, SIF products estimated at 760 nm (GOSAT, OCO-2) are about 1.5 times lower than estimates at 740 nm (GOME-2, SCIAMACHY). However, as highlighted by Köhler et al. (2015), strong discrepancies in SIF absolute values may arise for products derived from the same set of observations (GOME-2) but using different estimation algorithms. In view of using satellite SIF products to calibrate terrestrial biosphere models (e.g. through data assimilation), this is highly problematic, especially for evergreen ecosystems where SIF magnitude is the only observational constraint that can be made use of. In this study, we compare several gridded satellite SIF products and quantify their similarities/discrepancies with respect to both their absolute value and seasonality (plant phenology): GOME-2, OCO-2, GOSAT, and SCIAMACHY. Our main objective is to assess the potential impacts of their differences in a data assimilation perspective.

  18. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images

    PubMed Central

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He

    2018-01-01

    Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey’s formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
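
    The sketch below contrasts a one-component ellipsoid with a two-component "cone plus half ellipsoid" volume; this is one plausible reading of the ice cream cone idea, and the exact formula (and Linskey's) should be taken from the paper. Dimensions are toy values in cm.

    ```python
    import math

    def cuboidal(a, b, c):
        return a * b * c                   # one-component cuboid, a*b*c

    def ellipsoidal(a, b, c):
        return math.pi / 6 * a * b * c     # one-component ellipsoid, pi/6*a*b*c

    def ice_cream_cone(canal_d, canal_len, cist_a, cist_b, cist_c):
        """Assumed two-component form: intracanalicular cone + half ellipsoid."""
        cone = math.pi / 12 * canal_d ** 2 * canal_len   # (1/3)*pi*r^2*h
        half_ellipsoid = math.pi / 12 * cist_a * cist_b * cist_c
        return cone + half_ellipsoid

    # Toy diameters/lengths in cm; volumes in mL (1 cm^3 = 1 mL)
    print(f"cuboidal:       {cuboidal(2.0, 1.8, 1.6):.2f} mL")
    print(f"ellipsoidal:    {ellipsoidal(2.0, 1.8, 1.6):.2f} mL")
    print(f"ice cream cone: {ice_cream_cone(0.6, 1.0, 2.0, 1.8, 1.6):.2f} mL")
    ```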

  19. Toward Robust Estimation of the Components of Forest Population Change

    Treesearch

    Francis A. Roesch

    2014-01-01

    Multiple levels of simulation are used to test the robustness of estimators of the components of change. I first created a variety of spatial-temporal populations based on, but more variable than, an actual forest monitoring data set and then sampled those populations under a variety of sampling error structures. The performance of each of four estimation approaches is...

  20. Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.

    PubMed

    Manceau, Alain; Marcus, Matthew; Lenoir, Thomas

    2014-09-01

    Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which includes the experimental error to XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A graphical user-friendly interface for the calculation of EVs, IND, F-test and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
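
    A minimal sketch of the traditional eigenvalue-based counting the paper benchmarks against, using Malinowski's indicator function (IND); the NSS-stat estimator itself is defined in the paper, and the synthetic mixtures below are toy-sized.

    ```python
    import numpy as np

    def indicator_function(X):
        """Malinowski IND(n), n = 1..c-1, for an r x c data matrix (r >= c)."""
        r, c = X.shape
        ev = np.linalg.eigvalsh(X.T @ X)[::-1]          # eigenvalues, descending
        ind = []
        for n in range(1, c):
            re = np.sqrt(ev[n:].sum() / (r * (c - n)))  # real error for n factors
            ind.append(re / (c - n) ** 2)
        return np.array(ind)

    rng = np.random.default_rng(4)
    pure = rng.normal(size=(3, 20))                 # 3 components, 20 channels
    weights = rng.dirichlet(np.ones(3), size=20)    # 20 mixture spectra
    X = weights @ pure + 0.01 * rng.normal(size=(20, 20))

    ind = indicator_function(X)                     # minimum marks the estimate
    print("estimated number of components:", int(np.argmin(ind)) + 1)
    ```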

  1. Investigating source processes of isotropic events

    NASA Astrophysics Data System (ADS)

    Chiang, Andrea

    This dissertation demonstrates the utility of the complete waveform regional moment tensor inversion for nuclear event discrimination. I explore the source processes and associated uncertainties for explosions and earthquakes under the effects of limited station coverage, compound seismic sources, assumptions in velocity models and the corresponding Green's functions, and the effects of shallow source depth and free-surface conditions. The motivation to develop better techniques to obtain reliable source mechanisms and assess uncertainties is not limited to nuclear monitoring; these techniques also provide quantitative information about the characteristics of seismic hazards, local and regional tectonics, and in-situ stress fields of the region. This dissertation begins with the analysis of three sparsely recorded events: the 14 September 1988 US-Soviet Joint Verification Experiment (JVE) nuclear test at the Semipalatinsk test site in Eastern Kazakhstan, and two nuclear explosions at the Chinese Lop Nor test site. We utilize a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long period waveforms and first motion observations provides unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We examine the effects of the free surface on the moment tensor via synthetic testing, and apply the moment tensor based discrimination method to well-recorded chemical explosions. These shallow chemical explosions represent rather severe source-station geometry in terms of the vanishing traction issues. We show that the combined waveform and first motion method enables the unique discrimination of these events, even though the data include unmodeled single force components resulting from the collapse and blowout of the quarry face immediately following the initial explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996).
The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff, but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources, and improve the fit to the data. When we apply the 3D model to real data, at long periods (20-50 seconds), we see good agreement in the solutions between the 1D and 3D models and slight improvement in waveform fits when using the 3D velocity model Green's functions. (Abstract shortened by ProQuest.)

  2. A New Estimate for Total Offset on the Southern San Andreas Fault: Implications for Cumulative Plate Boundary Shear in the Northern Gulf of California

    NASA Astrophysics Data System (ADS)

    Darin, M. H.; Dorsey, R. J.

    2012-12-01

    Development of a consistent and balanced tectonic reconstruction for the late Cenozoic San Andreas fault (SAF) in southern California has been hindered for decades by incompatible estimates of total dextral offset based on different geologic cross-fault markers. The older estimate of 240-270 km is based on offset fluvial conglomerates of the middle Miocene Mint Canyon and Caliente Formations west of the SAF from their presumed source area in the northern Chocolate Mountains NE of the SAF (Ehlig et al., 1975; Ehlert, 2003). The second widely cited offset marker is a distinctive Triassic megaporphyritic monzogranite that has been offset 160 ± 10 km between Liebre Mountain west of the SAF and the San Bernardino Mountains (Matti and Morton, 1993). In this analysis we use existing paleocurrent data and late Miocene clockwise rotation in the eastern Transverse Ranges (ETR) to re-assess the orientation of the piercing line used in the 240-km correlation, and present a palinspastic reconstruction that satisfies all existing geologic constraints. Our reconstruction of the Mint Canyon piercing line reduces the original estimate of 240-270 km to 195 ± 15 km of cumulative right-lateral slip on the southern SAF (sensu stricto), which is consistent with other published estimates of 185 ± 20 km based on correlative basement terranes in the Salton Trough region. Our estimate of ~195 km is consistent with the lower estimate of ~160 km on the Mojave segment because transform-parallel extension along the southwestern boundary of the ETR during transrotation produces ~25-40 km of displacement that does not affect offset markers of the Liebre/San Bernardino correlation located northwest of the ETR rotating domain. Reconciliation of these disparate estimates places an important new constraint on the total plate boundary shear that is likely accommodated in the adjacent northern Gulf of California. Global plate circuit models require ~650 km of cumulative Pacific-North America (PAC-NAM) relative plate motion since ~12 Ma (Atwater and Stock, 1998). We propose that the continental component of PAC-NAM shear is accommodated by: (1) 195 ± 15 km on the southern SAF (this study); (2) 12 ± 2 km on the Whittier-Elsinore fault; (3) 75 ± 20 km of cumulative shear across the central Mojave in the eastern California shear zone; (4) 30 ± 4 km of post-13 Ma slip on the Stateline fault; and (5) 47 ± 18 km of NW-directed translation produced by north-south shortening. Together, these components sum to 359 ± 31 km of net dextral displacement on the SAF system (sensu lato) in southern California since ca. 12 Ma, or ~300 km less than what is required by the global plate circuit. This suggests that the continental component of post-12 Ma PAC-NAM transform motion can be no more than ~390 km in the adjacent northern Gulf of California, substantially less than the 450 km of shear proposed in some models. We suggest that the remaining ~270-300 km of NW-directed relative plate motion is accommodated by a small component of late Miocene extension and roughly 225 km of slip on the offshore borderland fault system west of Baja California.
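
    As a quick arithmetic check (the quadrature assumption is mine, but it reproduces the quoted total), the five components and their uncertainties combine as follows:

    ```python
    import math

    # (value, uncertainty) in km for the five components listed in the abstract
    components_km = [(195, 15), (12, 2), (75, 20), (30, 4), (47, 18)]
    total = sum(v for v, _ in components_km)
    sigma = math.sqrt(sum(s ** 2 for _, s in components_km))
    print(f"net dextral displacement: {total} +/- {sigma:.0f} km")  # 359 +/- 31
    ```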

  3. Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold

    2007-04-15

    This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of the radioactivity of the component gases.
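
    A minimal sketch of the simulation-based extension the abstract mentions: push uncertainty in (I) sample volume, (II) extraction efficiency, and (III) counting through the three-stage chain by Monte Carlo. All distributions and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    volume = rng.normal(1.00, 0.02, n)        # I:  sampled air volume (m^3)
    efficiency = rng.normal(0.85, 0.05, n)    # II: gas-separation yield
    counts = rng.poisson(500, n)              # III: detector counts (Poisson)

    # Propagated activity concentration (arbitrary units)
    activity = counts / (efficiency * volume)
    print(f"estimate = {activity.mean():.1f} +/- {activity.std():.1f}")
    ```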

  4. External validation of the Revised Cardiac Risk Index and update of its renal variable to predict 30-day risk of major cardiac complications after non-cardiac surgery: rationale and plan for analyses of the VISION study.

    PubMed

    Roshanov, Pavel S; Walsh, Michael; Devereaux, P J; MacNeil, S Danielle; Lam, Ngan N; Hildebrand, Ainslie M; Acedillo, Rey R; Mrkobrada, Marko; Chow, Clara K; Lee, Vincent W; Thabane, Lehana; Garg, Amit X

    2017-01-09

    The Revised Cardiac Risk Index (RCRI) is a popular classification system to estimate patients' risk of postoperative cardiac complications based on preoperative risk factors. Renal impairment, defined as serum creatinine >2.0 mg/dL (177 µmol/L), is a component of the RCRI. The estimated glomerular filtration rate has become accepted as a more accurate indicator of renal function. We will externally validate the RCRI in a modern cohort of patients undergoing non-cardiac surgery and update its renal component. The Vascular Events in Non-cardiac Surgery Patients Cohort Evaluation (VISION) study is an international prospective cohort study. In this prespecified secondary analysis of VISION, we will test the risk estimation performance of the RCRI in ∼34 000 participants who underwent elective non-cardiac surgery between 2007 and 2013 from 29 hospitals in 15 countries. Using data from the first 20 000 eligible participants (the derivation set), we will derive an optimal threshold for dichotomising preoperative renal function quantified using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) glomerular filtration rate estimating equation in a manner that preserves the original structure of the RCRI. We will also develop a continuous risk estimating equation integrating age and CKD-EPI with existing RCRI risk factors. In the remaining (approximately) 14 000 participants, we will compare the risk estimation for cardiac complications of the original RCRI to this modified version. Cardiac complications will include 30-day non-fatal myocardial infarction, non-fatal cardiac arrest and death due to cardiac causes. We have examined an early sample to estimate the number of events and the distribution of predictors and missing data, but have not seen the validation data at the time of writing. The research ethics board at each site approved the VISION protocol prior to recruitment. We will publish our results and make our models available online at http://www.perioperativerisk.com. ClinicalTrials.gov NCT00512109. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  5. Determination of power system component parameters using nonlinear dead beat estimation method

    NASA Astrophysics Data System (ADS)

    Kolluru, Lakshmi

    Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of the consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as they deal with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information of all measurements. Accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of the dead beat estimator and nonlinear finite difference methods to create an algorithm which is user friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are introduced in the virtual test systems and the dynamic data obtained in each case is analyzed and recorded. Ideally, actual measurements are to be provided to the algorithm. As the measurements are not readily available, the data obtained from the simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results obtained suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data is available.
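
    The core idea, replacing derivatives in the component's dynamic model with finite differences over sampled data and then solving for the unknown parameters, can be sketched on a toy model. The scalar equation below stands in for the synchronous-machine equations and is an illustrative assumption, not the thesis's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    a_true, b_true, dt = 2.0, 0.5, 0.01

    # Simulate "measured" dynamics of x' = -a*x + b*sin(x)
    x = np.empty(500)
    x[0] = 1.0
    for k in range(499):
        x[k + 1] = x[k] + dt * (-a_true * x[k] + b_true * np.sin(x[k]))

    dxdt = np.gradient(x, dt)              # finite-difference derivative
    A = np.column_stack([-x, np.sin(x)])   # regressors for the unknowns [a, b]
    a_est, b_est = np.linalg.lstsq(A, dxdt, rcond=None)[0]
    print(f"a = {a_est:.3f} (true 2.0), b = {b_est:.3f} (true 0.5)")
    ```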

  6. The influence of preburial insect access on the decomposition rate.

    PubMed

    Bachmann, Jutta; Simmons, Tal

    2010-07-01

    This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.001) in decomposition rates between carcasses with and without insect access prior to burial. An approximately 30% enhanced decomposition rate with insects was observed. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.

  7. Dynamical Geochemistry

    NASA Astrophysics Data System (ADS)

    Davies, G. F.

    2009-12-01

    Dynamical and chemical interpretations of the mantle have hitherto remained incompatible, despite substantial progress over recent years. It is argued that both the refractory incompatible elements and the noble gases can be reconciled with the dynamical mantle when mantle heterogeneity is more fully accounted for. It is argued that the incompatible-element content of the MORB source is about double recent estimates (U~10 ng/g) because enriched components have been systematically overlooked, for three main reasons. (1) In a heterogeneous MORB source, melts from enriched pods are not expected to equilibrate fully with the peridotite matrix, but recent estimates of MORB-source composition have been tied to residual (relatively infertile) peridotite composition. (2) About 25% of the MORB source comes from plumes, but plume-like components have tended to be excluded. (3) A focus on the most common “normal” MORBs, allegedly representing a “depleted” MORB source, has overlooked the less-common but significant enriched components of MORBs, of various possible origins. Geophysical constraints (seismological and topographic) exclude mantle layering except for the thin D” layer and the “superpiles” under Africa and the Pacific. Numerical models then indicate the MORB source comprises the rest of the mantle. Refractory-element mass balances can then be accommodated by a MORB source depleted by only a factor of 2 from chondritic abundances, rather than a factor of 4-7. A source for the hitherto-enigmatic unradiogenic helium in OIBs also emerges from this picture. Melt from subducted oceanic crust melting under MORs will react with surrounding peridotite to form intermediate compositions here termed hybrid pyroxenite. Only about half of the hybrid pyroxenite will be remelted, extracted and degassed at MORs, and the rest will recirculate within the mantle. Over successive generations starting early in Earth history, volatiles will come to reside mainly in the hybrid pyroxenite. This will be denser than average mantle and will tend to accumulate in D”, like subducted oceanic crust. Because residence times in D” are longer, it will degas more slowly. Thus plumes will tap a mixture of older, less-degassed hybrid pyroxenite, containing less-radiogenic noble gases, and degassed former oceanic crust. Calculations of degassing history confirm that this picture can quantitatively account for He, Ne and Ar in MORBs and OIBs. Geophysically-based dynamical models have been shown over recent years to account quantitatively for the isotopes of refractory incompatible elements. This can now be extended to noble gas isotopes. The remaining significant issue is that thermal evolution calculations require more radiogenic heating than implied by cosmochemical estimates of radioactive heat sources. This may imply that tectonic and thermal evolution have been more episodic in the Phanerozoic than has been generally recognised.

  8. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled in large numbers into packs and are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of the vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
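
    A minimal sketch of the model structure described above: coulomb counting for SOC combined with a first-order RC branch for the terminal voltage. The OCV curve and circuit parameters are illustrative assumptions, not the paper's identified values.

    ```python
    import numpy as np

    def ocv(soc):
        """Toy open-circuit-voltage curve (V) as a function of SOC."""
        return 3.2 + 1.0 * soc

    capacity_As = 2.0 * 3600          # 2 Ah cell, in ampere-seconds
    r0, r1, c1, dt = 0.05, 0.03, 1000.0, 1.0

    soc, v_rc = 0.9, 0.0
    for t in range(600):              # 10 minutes of a constant 1 A discharge
        i = 1.0
        soc -= i * dt / capacity_As                  # coulomb counting
        v_rc += dt * (-v_rc / (r1 * c1) + i / c1)    # RC branch dynamics
        v_term = ocv(soc) - i * r0 - v_rc            # predicted terminal voltage
    print(f"SOC = {soc:.3f}, terminal voltage = {v_term:.3f} V")
    ```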

  9. Estimating mean long-term hydrologic budget components for watersheds and counties: An application to the commonwealth of Virginia, USA

    USGS Publications Warehouse

    Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.

    2015-01-01

    Mean long-term hydrologic budget components, such as recharge and base flow, are often difficult to estimate because they can vary substantially in space and time. Mean long-term fluxes were calculated in this study for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow using long-term estimates of mean ET and precipitation and the assumption that the relative change in storage over that 30-year period is small compared to the total ET or precipitation. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance (SC) data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971-2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. A new approach to estimate riparian ET using seasonal SC data gave results consistent with those from other methods. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia. The method has the potential to be applied in many other states in the U.S. or in other regions or countries of the world where climate and stream flow data are plentiful.
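
    The long-term bookkeeping behind these component estimates can be sketched as a simple water balance; this simplified version folds riparian ET into total ET, and the numbers are illustrative, not Virginia results.

    ```python
    # With negligible long-term storage change, recharge is what remains of
    # precipitation after total ET and surface runoff; at steady state base
    # flow (groundwater discharge) balances recharge.
    precip_mm_yr = 1100.0
    total_et_mm_yr = 700.0    # from the precipitation/temperature regression
    runoff_frac = 0.15        # from the physiography/rock-type regression

    surface_runoff = runoff_frac * precip_mm_yr
    infiltration = precip_mm_yr - surface_runoff
    recharge = precip_mm_yr - total_et_mm_yr - surface_runoff
    base_flow = recharge
    print(f"runoff {surface_runoff:.0f}, infiltration {infiltration:.0f}, "
          f"recharge {recharge:.0f}, base flow {base_flow:.0f} (mm/yr)")
    ```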

  10. Estimating individual influences of behavioral intentions: an application of random-effects modeling to the theory of reasoned action.

    PubMed

    Hedeker, D; Flay, B R; Petraitis, J

    1996-02-01

    Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
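
    A minimal sketch, on simulated repeated measures, of the random-effects approach: a mixed model with random slopes for attitudes and norms, whose predicted random effects are the empirical Bayes estimates of each individual's weightings. Variable names and values are illustrative.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    subjects, waves = 50, 6
    rows = []
    for s in range(subjects):
        w_att = 0.5 + 0.3 * rng.normal()    # person-specific weightings
        w_norm = 0.3 + 0.2 * rng.normal()
        for _ in range(waves):
            att, norm = rng.normal(size=2)
            bi = w_att * att + w_norm * norm + 0.3 * rng.normal()
            rows.append((s, att, norm, bi))
    df = pd.DataFrame(rows, columns=["subj", "att", "norm", "bi"])

    # Random intercept plus random slopes for attitudes and subjective norms
    m = smf.mixedlm("bi ~ att + norm", df, groups=df["subj"],
                    re_formula="~att + norm").fit()
    print(m.params[["att", "norm"]])   # average weightings
    print(m.random_effects[0])         # empirical Bayes deviations, subject 0
    ```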

  11. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  12. Gas composition sensing using carbon nanotube arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor)

    2008-01-01

    A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, ..., N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.

  13. High-resolution estimates of Nubia-Somalia plate motion since 20 Ma from reconstructions of the Southwest Indian Ridge, Red Sea, and Gulf of Aden

    NASA Astrophysics Data System (ADS)

    DeMets, C.; Merkuryev, S. A.

    2015-12-01

    We estimate Nubia-Somalia rotations at ~1-Myr intervals for the past 20 Myr from newly available, high-resolution reconstructions of the Southwest Indian Ridge and reconstructions of the Red Sea and Gulf of Aden. The former rotations are based on many more data, extend farther back in time, and have more temporal resolution than has previously been the case. Nubia-Somalia plate motion has remained remarkably steady since 5.2 Ma. For example, at the northern end of the East Africa rift, our Nubia-Somalia plate motion estimates at six different times between 0.78 Ma and 5.2 Ma agree to within 3% with the rift-normal component of motion that is extrapolated from the recently estimated Saria et al. (2014) GPS angular velocity. Over the past 10.6 Myr, the Nubia-Somalia rotations predict 42 ± 4 km of rift-normal extension across the northern segment of the Main Ethiopian Rift. This agrees with approximate minimum and maximum estimates of 40 km and 53 km for post-10.6-Myr extension from seismological surveys of this narrow part of the plate boundary and is also close to 55 km and 48 ± 3 km estimates from published and our own reconstructions of the Nubia-Arabia and Somalia-Arabia seafloor-spreading histories for the Red Sea and Gulf of Aden. Our new rotations exclude at a high confidence level two previously published estimates of Nubia-Somalia motion based on inversions of Chron 5n.2 along the Southwest Indian Ridge, which predict rift-normal extensions of 13 ± 14 km and 129 ± 16 km across the Main Ethiopian Rift since 11 Ma. Constraints on Nubia-Somalia motion before ~15 Ma are weaker due to sparse coverage of pre-15-Myr magnetic reversals along the Nubia-Antarctic plate boundary, but appear to require motion before 15 Ma. Nubia-Somalia rotations that we estimate from a probabilistic analysis of geometric and age constraints from the Red Sea and Gulf of Aden are consistent with those determined from Southwest Indian Ridge data, particularly for the past 11 Myr. Nubia-Somalia rotations determined from the Red Sea/Gulf of Aden rotations and Southwest Indian Ridge rotations independently predict that motion during its oldest phase was highly oblique to the rift and a factor of two or more faster than at present, although large uncertainties remain in the rotation estimates for times before ~15 Ma.

  14. The Galactic Nova Rate Revisited

    NASA Astrophysics Data System (ADS)

    Shafter, A. W.

    2017-01-01

    Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to include the entire Galaxy, using a two-component disk-plus-bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to more than 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31/-23) per year, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.

  15. The Breslow estimator of the nonparametric baseline survivor function in Cox's regression model: some heuristics.

    PubMed

    Hanley, James A

    2008-01-01

    Most survival analysis textbooks explain how the hazard ratio parameters in Cox's life table regression model are estimated. Fewer explain how the components of the nonparametric baseline survivor function are derived. Those that do often relegate the explanation to an "advanced" section and merely present the components as algebraic or iterative solutions to estimating equations. None comment on the structure of these estimators. This note brings out a heuristic representation that may help to de-mystify the structure.

  16. Advanced Monitoring to Improve Combustion Turbine/Combined Cycle Reliability, Availability & Maintainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leonard Angello

    2005-09-30

    Power generators are concerned with the maintenance costs associated with the advanced turbines that they are purchasing. Since these machines do not have fully established Operation and Maintenance (O&M) track records, power generators face financial risk due to uncertain future maintenance costs. This risk is of particular concern, as the electricity industry transitions to a competitive business environment in which unexpected O&M costs cannot be passed through to consumers. These concerns have accelerated the need for intelligent software-based diagnostic systems that can monitor the health of a combustion turbine in real time and provide valuable information on the machine's performance to its owner/operators. EPRI, Impact Technologies, Boyce Engineering, and Progress Energy have teamed to develop a suite of intelligent software tools integrated with a diagnostic monitoring platform that, in real time, interpret data to assess the 'total health' of combustion turbines. The 'Combustion Turbine Health Management System' (CTHMS) will consist of a series of 'Dynamic Link Library' (DLL) programs residing on a diagnostic monitoring platform that accepts turbine health data from existing monitoring instrumentation. CTHMS interprets sensor and instrument outputs, correlates them to a machine's condition, provides interpretive analyses, projects servicing intervals, and estimates remaining component life. In addition, the CTHMS enables real-time anomaly detection and diagnostics of performance and mechanical faults, enabling power producers to more accurately predict critical component remaining useful life and turbine degradation.

  17. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.

  18. Reliability of reflectance measures in passive filters

    NASA Astrophysics Data System (ADS)

    Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.

    2014-08-01

    Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is k_m = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding k_m = 0.56 for these occasions and k_m = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
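
    The trade-off between m and reliability can be sketched under the assumption that the reliability of the mean of m parallel measurements follows the Spearman-Brown relation k_m = m*rho / (1 + (m - 1)*rho); the helper names below are ours, not the paper's:

        # Required number of filters m for a target reliability, assuming the
        # Spearman-Brown relation k_m = m*rho / (1 + (m - 1)*rho).
        import math

        def rho_from_km(k_m, m):
            """Invert Spearman-Brown: single-filter reliability from k_m at m."""
            return k_m / (m - k_m * (m - 1))

        def m_for_target(rho, target):
            """Smallest m whose mean reaches the target reliability."""
            return math.ceil(target * (1 - rho) / (rho * (1 - target)))

        # Worst case reported above: k_m = 0.56 with m = 3 filters.
        rho = rho_from_km(0.56, 3)
        print(round(rho, 3))            # ~0.298
        print(m_for_target(rho, 0.80))  # -> 10, matching the m = 10 recommendation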

  19. Respiratory hospitalizations in association with fine PM and its ...

    EPA Pesticide Factsheets

    Despite observed geographic and temporal variation in particulate matter (PM)-related health morbidities, only a small number of epidemiologic studies have evaluated the relation between PM2.5 chemical constituents and respiratory disease. Most assessments are limited by inadequate spatial and temporal resolution of ambient PM measurements and/or by their approaches to examine the role of specific PM components on health outcomes. In a case-crossover analysis using daily average ambient PM2.5 total mass and species estimates derived from the Community Multiscale Air Quality (CMAQ) model and available observations, we examined the association between the chemical components of PM (including elemental and organic carbon, sulfate, nitrate, ammonium, and other remaining) and respiratory hospitalizations in New York State. We evaluated relationships between levels (low, medium, high) of PM constituent mass fractions, and assessed modification of the PM2.5–hospitalization association via models stratified by mass fractions of both primary and secondary PM components. In our results, average daily PM2.5 concentrations in New York State were generally lower than the 24-hr average National Ambient Air Quality Standard (NAAQS). Year-round analyses showed statistically significant positive associations between respiratory hospitalizations and PM2.5 total mass, sulfate, nitrate, and ammonium concentrations at multiple exposure lags (0.5–2.0% per interquartile range [IQR

  20. Connecting the irreversible capacity loss in Li-ion batteries with the electronic insulating properties of solid electrolyte interphase (SEI) components.

    DOE PAGES

    Leung, Kevin; Lin, Yu -Xiao; Liu, Zhe; ...

    2016-01-01

    The formation and continuous growth of a solid electrolyte interphase (SEI) layer are responsible for the irreversible capacity loss of batteries in the initial and subsequent cycles, respectively. In this article, the electron tunneling barriers from Li metal through three insulating SEI components, namely Li2CO3, LiF and Li3PO4, are computed by density functional theory (DFT) approaches. Based on electron tunneling theory, the layer thickness sufficient to block electron tunneling is estimated for each component. It is also found that the band gap decreases under tension while the work function remains the same, and thus the tunneling barrier decreases under tension and increases under compression. A new parameter, η, characterizing the average distances between anions, is proposed to unify the variation of band gap with strain under different loading conditions into a single linear function of η. An analytical model based on the tunneling results is developed to connect the irreversible capacity loss to the Li ions consumed in forming these SEI component layers on the surface of negative electrodes. The agreement between the model predictions and experimental results suggests that only the initial irreversible capacity loss is due to the self-limiting electron tunneling property of the SEI.

  1. Aerosol mass spectrometric features of biogenic SOA: observations from a plant chamber and in rural atmospheric environments.

    PubMed

    Kiendler-Scharr, Astrid; Zhang, Qi; Hohaus, Thorsten; Kleist, Einhard; Mensah, Amewu; Mentel, Thomas F; Spindler, Christian; Uerlings, Ricarda; Tillmann, Ralf; Wildt, Jürgen

    2009-11-01

    Secondary organic aerosol (SOA) is known to form from a variety of anthropogenic and biogenic precursors. Current estimates of global SOA production vary over 2 orders of magnitude. Since no direct measurement technique for SOA exists, quantifying SOA remains a challenge for atmospheric studies. The identification of biogenic SOA (BSOA) based on mass spectral signatures offers the possibility to derive source information of organic aerosol (OA) with high time resolution. Here we present data from simulation experiments. The BSOA from tree emissions was characterized with an Aerodyne quadrupole aerosol mass spectrometer (Q-AMS). Collection efficiencies were close to 1, and effective densities of the BSOA were found to be 1.3 ± 0.1 g/cm³. The mass spectra of SOA from different trees were found to be highly similar. The average BSOA mass spectrum from tree emissions is compared to a BSOA component spectrum extracted from field data. It is shown that overall the spectra agree well and that the mass spectral features of BSOA are distinctively different from those of OA components related to fresh fossil fuel and biomass combustion. The simulation chamber mass spectrum may potentially be useful for the identification and interpretation of biogenic SOA components in ambient data sets.

  2. Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Shelhamer, M.

    2001-01-01

    It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
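
    A compressed sketch of the desaccading and frequency-component estimation steps follows; it replaces the paper's genetic-algorithm bias-removal stage with a plain least-squares fit, so it illustrates the pre-processing idea rather than the full method:

        # Flag fast phases with an approximate derivative, drop them, and
        # least-squares fit the amplitude and phase of the stimulus frequency to
        # the remaining smooth samples. The paper's genetic-algorithm stage,
        # which iteratively removes per-segment biases, is omitted here.
        import numpy as np

        fs, f_stim = 500.0, 0.5                    # sample rate (Hz), stimulus (Hz)
        t = np.arange(0.0, 20.0, 1.0 / fs)
        eye = 2.0 * np.sin(2 * np.pi * f_stim * t + 0.3)   # smooth tracking
        fast = (t >= 5.0) & (t < 5.05)
        eye[fast] += np.linspace(0.0, 3.0, fast.sum())     # a synthetic fast phase

        vel = np.gradient(eye, 1.0 / fs)           # approximate derivative
        smooth = np.abs(vel) < 5.0 * np.median(np.abs(vel))

        # Fit eye ~ a*sin + b*cos + bias at f_stim on the retained samples.
        w = 2 * np.pi * f_stim
        X = np.column_stack([np.sin(w * t[smooth]), np.cos(w * t[smooth]),
                             np.ones(smooth.sum())])
        a, b, bias = np.linalg.lstsq(X, eye[smooth], rcond=None)[0]
        print(np.hypot(a, b), np.arctan2(b, a))    # amplitude ~2.0, phase ~0.3 rad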

  3. Velocity spectrum for the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Bastami, Morteza; Soghrat, M. R.

    2018-01-01

    Peak ground acceleration (PGA) and spectral acceleration values have been proposed in most building codes/guidelines, unlike spectral velocity (SV) and peak ground velocity (PGV). Recent studies have demonstrated the importance of spectral velocity and peak ground velocity in the design of long-period structures (e.g., pipelines, tunnels, tanks, and high-rise buildings) and the evaluation of seismic vulnerability in underground structures. The current study was undertaken to develop a velocity spectrum and to estimate PGV. In order to determine these parameters, 398 three-component accelerograms recorded by the Building and Housing Research Center (BHRC) were used. The moment magnitude (Mw) in the selected database was 4.1 to 7.3, and the events occurred after 1977. In the database, the average shear-wave velocity at 0 to 30 m in depth (Vs30) was available for only 217 records; thus, the site class for the remaining records was estimated using empirical methods. Because of the importance of the velocity spectrum at low frequencies, a signal-to-noise ratio of 2 was chosen for determining the low- and high-frequency cutoffs, to include a wider range of frequency content. This value can produce conservative results. After estimation of the shape of the velocity design spectrum, the PGV was also estimated for the region under study by finding the correlation between PGV and spectral acceleration at the period of 1 s.

  4. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation of covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can be further used in t-tests for the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
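
    As an illustration of the suggested jackknife step only (not of MINQUE(0/1) itself), a delete-one-group jackknife for the sampling variance of a simple variance-component estimator might look like this:

        # Generic delete-one-group jackknife for the sampling variance of an
        # estimator, here a crude between-group variance component. This sketches
        # the jackknife step, not the MINQUE(0/1) estimator.
        import numpy as np

        def between_group_var(groups):
            """Crude between-group variance component estimate."""
            means = np.array([g.mean() for g in groups])
            return means.var(ddof=1)

        rng = np.random.default_rng(1)
        groups = [rng.normal(loc=mu, scale=1.0, size=20)
                  for mu in rng.normal(0.0, 2.0, 8)]

        theta_hat = between_group_var(groups)
        reps = np.array([between_group_var(groups[:i] + groups[i + 1:])
                         for i in range(len(groups))])
        n = len(groups)
        jack_var = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
        se = np.sqrt(jack_var)
        print(theta_hat, se, theta_hat / se)   # estimate, jackknife SE, t-ratio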

  5. Environmental Inequality in Exposures to Airborne Particulate Matter Components in the United States

    PubMed Central

    Ebisu, Keita

    2012-01-01

    Background: Growing evidence indicates that toxicity of fine particulate matter ≤ 2.5 μm in diameter (PM2.5) differs by chemical component. Exposure to components may differ by population. Objectives: We investigated whether exposures to PM2.5 components differ by race/ethnicity, age, and socioeconomic status (SES). Methods: Long-term exposures (2000 through 2006) were estimated for 215 U.S. census tracts for PM2.5 and for 14 PM2.5 components. Population-weighted exposures were combined to generate overall estimated exposures by race/ethnicity, education, poverty status, employment, age, and earnings. We compared population characteristics for tracts with and without PM2.5 component monitors. Results: Larger disparities in estimated exposures were observed for components than for PM2.5 total mass. For race/ethnicity, whites generally had the lowest exposures. Non-Hispanic blacks had higher exposures than did whites for 13 of the 14 components. Hispanics generally had the highest exposures (e.g., 152% higher than whites for chlorine, 94% higher for aluminum). Young persons (0–19 years of age) had levels as high as or higher than other ages for all exposures except sulfate. Persons with lower SES had higher estimated exposures, with some exceptions. For example, a 10% increase in the proportion unemployed was associated with a 20.0% increase in vanadium and an 18.3% increase in elemental carbon. Census tracts with monitors had more non-Hispanic blacks, lower education and earnings, and higher unemployment and poverty than did tracts without monitors. Conclusions: Exposures to PM2.5 components differed by race/ethnicity, age, and SES. If some components are more toxic than others, certain populations are likely to suffer higher health burdens. Demographics differed between populations covered and not covered by monitors. PMID:22889745

  6. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    NASA Astrophysics Data System (ADS)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling, in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having their own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP U-238-Pb-206 zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic U-238-Pb-206 age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
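
    A rough sense of the procedure can be had with an off-the-shelf Gaussian mixture and information-criterion model selection; unlike the method described above, this sketch ignores the analytical error attached to each individual age determination:

        # Choose the number of age components by BIC and report means and
        # proportions; a purpose-built algorithm would also weight each analysis
        # by its analytical error, which this simple Gaussian mixture ignores.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic "zircon ages" (Ma): two mixed populations, as in the tests above.
        ages = np.concatenate([rng.normal(530, 5, 60),
                               rng.normal(560, 6, 30)])[:, None]

        fits = {k: GaussianMixture(n_components=k, n_init=5,
                                   random_state=0).fit(ages)
                for k in (1, 2, 3)}
        best_k = min(fits, key=lambda k: fits[k].bic(ages))
        gm = fits[best_k]
        print(best_k)                 # expected: 2
        print(gm.means_.ravel())      # component age estimates (Ma)
        print(gm.weights_)            # mixing proportions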

  7. 15 CFR 90.8 - Evidence required.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DEPARTMENT OF COMMERCE PROCEDURE FOR CHALLENGING POPULATION ESTIMATES § 90.8 Evidence required. (a) The... the criteria, standards, and regular processes the Census Bureau employs to generate the population... uses a cohort-component of change method to produce population estimates. Each year, the components of...

  8. Physicochemical assessment criteria for high-voltage pulse capacitors

    NASA Astrophysics Data System (ADS)

    Darian, L. A.; Lam, L. Kh.

    2016-12-01

    The paper considers the decomposition products of the internal insulation of high-voltage pulse capacitors, which are generated as the insulation ages, and their use for evaluating capacitor quality in operation and in service. As with power transformers, three generations of insulation-aging markers can be distinguished; the applicability of such markers, already studied for power transformers, can be extended to high-voltage pulse capacitors. The research reveals a correlation between the composition and quantity of first-generation markers (gaseous decomposition products of insulation) dissolved in the insulating liquid and the remaining life of high-voltage pulse capacitors. Applying aging markers to evaluate the remaining service life of these capacitors is a promising direction of research, because the capacitor design keeps the insulation-aging markers stable. It is necessary to continue gathering statistical data on the development of first-generation markers, and to pursue research aimed at estimating the remaining life of capacitors using markers of the second and third generations.

  9. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

    It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements; two-component visually controlled, low index of difficulty (ID) moves; and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
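
    For readers unfamiliar with the ID terminology, here is a worked example under the usual Fitts'-law assumptions (MT = a + b*ID with ID = log2(2A/W)); the coefficients are illustrative placeholders, not values fitted in the study:

        # Worked example of the amplitude/ID dependence discussed above, assuming
        # a Fitts'-law form. Coefficients a, b are illustrative placeholders.
        import math

        def index_of_difficulty(amplitude_mm, width_mm):
            return math.log2(2 * amplitude_mm / width_mm)

        def movement_time_ms(amplitude_mm, width_mm, a=100.0, b=150.0):
            return a + b * index_of_difficulty(amplitude_mm, width_mm)

        for A, W in [(100, 20), (200, 20), (200, 5)]:
            print(A, W, round(index_of_difficulty(A, W), 2),
                  round(movement_time_ms(A, W)))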

  10. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).

  11. Anisotropic Velocities of Gas Hydrate-Bearing Sediments in Fractured Reservoirs

    USGS Publications Warehouse

    Lee, Myung W.

    2009-01-01

    During the Indian National Gas Hydrate Program Expedition 01 (NGHP-01), one of the richest marine gas hydrate accumulations was discovered at drill site NGHP-01-10 in the Krishna-Godavari Basin, offshore of southeast India. The occurrence of concentrated gas hydrate at this site is primarily controlled by the presence of fractures. Gas hydrate saturations estimated from P- and S-wave velocities, assuming that gas hydrate-bearing sediments (GHBS) are isotropic, are much higher than those estimated from the pressure cores. To reconcile this difference, an anisotropic GHBS model is developed and applied to estimate gas hydrate saturations. Gas hydrate saturations estimated from the P-wave velocities, assuming high-angle fractures, agree well with saturations estimated from the cores. An anisotropic GHBS model assuming two-component laminated media - one component is fracture filled with 100-percent gas hydrate, and the other component is the isotropic water-saturated sediment - adequately predicts anisotropic velocities at the research site.

  12. A real time neural net estimator of fatigue life

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1990-01-01

    A neural net architecture is proposed to estimate, in real-time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines. This neural net estimator of fatigue life is seen as the enabling technology to achieve component life prognosis, and therefore would be an important part of life extending control for reusable rocket engines.
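
    A minimal stand-in for the described architecture, assuming a synthetic Miner's-rule-style damage target in place of the local-strain damage values used in the study:

        # Two-hidden-layer feedforward net trained to map a component's load
        # history to accumulated fatigue damage. The "damage" target below is a
        # hypothetical stand-in for local-strain-based damage values.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        loads = rng.uniform(0.2, 1.0, size=(2000, 8))     # 8-step load sequences
        # Placeholder target: Miner-style damage grows with a power of each load.
        damage = (loads ** 4).sum(axis=1) / 8.0

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                           random_state=0)
        net.fit(loads[:1500], damage[:1500])
        print(net.score(loads[1500:], damage[1500:]))     # held-out R^2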

  13. Estimating Single-Trial Responses in EEG

    NASA Technical Reports Server (NTRS)

    Shah, A. S.; Knuth, K. H.; Truccolo, W. A.; Mehta, A. D.; Fu, K. G.; Johnston, T. A.; Ding, M.; Bressler, S. L.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate characterization of single-trial field potential responses is critical from a number of perspectives. For example, it allows differentiation of an evoked response from ongoing EEG. We previously developed the multiple component Event Related Potential (mcERP) algorithm to improve resolution of the single-trial evoked response. The mcERP model states that multiple components, each specified by a stereotypic waveform varying in latency and amplitude from trial to trial, comprise the evoked response. Application of the mcERP algorithm to simulated data with three independent, synthetic components has shown that the model is capable of separating these components and estimating their variability. Application of the model to single-trial visual evoked potentials recorded simultaneously from all V1 laminae in an awake, fixating macaque yielded local and far-field components. Certain local components estimated by the model were distributed in both granular and supragranular laminae. This suggests a linear coupling between the responses of thalamo-recipient neuronal ensembles and subsequent responses of supragranular neuronal ensembles, as predicted by the feedforward anatomy of V1. Our results indicate that the mcERP algorithm provides a valid estimation of single-trial responses. This will enable analyses that depend on trial-to-trial variations and those that require separation of the evoked response from background EEG rhythms.
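
    The generative model behind mcERP can be sketched directly; the waveforms, jitter magnitudes, and noise level below are arbitrary choices for illustration:

        # Each trial is a sum of stereotyped component waveforms whose amplitude
        # and latency jitter from trial to trial. Averaging across trials smears
        # the jittered component, which is why single-trial estimation matters.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(0.0, 0.5, 0.001)                # 500 ms at 1 kHz

        def component(t, center, width):
            return np.exp(-0.5 * ((t - center) / width) ** 2)

        trials = []
        for _ in range(100):
            amp = 1.0 + 0.3 * rng.standard_normal()   # trial-to-trial amplitude
            lat = 0.015 * rng.standard_normal()       # trial-to-trial latency
            x = amp * component(t, 0.1 + lat, 0.01)   # jittered component
            x += component(t, 0.25, 0.02)             # a stable second component
            trials.append(x + 0.2 * rng.standard_normal(t.size))

        avg = np.mean(trials, axis=0)
        jittered_peak = avg[np.argmin(np.abs(t - 0.1))]
        stable_peak = avg[np.argmin(np.abs(t - 0.25))]
        # The jittered component is smeared (~0.55) while the stable one survives
        # averaging (~1.0), even though both have unit mean amplitude per trial.
        print(jittered_peak, stable_peak)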

  14. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency

    DTIC Science & Technology

    2015-12-01

    Report to Congressional Committees, December 2015, GAO-16-23, United States Government Accountability Office. NUCLEAR WEAPONS SUSTAINMENT: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency. Why GAO Did This Study: DOD and DOE are ... modernization plans and (2) complete, transparent information on the methodologies used to develop those estimates. GAO analyzed the departments' ...

  15. Application of self-preservation in the diurnal evolution of the surface energy budget to determine daily evaporation

    NASA Technical Reports Server (NTRS)

    Brutsaert, Wilfried; Sugita, Michiaki

    1992-01-01

    Evaporation from natural land surfaces often exhibits a strong variation during the course of a day, mostly in response to the daily variation of radiative energy input at the surface. This makes it difficult to derive the total daily evaporation, when only one or a few instantaneous estimates of evaporation are available. It is often possible to resolve this difficulty by assuming self-preservation in the diurnal evolution of the surface energy budget. Thus if the relative partition of total incoming energy flux among the different components remains the same, the ratio of latent heat flux and any other flux component can be taken as constant through the day. This concept of constant flux ratios is tested by means of data obtained during the First ISLSCP Field Experiment; the instantaneous evaporation values were calculated by means of the atmospheric boundary layer bulk similarity approach with radiosonde profiles and radiative surface temperatures. Good results were obtained for evaporative flux ratios with available energy flux, with net radiation, and with incoming shortwave radiation.
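
    The constant-flux-ratio arithmetic is simple enough to show directly; all flux values below are illustrative placeholders:

        # Self-preservation (constant evaporative fraction) method: one midday
        # estimate of the latent heat flux fixes its ratio to available energy,
        # and the daily total follows from the daily available energy.
        LE_midday = 350.0                    # latent heat flux at midday (W m^-2)
        Rn_midday, G_midday = 500.0, 50.0    # net radiation, ground heat flux (W m^-2)

        EF = LE_midday / (Rn_midday - G_midday)   # evaporative fraction, ~0.78

        A_daily = 12.0e6              # daily available energy Rn - G (J m^-2 day^-1)
        LE_daily = EF * A_daily       # daily latent heat (J m^-2 day^-1)

        LV = 2.45e6                   # latent heat of vaporization (J kg^-1)
        evap_mm = LE_daily / LV       # kg m^-2, i.e. mm of water per day
        print(round(EF, 2), round(evap_mm, 2))    # ~0.78, ~3.81 mm/day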

  16. Analytical treatment of the deformation behavior of EUVL masks during electrostatic chucking

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-03-01

    A new analytical approach is presented to predict mask deformation during electro-static chucking in next generation extreme-ultraviolet-lithography (EUVL). Given an arbitrary profile measurement of the mask and chuck non-flatness, this method has been developed as an alternative to time-consuming finite element simulations for overlay error correction algorithms. We consider the feature transfer of each harmonic component in the profile shapes via linear elasticity theory and demonstrate analytically how high spatial frequencies are filtered. The method is compared to presumably more accurate finite element simulations and has been tested successfully in an overlay error compensation experiment, where the residual error y-component could be reduced by a factor 2. As a side outcome, the formulation provides a tool to estimate the critical pin-size and -pitch such that the distortion on the mask front-side remains within given tolerances. We find for a numerical example that pin-pitches of less than 5 mm will result in a mask pattern-distortion of less than 1 nm if the chucking pressure is below 30 kPa.

  17. Analytical treatment of the deformation behavior of extreme-ultraviolet-lithography masks during electrostatic chucking

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-10-01

    A new analytical approach is presented to predict mask deformation during electrostatic chucking in next-generation extreme-ultraviolet-lithography. Given an arbitrary profile measurement of the mask and chuck nonflatness, this method has been developed as an alternative to time-consuming finite element simulations for overlay error correction algorithms. We consider the feature transfer of each harmonic component in the profile shapes via linear elasticity theory and demonstrate analytically how high spatial frequencies are filtered. The method is compared to presumably more accurate finite element simulations and has been tested successfully in an overlay error compensation experiment, where the residual error y-component could be reduced by a factor of 2. As a side outcome, the formulation provides a tool to estimate the critical pin-size and -pitch such that the distortion on the mask front-side remains within given tolerances. We find for a numerical example that pin-pitches of less than 5 mm will result in a mask pattern distortion of less than 1 nm if the chucking pressure is below 30 kPa.

  18. Post-flight Analysis of the Argon Filled Ion Chamber

    NASA Technical Reports Server (NTRS)

    Tai, H.; Goldhagen, P.; Jones, I. W.; Wilson, J. W.; Maiden, D. L.; Shinn, J. L.

    2003-01-01

    Atmospheric ionizing radiation is a complex mixture of primary galactic and solar cosmic rays and a multitude of secondary particles produced in collisions with air nuclei. The first series of Atmospheric Ionizing Radiation (AIR) measurement flights on the NASA research aircraft ER-2 took place in June 1997. The ER-2 flight package consisted of fifteen instruments from six countries, chosen to provide varying sensitivity to specific components. These AIR ER-2 flight measurements characterize the AIR environment during solar minimum and allow the continued development of environmental models of this complex mixture of ionizing radiation. This will enable scientists to study the ionizing radiation health hazard associated with the high-altitude operation of a commercial supersonic transport and to allow estimates of single event upsets for advanced avionics systems design. The argon-filled ion chamber data, representing about 40 percent of the contribution to radiation risk, are analyzed herein; model discrepancies for the solar minimum environment are on the order of 5 percent or less. Other biologically significant components remain to be analyzed.

  19. Extraction of the brachialis muscle activity using HD-sEMG technique and canonical correlation analysis.

    PubMed

    Al Harrach, M; Afsharipour, B; Boudaoud, S; Carriou, V; Marin, F; Merletti, R

    2016-08-01

    The Brachialis (BR) lies beneath the Biceps Brachii (BB), deep in the upper arm. Therefore, the detection of the corresponding surface Electromyogram (sEMG) is a complex task. The BR is an important elbow flexor, but it is usually not considered in the sEMG-based force estimation process. The aim of this study was to attempt to separate the two sEMG activities of the BR and the BB by using a High Density sEMG (HD-sEMG) grid placed at the upper arm and the Canonical Correlation Analysis (CCA) technique. For this purpose, we recorded sEMG signals from seven subjects with two 8 × 4 electrode grids placed over the BB and the BR. Four isometric voluntary contraction levels were recorded (5, 10, 30 and 50 %MVC) at a 90° elbow angle. Using CCA and image-processing tools, the sources of each muscle's activity were then separated. Finally, the corresponding sEMG signals were reconstructed using the remaining canonical components in order to retrieve the activity of the BB and the BR muscles.
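
    A heavily simplified sketch of the separation idea, using scikit-learn's CCA on synthetic two-grid data; the real processing chain (grid geometry, image-processing steps, signal reconstruction) is not reproduced:

        # CCA on samples-by-channels matrices for two electrode grids. The shared
        # canonical pair stands in for the BB-like source; removing it from grid 2
        # leaves the deep BR-like source. All signals here are synthetic.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n = 5000
        shared = rng.standard_normal(n)    # source seen by both grids (BB-like)
        deep = rng.standard_normal(n)      # source mainly on grid 2 (BR-like)

        grid1 = (np.outer(shared, rng.uniform(0.5, 1.0, 32))
                 + 0.3 * rng.standard_normal((n, 32)))
        grid2 = (np.outer(shared, rng.uniform(0.5, 1.0, 32))
                 + np.outer(deep, rng.uniform(0.5, 1.0, 32))
                 + 0.3 * rng.standard_normal((n, 32)))

        cca = CCA(n_components=2).fit(grid1, grid2)
        u, v = cca.transform(grid1, grid2)
        # The leading canonical pair captures the shared (BB-like) activity.
        print(abs(np.corrcoef(u[:, 0], shared)[0, 1]))
        # Removing its projection from grid 2 retains the deep (BR-like) activity.
        v0 = v[:, 0]
        resid = grid2 - np.outer(v0, v0 @ grid2) / (v0 @ v0)
        print(abs(np.corrcoef(resid.mean(axis=1), deep)[0, 1]))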

  20. MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES

    PubMed Central

    Su, Yu-Ru; Wang, Jane-Ling

    2018-01-01

    There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case which involves baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122

  1. Uncertainty Quantification in Tsunami Early Warning Calculations

    NASA Astrophysics Data System (ADS)

    Anunziato, Alessandro

    2016-04-01

    The objective of tsunami calculations is the estimation of the impact of waves caused by large seismic events on the coasts and the determination of potential inundation areas. In the case of early warning systems, i.e., systems intended to anticipate the possible effects and allow a timely reaction (for example, ordering the evacuation of areas at risk), this must be done in a very short time (minutes) to be effective. In reality, the estimation includes several uncertainty factors which make the prediction extremely difficult. The very first estimates of the seismic parameters are not very precise: the uncertainty in the seismic components (location, magnitude and depth) decreases with time, because as time passes more and more seismic signals can be used and the event characterization becomes more precise. Other parameters that must be established to perform a calculation (e.g., the fault mechanism) are difficult to estimate accurately even after hours (and in some cases remain unknown), so this uncertainty persists in the estimated impact evaluations; when a quick tsunami calculation is necessary, as in early warning systems, the ability to include any possible future variation of the conditions in order to establish the "worst case scenario" is particularly important. The consequence is that the number of uncertain parameters is so large that it is not easy to assess the relative importance of each of them and their effect on the predicted results. In general, the complexity of system computer codes stems from the multitude of different models which are assembled into a single program to give the global response for a particular phenomenon. Each of these models has an associated uncertainty coming from its application to individual cases and/or separate-effect test cases. The difficulty of predicting a tsunami calculation response is further increased by imperfect knowledge of the initial and boundary conditions, so that the response can change even with small variations of the input. The paper analyses a number of potential events in the Mediterranean Sea and in the Atlantic Ocean; for each of them a large number of calculations is performed (Monte Carlo simulation) in order to identify the relative importance of each of the uncertain parameters adopted. It is shown that although the variation in the estimates is reduced after several hours, it still remains, and in some cases it can lead to different conclusions if this information is used for alerting. The cases considered are: a mild event in the Hellenic arc (Mag. 6.9), a medium event in Algeria (Mag. 7.2) and a quite relevant event in the Gulf of Cadiz (Mag. 8.2).
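
    The Monte Carlo logic can be sketched with a toy surrogate in place of a tsunami propagation code; the distributions and the height formula below are made-up placeholders:

        # Sample the uncertain source parameters and propagate them through a
        # surrogate wave-height model. The surrogate formula is a placeholder
        # standing in for a full tsunami propagation code.
        import numpy as np

        rng = np.random.default_rng(42)
        N = 10000

        mag = rng.normal(7.2, 0.2, N)          # magnitude uncertainty
        depth = rng.uniform(5, 40, N)          # focal depth (km), poorly constrained
        mech = rng.uniform(60, 120, N)         # fault mechanism (deg), often unknown

        # Placeholder surrogate: height grows with magnitude, is damped by depth,
        # and is modulated by how close the mechanism is to pure dip-slip.
        height = (10 ** (mag - 7.0)) * np.exp(-depth / 30.0) \
                 * np.abs(np.sin(np.radians(mech)))

        print(np.percentile(height, [5, 50, 95]))   # spread driving the worst case

        # Crude importance ranking: correlation of each input with the output.
        for name, x in [("magnitude", mag), ("depth", depth), ("mechanism", mech)]:
            print(name, round(np.corrcoef(x, height)[0, 1], 2))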

  2. Transmission overhaul estimates for partial and full replacement at repair

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1991-01-01

    Timely transmission overhauls raise in-flight service reliability above the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predicting a transmission's maintenance needs at the design stage should enable the development of more cost-effective and reliable transmissions in the future. The frequency of overhaul is estimated, along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two-parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules that repair the transmission either by complete system replacement or by replacing only the failed components. An example illustrates the methods.
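
    A sketch of the Weibull-based interval calculation, assuming a series system whose reliability is the product of two-parameter Weibull component reliabilities; the shape and scale values are illustrative placeholders:

        # With two-parameter Weibull component lives, the series-system
        # reliability is the product of component reliabilities, and the overhaul
        # interval is the time at which it drops to a target value.
        import numpy as np
        from scipy.optimize import brentq

        components = [          # (beta shape, eta characteristic life in hours)
            (2.5, 9000.0),      # e.g., a gear set
            (1.8, 12000.0),     # e.g., a bearing set
            (3.0, 15000.0),     # e.g., a shaft
        ]

        def system_reliability(t):
            return np.prod([np.exp(-(t / eta) ** beta)
                            for beta, eta in components])

        target = 0.95           # required reliability at overhaul
        t_overhaul = brentq(lambda t: system_reliability(t) - target,
                            1.0, 50000.0)
        print(round(t_overhaul))  # hours between overhauls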

  3. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters of a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the improved parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
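
    The component-by-component idea is easy to demonstrate on the Rössler system itself, since a enters only the y-equation and (b, c) only the z-equation; this sketch uses finite-difference derivatives and least squares rather than the paper's evolutionary algorithm:

        # Rössler system: dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c).
        # Given time series of all components, each parameter group can be
        # estimated by a separate least-squares fit, one equation at a time.
        import numpy as np
        from scipy.integrate import solve_ivp

        a_true, b_true, c_true = 0.2, 0.2, 5.7

        def rossler(t, s):
            x, y, z = s
            return [-y - z, x + a_true * y, b_true + z * (x - c_true)]

        dt = 0.01
        sol = solve_ivp(rossler, (0, 200), [1, 1, 1],
                        t_eval=np.arange(0, 200, dt), rtol=1e-9, atol=1e-9)
        x, y, z = sol.y
        dy = np.gradient(y, dt)
        dz = np.gradient(z, dt)

        # Stage 1: dy/dt - x = a*y  ->  one-parameter least squares.
        a_hat = np.sum(y * (dy - x)) / np.sum(y * y)
        # Stage 2: dz/dt - x*z = b - c*z  ->  linear in (b, c).
        A = np.column_stack([np.ones_like(z), -z])
        b_hat, c_hat = np.linalg.lstsq(A, dz - x * z, rcond=None)[0]
        print(a_hat, b_hat, c_hat)   # close to 0.2, 0.2, 5.7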

  4. An Autophagic Flux Probe that Releases an Internal Control.

    PubMed

    Kaizuka, Takeshi; Morishita, Hideaki; Hama, Yutaro; Tsukamoto, Satoshi; Matsui, Takahide; Toyota, Yuichiro; Kodama, Akihiko; Ishihara, Tomoaki; Mizushima, Tohru; Mizushima, Noboru

    2016-11-17

    Macroautophagy is an intracellular degradation system that utilizes the autophagosome to deliver cytoplasmic components to the lysosome. Measuring autophagic activity is critically important but remains complicated and challenging. Here, we have developed GFP-LC3-RFP-LC3ΔG, a fluorescent probe to evaluate autophagic flux. This probe is cleaved by endogenous ATG4 proteases into equimolar amounts of GFP-LC3 and RFP-LC3ΔG. GFP-LC3 is degraded by autophagy, while RFP-LC3ΔG remains in the cytosol, serving as an internal control. Thus, autophagic flux can be estimated by calculating the GFP/RFP signal ratio. Using this probe, we re-evaluated previously reported autophagy-modulating compounds, performed a high-throughput screen of an approved drug library, and identified autophagy modulators. Furthermore, we succeeded in measuring both induced and basal autophagic flux in embryos and tissues of zebrafish and mice. The GFP-LC3-RFP-LC3ΔG probe is a simple and quantitative method to evaluate autophagic flux in cultured cells and whole organisms. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and, sometimes, better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
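
    A minimal FOSM sketch follows, with a toy RUL function standing in for the battery state-space model:

        # FOSM: approximate the mean and variance of an RUL function of Gaussian
        # inputs from a first-order Taylor expansion at the input means. The RUL
        # expression is an illustrative placeholder, not the battery model.
        import numpy as np

        def rul(theta):
            q, i_load = theta            # remaining charge (Ah), load current (A)
            return q / i_load * 3600.0   # seconds until depletion (toy model)

        mu = np.array([1.8, 2.0])        # input means
        cov = np.diag([0.05 ** 2, 0.1 ** 2])

        # Numerical gradient of rul at the mean.
        eps = 1e-6
        grad = np.array([(rul(mu + eps * np.eye(2)[k])
                          - rul(mu - eps * np.eye(2)[k])) / (2 * eps)
                         for k in range(2)])

        mean_rul = rul(mu)               # first-order mean estimate
        var_rul = grad @ cov @ grad      # first-order variance estimate
        print(mean_rul, np.sqrt(var_rul))   # RUL estimate and its 1-sigma bound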

  6. Family-based hip-hop to health: outcome results.

    PubMed

    Fitzgibbon, Marian L; Stolley, Melinda R; Schiffer, Linda; Kong, Angela; Braunschweig, Carol L; Gomez-Perez, Sandra L; Odoms-Young, Angela; Van Horn, Linda; Christoffel, Katherine Kaufer; Dyer, Alan R

    2013-02-01

    This pilot study tested the feasibility of Family-Based Hip-Hop to Health, a school-based obesity prevention intervention for 3-5-year-old Latino children and their parents, and estimated its effectiveness in producing smaller average changes in BMI at 1-year follow-up. Four Head Start preschools administered through the Chicago Public Schools were randomly assigned to receive a Family-Based Intervention (FBI) or a General Health Intervention (GHI). Parents signed consent forms for 147 of the 157 children enrolled. Both the school-based and family-based components of the intervention were feasible, but attendance for the parent intervention sessions was low. Contrary to expectations, a downtrend in BMI Z-score was observed in both the intervention and control groups. While the data reflect a downward trend in obesity among these young Hispanic children, obesity rates remained higher at 1-year follow-up (15%) than those reported by the National Health and Nutrition Examination Survey (2009-2010) for 2-5-year-old children (12.1%). Developing evidence-based strategies for obesity prevention among Hispanic families remains a challenge. Copyright © 2012 The Obesity Society.

  7. Family-Based Hip-Hop to Health: Outcome Results

    PubMed Central

    Fitzgibbon, M. L.; Stolley, M. R.; Schiffer, L.; Kong, A.; Braunschweig, C. L.; Gomez-Perez, S. L.; Odoms-Young, A.; Van Horn, L.; Christoffel, K. Kaufer; Dyer, A. R.

    2012-01-01

    This pilot study tested the feasibility of Family-Based Hip-Hop to Health, a school-based obesity prevention intervention for 3–5 year old Latino children and their parents, and estimated its effectiveness in producing smaller average changes in body mass index at one year follow-up. Four Head Start preschools administered through the Chicago Public Schools were randomly assigned to receive a Family-Based Intervention (FBI) or a General Health intervention (GHI). Parents signed consent forms for 147 of the 157 children enrolled. Both the school-based and family-based components of the intervention were feasible, but attendance for the parent intervention sessions was low. Contrary to expectations, a downtrend in BMI Z score was observed in both the intervention and control groups. While the data reflect a downward trend in obesity among these young Hispanic children, obesity rates remained higher at one-year follow-up (15%) than those reported by the National Health and Nutrition Examination Survey (2009–2010) for 2–5 year old children (12.1%). Developing evidence-based strategies for obesity prevention among Hispanic families remains a challenge. PMID:23532990

  8. Pyrogenic organic matter production from wildfires: a missing sink in the global carbon cycle.

    PubMed

    Santín, Cristina; Doerr, Stefan H; Preston, Caroline M; González-Rodríguez, Gil

    2015-04-01

    Wildfires release substantial quantities of carbon (C) into the atmosphere but they also convert part of the burnt biomass into pyrogenic organic matter (PyOM). This is richer in C and, overall, more resistant to environmental degradation than the original biomass, and, therefore, PyOM production is an efficient mechanism for C sequestration. The magnitude of this C sink, however, remains poorly quantified, and current production estimates, which suggest that ~1-5% of the C affected by fire is converted to PyOM, are based on incomplete inventories. Here, we quantify, for the first time, the complete range of PyOM components found in-situ immediately after a typical boreal forest fire. We utilized an experimental high-intensity crown fire in a jack pine forest (Pinus banksiana) and carried out a detailed pre- and postfire inventory and quantification of all fuel components, and the PyOM (i.e., all visually charred, blackened materials) produced in each of them. Our results show that, overall, 27.6% of the C affected by fire was retained in PyOM (4.8 ± 0.8 t C ha(-1)), rather than emitted to the atmosphere (12.6 ± 4.5 t C ha(-1)). The conversion rates varied substantially between fuel components. For down wood and bark, over half of the C affected was converted to PyOM, whereas for forest floor it was only one quarter, and less than a tenth for needles. If the overall conversion rate found here were applicable to boreal wildfire in general, it would translate into a PyOM production of ~100 Tg C yr(-1) by wildfire in the global boreal regions, more than five times the amount estimated previously. Our findings suggest that PyOM production from boreal wildfires, and potentially also from other fire-prone ecosystems, may have been underestimated and that its quantitative importance as a C sink warrants its inclusion in the global C budget estimates. © 2014 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  9. Pyrogenic organic matter production from wildfires: a missing sink in the global carbon cycle

    PubMed Central

    Santín, Cristina; Doerr, Stefan H; Preston, Caroline M; González-Rodríguez, Gil

    2015-01-01

    Wildfires release substantial quantities of carbon (C) into the atmosphere but they also convert part of the burnt biomass into pyrogenic organic matter (PyOM). This is richer in C and, overall, more resistant to environmental degradation than the original biomass, and, therefore, PyOM production is an efficient mechanism for C sequestration. The magnitude of this C sink, however, remains poorly quantified, and current production estimates, which suggest that ~1-5% of the C affected by fire is converted to PyOM, are based on incomplete inventories. Here, we quantify, for the first time, the complete range of PyOM components found in-situ immediately after a typical boreal forest fire. We utilized an experimental high-intensity crown fire in a jack pine forest (Pinus banksiana) and carried out a detailed pre- and postfire inventory and quantification of all fuel components, and the PyOM (i.e., all visually charred, blackened materials) produced in each of them. Our results show that, overall, 27.6% of the C affected by fire was retained in PyOM (4.8 ± 0.8 t C ha−1), rather than emitted to the atmosphere (12.6 ± 4.5 t C ha−1). The conversion rates varied substantially between fuel components. For down wood and bark, over half of the C affected was converted to PyOM, whereas for forest floor it was only one quarter, and less than a tenth for needles. If the overall conversion rate found here were applicable to boreal wildfire in general, it would translate into a PyOM production of ~100 Tg C yr−1 by wildfire in the global boreal regions, more than five times the amount estimated previously. Our findings suggest that PyOM production from boreal wildfires, and potentially also from other fire-prone ecosystems, may have been underestimated and that its quantitative importance as a C sink warrants its inclusion in the global C budget estimates. PMID:25378275

  10. Correlation, Cost Risk, and Geometry

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1992-01-01

    The geometric viewpoint identifies the choice of a correlation matrix for the simulation of cost risk with the pairwise choice of data vectors corresponding to the parameters used to obtain cost risk. The correlation coefficient is the cosine of the angle between the data vectors after translation to an origin at the mean and normalization for magnitude. Thus correlation is equivalent to expressing the data in terms of a non-orthogonal basis. To understand the many resulting phenomena requires the use of the tensor concept of raising the index to transform the measured and observed covariant components into contravariant components before vector addition can be applied. The geometric viewpoint also demonstrates that correlation and covariance are geometric properties, as opposed to purely statistical properties, of the variates. Thus, variates from different distributions may be correlated, as desired, after selection from independent distributions. By determining the principal components of the correlation matrix, variates with the desired mean, magnitude, and correlation can be generated through linear transforms which include the eigenvalues and the eigenvectors of the correlation matrix. The conversion of the data to a non-orthogonal basis uses a compound linear transformation which distorts or stretches the data space. Hence, the correlated data does not have the same properties as the uncorrelated data used to generate it. This phenomenon is responsible for seemingly strange observations, such as the fact that the marginal distributions of the correlated data can be quite different from the distributions used to generate the data. The joint effect of statistical distributions and correlation remains a fertile area for further research. In terms of application to cost estimating, the geometric approach demonstrates that the estimator must have data and must understand that data in order to properly choose the correlation matrix appropriate for a given estimate. There is a general feeling by employers and managers that the field of cost requires little technical or mathematical background. Contrary to that opinion, this paper demonstrates that a background in mathematics equivalent to that needed for typical engineering and scientific disciplines at the masters or doctorate level is appropriate within the field of cost risk.
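
    The eigen-decomposition recipe described above can be sketched in a few lines:

        # Generate variates with a desired correlation matrix by scaling
        # independent samples along the eigenvectors of the correlation matrix.
        import numpy as np

        R = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])          # target correlation matrix

        vals, vecs = np.linalg.eigh(R)           # eigenvalues and eigenvectors
        L = vecs @ np.diag(np.sqrt(vals))        # L @ L.T == R

        rng = np.random.default_rng(0)
        Z = rng.standard_normal((100000, 3))     # independent unit-variance samples
        X = Z @ L.T                              # correlated samples

        print(np.corrcoef(X, rowvar=False).round(2))   # ~R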

  11. Weight estimation techniques for composite airplanes in general aviation industry

    NASA Technical Reports Server (NTRS)

    Paramasivam, T.; Horn, W. J.; Ritter, J.

    1986-01-01

Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for estimating the weight of aircraft components such as the wing, fuselage, and empennage. Regression analysis was applied to the basic equations for a database of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
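    A minimal sketch of the regression step, assuming a hypothetical power-law component-weight equation with made-up predictors (wing area, ultimate load factor, and a material property ratio) rather than the paper's actual equations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database of twelve airplanes: wing area S [m^2],
# ultimate load factor n, and a material property ratio k (composite vs. metal).
S = rng.uniform(8.0, 20.0, 12)
n = rng.uniform(3.8, 6.0, 12)
k = rng.uniform(0.6, 1.0, 12)

# Synthetic "measured" wing weights from an assumed power law plus noise.
W = 2.5 * S**1.2 * n**0.5 * k**-0.8 * rng.lognormal(0.0, 0.05, 12)

# Linearize the power law: log W = log c0 + a1 log S + a2 log n + a3 log k,
# then solve for the coefficients by least squares.
A = np.column_stack([np.ones(12), np.log(S), np.log(n), np.log(k)])
coef, *_ = np.linalg.lstsq(A, np.log(W), rcond=None)

c0, a1, a2, a3 = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(f"W ≈ {c0:.2f} * S^{a1:.2f} * n^{a2:.2f} * k^{a3:.2f}")
```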

  12. Web-Based Genome-Wide Association Study Identifies Two Novel Loci and a Substantial Genetic Component for Parkinson's Disease

    PubMed Central

    Do, Chuong B.; Tung, Joyce Y.; Dorfman, Elizabeth; Kiefer, Amy K.; Drabant, Emily M.; Francke, Uta; Mountain, Joanna L.; Goldman, Samuel M.; Tanner, Caroline M.; Langston, J. William; Wojcicki, Anne; Eriksson, Nicholas

    2011-01-01

Although the causes of Parkinson's disease (PD) are thought to be primarily environmental, recent studies suggest that a number of genes influence susceptibility. Using targeted case recruitment and online survey instruments, we conducted the largest case-control genome-wide association study (GWAS) of PD based on a single collection of individuals to date (3,426 cases and 29,624 controls). We discovered two novel, genome-wide significant associations with PD: rs6812193 near SCARB2 and rs11868035 near SREBF1/RAI1, both replicated in an independent cohort. We also replicated 20 previously discovered genetic associations (including LRRK2, GBA, SNCA, MAPT, GAK, and the HLA region), providing support for our novel study design. Relying on a recently proposed method based on genome-wide sharing estimates between distantly related individuals, we estimated the heritability of PD to be at least 0.27. Finally, using sparse regression techniques, we constructed predictive models that account for 6%–7% of the total variance in liability and that suggest the presence of true associations just beyond genome-wide significance, as confirmed through both internal and external cross-validation. These results indicate a substantial, but by no means total, contribution of genetics underlying susceptibility to both early-onset and late-onset PD, suggesting that, despite the novel associations discovered here and elsewhere, the majority of the genetic component for Parkinson's disease remains to be discovered. PMID:21738487

  13. Incorporating Neighborhood Choice in a Model of Neighborhood Effects on Income.

    PubMed

    van Ham, Maarten; Boschman, Sanne; Vogel, Matt

    2018-05-09

Studies of neighborhood effects often attempt to identify causal effects of neighborhood characteristics on individual outcomes, such as income, education, employment, and health. However, selection looms large in this line of research, and it has been argued that estimates of neighborhood effects are biased because people nonrandomly select into neighborhoods based on their preferences, income, and the availability of alternative housing. We propose a two-step framework to disentangle selection processes in the relationship between neighborhood deprivation and earnings. We model neighborhood selection using a conditional logit model, from which we derive correction terms. Driven by the recognition that most households prefer certain types of neighborhoods rather than specific areas, we employ a principal components analysis to reduce these terms into eight correction components. We use these to adjust parameter estimates from a model of subsequent neighborhood effects on individual income for the unequal probability that a household chooses to live in a particular type of neighborhood. We apply this technique to administrative data from the Netherlands. After we adjust for the differential sorting of households into certain types of neighborhoods, the effect of neighborhood income on individual income diminishes but remains significant. These results further emphasize that researchers need to be attuned to the role of selection bias when assessing the role of neighborhood effects on individual outcomes. Perhaps more importantly, the persistent effect of neighborhood deprivation on subsequent earnings suggests that neighborhood effects reflect more than the shared characteristics of neighborhood residents: place of residence partially determines economic well-being.
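    A hedged sketch of the dimension-reduction step: correction terms derived from a hypothetical conditional logit model of neighborhood choice are reduced to eight components, as in the paper, using PCA (all data shapes and values here are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical correction terms from a conditional logit model of neighborhood
# choice: one row per household, one column per neighborhood-type alternative.
correction_terms = rng.normal(size=(5000, 40))

# Reduce the correction terms to a small number of components (the paper used
# eight) to include as controls in the subsequent income model.
pca = PCA(n_components=8)
correction_components = pca.fit_transform(correction_terms)

print(correction_components.shape)          # (5000, 8)
print(pca.explained_variance_ratio_.sum())  # share of variance retained
```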

  14. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
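    The hybrid idea can be sketched as follows; the detection of baseline-shift segments is assumed to have been done already, and the spline and filter parameters are illustrative rather than the paper's settings:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import savgol_filter

def hybrid_correct(signal, artifact_segments):
    """Spline-then-Savitzky-Golay correction sketch.

    artifact_segments: list of (start, end) sample-index pairs previously
    flagged as baseline shifts; the detection step is not shown here.
    """
    corrected = signal.astype(float).copy()
    for start, end in artifact_segments:
        idx = np.arange(start, end)
        # Fit a smoothing spline to the artifact segment and subtract it,
        # keeping the segment mean so the series stays on its local baseline.
        spline = UnivariateSpline(idx, corrected[idx], s=len(idx))
        corrected[idx] -= spline(idx) - corrected[idx].mean()
    # Savitzky-Golay smoothing then removes residual high-frequency spikes.
    return savgol_filter(corrected, window_length=11, polyorder=3)
```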

  15. 14 CFR 25.1711 - Component identification: EWIS.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rules, by operating rules, or as a result of the assessment required by § 25.1709, EWIS components...) of this section must remain legible throughout the expected service life of the EWIS component. (d... adverse effect on the performance of that component throughout its expected service life. (e...

  16. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  17. An employee total health management-based survey of Iowa employers.

    PubMed

    Merchant, James A; Lind, David P; Kelly, Kevin M; Hall, Jennifer L

    2013-12-01

    To implement an Employee Total Health Management (ETHM) model-based questionnaire and provide estimates of model program elements among a statewide sample of Iowa employers. Survey a stratified random sample of Iowa employers, and characterize and estimate employer participation in ETHM program elements. Iowa employers are implementing less than 30% of all 12 components of ETHM, with the exception of occupational safety and health (46.6%) and workers' compensation insurance coverage (89.2%), but intend modest expansion of all components in the coming year. The ETHM questionnaire-based survey provides estimates of progress Iowa employers are making toward implementing components of Total Worker Health programs.

  18. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful for obtaining unbiased estimates of true effects or for predicting future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
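    A minimal sketch of the two components in a fitted linear model, using statsmodels on simulated exposure-outcome data: the coefficients summarize the systematic component, and the confidence intervals are built from the error component:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated exposure, a confounder, and an outcome with random error.
exposure = rng.normal(size=200)
age = rng.normal(50, 10, size=200)
outcome = 2.0 * exposure + 0.1 * age + rng.normal(scale=1.5, size=200)

# Systematic component: intercept + exposure + age; error component: residuals.
X = sm.add_constant(np.column_stack([exposure, age]))
fit = sm.OLS(outcome, X).fit()

print(fit.params)      # effect estimates (model coefficients)
print(fit.conf_int())  # confidence intervals built from the error component
```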

  19. Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior

    NASA Astrophysics Data System (ADS)

    Dumitru, Mircea; Djafari, Ali-Mohammad

    2015-01-01

The recent developments in chronobiology require analysis of variation in the periodic components of signals expressing biological rhythms, and a precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient given the particularities of the data (short length). In this paper we propose a new method that uses prior sparsity information (a reduced number of non-zero components). The law considered is the Student-t distribution, viewed as the marginal of an Infinite Gaussian Scale Mixture (IGSM) defined via a hidden variable representing the inverse variances, itself modelled as a Gamma distribution. The hyperparameters are modelled using conjugate priors, i.e., Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables, and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and Posterior Mean (PM). For the PM estimator, the expression of the posterior law is approximated by a separable one via the Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show results on synthetic data in cancer treatment applications.
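    As a sketch of the hierarchical prior described here (the parameterization and symbols are assumed, not taken from the paper), the Student-t marginal on a periodic-component amplitude f_j arises by integrating a Gaussian over a Gamma-distributed inverse variance τ_j:

```latex
p(f_j \mid \alpha, \beta)
  = \int_0^{\infty} \mathcal{N}\!\left(f_j \,\middle|\, 0,\; \tau_j^{-1}\right)
    \mathcal{G}\!\left(\tau_j \,\middle|\, \alpha, \beta\right) d\tau_j
```

    Small values of the shape and rate hyperparameters make this marginal heavy-tailed with a sharp peak at zero, which is what enforces sparsity on the periodic-components vector.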

  20. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree-planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared with VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, arising from a wide range of ambient thermal conditions, clothing insulation worn, and physical loads exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%); 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can therefore be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which practitioners can implement easily with inexpensive instruments. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
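    A minimal sketch of the correction arithmetic, assuming a linear individual HR-VO2 calibration from the morning step-test (function names and values are illustrative):

```python
import numpy as np

def estimate_work_vo2(hr_work, delta_hr_thermal, step_hr, step_vo2):
    """Estimate work VO2 [L/min] from thermally corrected heart rate.

    step_hr, step_vo2: paired HR [bpm] and VO2 [L/min] points from the
    morning submaximal step-test (individual HR-VO2 calibration).
    delta_hr_thermal: thermal component of HR from the seated rest pause.
    """
    # Individual linear calibration: VO2 = a * HR + b.
    a, b = np.polyfit(step_hr, step_vo2, 1)
    # Remove the thermal component before applying the calibration.
    hr_corrected = np.asarray(hr_work, dtype=float) - delta_hr_thermal
    return a * hr_corrected + b

# Example: step-test calibration and a work bout with a 15-bpm thermal component.
vo2 = estimate_work_vo2(hr_work=[120, 132, 128],
                        delta_hr_thermal=15.0,
                        step_hr=[90, 105, 120, 135],
                        step_vo2=[1.0, 1.4, 1.8, 2.2])
print(vo2)
```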

  1. Beyond SaGMRotI: Conversion to SaArb, SaSN, and SaMaxRot

    USGS Publications Warehouse

    Watson-Lamprey, J. A.; Boore, D.M.

    2007-01-01

    In the seismic design of structures, estimates of design forces are usually provided to the engineer in the form of elastic response spectra. Predictive equations for elastic response spectra are derived from empirical recordings of ground motion. The geometric mean of the two orthogonal horizontal components of motion is often used as the response value in these predictive equations, although it is not necessarily the most relevant estimate of forces within the structure. For some applications it is desirable to estimate the response value on a randomly chosen single component of ground motion, and in other applications the maximum response in a single direction is required. We give adjustment factors that allow converting the predictions of geometric-mean ground-motion predictions into either of these other two measures of seismic ground-motion intensity. In addition, we investigate the relation of the strike-normal component of ground motion to the maximum response values. We show that the strike-normal component of ground motion seldom corresponds to the maximum horizontal-component response value (in particular, at distances greater than about 3 km from faults), and that focusing on this case in exclusion of others can result in the underestimation of the maximum component. This research provides estimates of the maximum response value of a single component for all cases, not just near-fault strike-normal components. We provide modification factors that can be used to convert predictions of ground motions in terms of the geometric mean to the maximum spectral acceleration (SaMaxRot) and the random component of spectral acceleration (SaArb). Included are modification factors for both the mean and the aleatory standard deviation of the logarithm of the motions.

  2. A comparison of modelling techniques used to characterise oxygen uptake kinetics during the on-transient of exercise.

    PubMed

    Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A

    2001-09-01

We compared estimates for the phase 2 time constant (τ) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for τ and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, was best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6-min bout of exercise, or a two-component model fitted from 20 s, was best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 τ was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml·min⁻¹). The onset of the slow component, identified by the phase 3 time delay parameter, was delayed by approximately 2 min (vs. the arbitrary 3 min). Using this delay, ΔVO2(6-2 min) was approximately 400 ml·min⁻¹. Valid, consistent methods for estimating τ and the slow component in exercise are needed to advance physiological understanding.
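    A minimal sketch of fitting one of the compared variants, a mono-exponential model fitted from 20 s onward, on synthetic breath-by-breath data (all values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, baseline, amplitude, delay, tau):
    """Phase 2 mono-exponential VO2 on-transient (flat before the delay)."""
    return baseline + amplitude * (1 - np.exp(-(t - delay) / tau)) * (t >= delay)

# Synthetic VO2 [ml/min] sampled every 5 s for 6 min.
t = np.arange(0, 360, 5.0)
rng = np.random.default_rng(4)
vo2 = mono_exp(t, 800, 1200, 20, 30) + rng.normal(0, 40, t.size)

# Fit from 20 s onward, as one of the compared model variants does.
mask = t >= 20
popt, _ = curve_fit(mono_exp, t[mask], vo2[mask], p0=[800, 1000, 15, 25])
print(f"tau = {popt[3]:.1f} s")
```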

  3. Synthesis and Assimilation Systems - Essential Adjuncts to the Global Ocean Observing System

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Balmaseda, Magdalena; Awaji, Toshiyuki; Barnier, Bernard; Behringer, David; Bell, Mike; Bourassa, Mark; Brasseur, Pierre; Breivik, Lars-Anders; Carton, James; hide

    2009-01-01

    Ocean assimilation systems synthesize diverse in situ and satellite data streams into four-dimensional state estimates by combining the various observations with the model. Assimilation is particularly important for the ocean where subsurface observations, even today, are sparse and intermittent compared with the scales needed to represent ocean variability and where satellites only sense the surface. Developments in assimilation and in the observing system have advanced our understanding and prediction of ocean variations at mesoscale and climate scales. Use of these systems for assessing the observing system helps identify the strengths of each observation type. Results indicate that the ocean remains under-sampled and that further improvements in the observing system are needed. Prospects for future advances lie in improved models and better estimates of error statistics for both models and observations. Future developments will be increasingly towards consistent analyses across components of the Earth system. However, even today ocean synthesis and assimilation systems are providing products that are useful for many applications and should be considered an integral part of the global ocean observing and information system.

  4. Doppler-shift compensation in the Taiwanese leaf-nosed bat (Hipposideros terasensis) recorded with a telemetry microphone system during flight

    NASA Astrophysics Data System (ADS)

    Hiryu, Shizuko; Katsura, Koji; Lin, Liang-Kong; Riquimaroux, Hiroshi; Watanabe, Yoshiaki

    2005-12-01

    Biosonar behavior was examined in Taiwanese leaf-nosed bats (Hipposideros terasensis; CF-FM bats) during flight. Echolocation sounds were recorded using a telemetry microphone mounted on the bat's head. Flight speed and three-dimensional trajectory of the bat were reconstructed from images taken with a dual high-speed video camera system. Bats were observed to change the intensity and emission rate of pulses depending on the distance from the landing site. Frequencies of the dominant second harmonic constant frequency component (CF2) of calls estimated from the bats' flight speed agreed strongly with observed values. Taiwanese leaf-nosed bats changed CF2 frequencies depending on flight speed, which caused the CF2 frequencies of the Doppler-shifted echoes to remain constant. Pulse frequencies were also estimated using echoes returning directly ahead of the bat and from its sides for two different flight conditions: landing and U-turn. Bats in flight may periodically alter their attended angles from the front to the side when emitting echolocation pulses.
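    The compensation arithmetic can be sketched with a simple two-way Doppler model for a bat flying directly toward a stationary target; this is a simplification for illustration, not the study's analysis, and the values are hypothetical:

```python
def compensated_cf2(f_reference_hz, flight_speed_ms, sound_speed_ms=343.0):
    """Emitted CF2 frequency that keeps the echo at the reference frequency.

    For a bat flying toward a stationary target, the echo is shifted up by
    roughly a factor (c + v) / (c - v); compensation therefore lowers the
    emitted frequency by the inverse factor.
    """
    c, v = sound_speed_ms, flight_speed_ms
    return f_reference_hz * (c - v) / (c + v)

# A hypothetical 70 kHz resting frequency at a 5 m/s flight speed:
print(f"{compensated_cf2(70_000, 5.0):.0f} Hz")  # about 68 kHz
```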

  5. Traffic effects on bird counts on North American Breeding Bird Survey routes

    USGS Publications Warehouse

    Griffith, Emily H.; Sauer, John R.; Royle, J. Andrew

    2010-01-01

    The North American Breeding Bird Survey (BBS) is an annual roadside survey used to estimate population change in >420 species of birds that breed in North America. Roadside sampling has been criticized, in part because traffic noise can interfere with bird counts. Since 1997, data have been collected on the numbers of vehicles that pass during counts at each stop. We assessed the effect of traffic by modeling total vehicles as a covariate of counts in hierarchical Poisson regression models used to estimate population change. We selected species for analysis that represent birds detected at low and high abundance and birds with songs of low and high frequencies. Increases in vehicle counts were associated with decreases in bird counts in most of the species examined. The size and direction of these effects remained relatively constant between two alternative models that we analyzed. Although this analysis indicated only a small effect of incorporating traffic effects when modeling roadside counts of birds, we suggest that continued evaluation of changes in traffic at BBS stops should be a component of future BBS analyses.
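    A minimal sketch of the modeling idea, simplified from the hierarchical Poisson regression to an ordinary Poisson GLM with total vehicles as a covariate (simulated data, illustrative effect size):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated stop-level data: vehicle counts and bird counts whose expected
# value declines as traffic increases (an illustrative, not estimated, effect).
vehicles = rng.poisson(5, size=1000)
birds = rng.poisson(np.exp(1.5 - 0.05 * vehicles))

# Poisson regression with total vehicles as a covariate of the counts.
X = sm.add_constant(vehicles.astype(float))
fit = sm.GLM(birds, X, family=sm.families.Poisson()).fit()
print(fit.params)  # the vehicle coefficient recovers the negative effect
```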

  6. Dynamic Strain Measurements on Automotive and Aeronautic Composite Components by Means of Embedded Fiber Bragg Grating Sensors

    PubMed Central

    Lamberti, Alfredo; Chiesura, Gabriele; Luyckx, Geert; Degrieck, Joris; Kaufmann, Markus; Vanlanduit, Steve

    2015-01-01

    The measurement of the internal deformations occurring in real-life composite components is a very challenging task, especially for those components that are rather difficult to access. Optical fiber sensors can overcome such a problem, since they can be embedded in the composite materials and serve as in situ sensors. In this article, embedded optical fiber Bragg grating (FBG) sensors are used to analyze the vibration characteristics of two real-life composite components. The first component is a carbon fiber-reinforced polymer automotive control arm; the second is a glass fiber-reinforced polymer aeronautic hinge arm. The modal parameters of both components were estimated by processing the FBG signals with two interrogation techniques: the maximum detection and fast phase correlation algorithms were employed for the demodulation of the FBG signals; the Peak-Picking and PolyMax techniques were instead used for the parameter estimation. To validate the FBG outcomes, reference measurements were performed by means of a laser Doppler vibrometer. The analysis of the results showed that the FBG sensing capabilities were enhanced when the recently-introduced fast phase correlation algorithm was combined with the state-of-the-art PolyMax estimator curve fitting method. In this case, the FBGs provided the most accurate results, i.e., it was possible to fully characterize the vibration behavior of both composite components. When using more traditional interrogation algorithms (maximum detection) and modal parameter estimation techniques (Peak-Picking), some of the modes were not successfully identified. PMID:26516854

  7. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
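    The polynomial (here linear) signal-dependent noise model var(I) = a + b·I can be illustrated by regressing local variance on local mean over homogeneous blocks; this scatter-fit sketch is a simplification of the paper's maximum-likelihood estimator and ignores its Fisher-information weighting:

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear signal-dependent noise model: var(I) = a + b * I (values illustrative).
a_true, b_true = 4.0, 0.5

# Simulate homogeneous 16x16 blocks at random intensities with that noise.
block_means = rng.uniform(10, 200, size=400)
means, variances = [], []
for m in block_means:
    block = m + rng.normal(0.0, np.sqrt(a_true + b_true * m), size=(16, 16))
    means.append(block.mean())
    variances.append(block.var(ddof=1))

# Recover the noise parameters by regressing local variance on local mean.
b_hat, a_hat = np.polyfit(means, variances, 1)
print(f"a ≈ {a_hat:.2f}, b ≈ {b_hat:.2f}")  # close to (4.0, 0.5)
```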

  8. Short communication: Estimates of genetic parameters for dairy fertility in New Zealand.

    PubMed

    Amer, P R; Stachowicz, K; Jenkins, G M; Meier, S

    2016-10-01

Reproductive performance of dairy cows in a seasonal calving system is especially important, as cows are required to achieve a 365-d calving interval. Prior research with a small data set indicated that the genetic evaluation model for fertility could be enhanced by several changes: replacing the binary calving rate trait (CR42), which gives the probability of a cow calving within the first 42 d after the planned start of calving at second, third, and fourth calving, with a continuous version, calving season day (CSD); including a heifer CSD trait expressed at first calving; removing milk yield; retaining the probability of mating trait (PM21), which gives the probability of a cow being mated within the first 21 d from the planned start of mating, and first-lactation body condition score (BCS); and including gestation length (GL). The aim of this study was to estimate genetic parameters for the proposed new model using a larger data set and to compare these with the parameters used in the current system. Heritability estimates for CSD and PM21 ranged from 0.013 to 0.019 and from 0.031 to 0.058, respectively. For the two traits that correspond with those used in the current genetic evaluation system (PM21 and BCS), genetic correlations were lower in this study than previous estimates. Genetic correlations between CSD and PM21 across different parities were also lower than the correlations between CR42 and PM21 reported previously. The genetic correlation between heifer CSD and CSD in first parity was 0.66. Estimates of genetic correlations of BCS with CSD were higher than those with PM21. For GL, direct heritability was estimated to be 0.67, maternal heritability was 0.11, and maternal repeatability was 0.22. Direct GL had moderate to high and favorable genetic correlations with the evaluated fertility traits, whereas the corresponding residual correlations remained low, which makes GL a useful candidate predictor trait for fertility in a multiple-trait evaluation. The superiority of the direct GL genetic component over the maternal GL component for predicting fertility was demonstrated. Future work planned in this area includes the implementation and testing of this new model on national fertility data. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Polygenic risk score and heritability estimates reveals a genetic relationship between ASD and OCD.

    PubMed

    Guo, W; Samuels, J F; Wang, Y; Cao, H; Ritter, M; Nestadt, P S; Krasnow, J; Greenberg, B D; Fyer, A J; McCracken, J T; Geller, D A; Murphy, D L; Knowles, J A; Grados, M A; Riddle, M A; Rasmussen, S A; McLaughlin, N C; Nurmi, E L; Askland, K D; Cullen, B A; Piacentini, J; Pauls, D L; Bienvenu, O J; Stewart, S E; Goes, F S; Maher, B; Pulver, A E; Valle, D; Mattheisen, M; Qian, J; Nestadt, G; Shugart, Y Y

    2017-07-01

Obsessive-compulsive disorder (OCD) and autism spectrum disorder (ASD) are both highly heritable neurodevelopmental disorders that conceivably share genetic risk factors. However, the underlying genetic determinants remain largely unknown. In this work, the authors describe a combined genome-wide association study (GWAS) of ASD and OCD. The OCD dataset includes 2998 individuals in nuclear families. The ASD dataset includes 6898 individuals in case-parent trios. GWAS summary statistics were examined for potential enrichment of functional variants associated with gene expression levels in brain regions. The top-ranked SNP is rs4785741 (chromosome 16), with P = 6.9×10⁻⁷ in our re-analysis. Polygenic risk score analyses were conducted to investigate the genetic relationship within and across the two disorders. These analyses identified a significant polygenic component of ASD, predicting 0.11% of the phenotypic variance in an independent OCD data set. In addition, we examined the genomic architecture of ASD and OCD by estimating heritability on different chromosomes and for different allele frequencies, analyzing genome-wide common variant data using the Genome-wide Complex Trait Analysis (GCTA) program. The estimated global heritability of OCD is 0.427 (se = 0.093) and 0.174 (se = 0.053) for ASD in these imputed data. Published by Elsevier B.V.

  10. Agriculture is a major source of NOx pollution in California

    PubMed Central

    Almaraz, Maya; Bai, Edith; Wang, Chao; Trousdell, Justin; Conley, Stephen; Faloona, Ian; Houlton, Benjamin Z.

    2018-01-01

Nitrogen oxides (NOx = NO + NO2) are a primary component of air pollution, a leading cause of premature death in humans and of biodiversity declines worldwide. Although regulatory policies in California have successfully limited transportation sources of NOx pollution, several of the United States' worst air-quality districts remain in rural regions of the state. Site-based findings suggest that NOx emissions from California's agricultural soils could contribute to air quality issues; however, a statewide estimate has hitherto been lacking. We show that agricultural soils are a dominant source of NOx pollution in California, with especially high soil NOx emissions from the state's Central Valley region. We base our conclusion on two independent approaches: (i) a bottom-up spatial model of soil NOx emissions and (ii) top-down airborne observations of atmospheric NOx concentrations over the San Joaquin Valley. These approaches point to a large, overlooked NOx source from cropland soil, which is estimated to increase the NOx budget by 20 to 51%. These estimates are consistent with previous studies of point-scale measurements of NOx emissions from soil. Our results highlight opportunities to limit NOx emissions from agriculture by investing in management practices that will bring co-benefits to the economy, ecosystems, and human health in rural areas of California. PMID:29399630

  11. Emergency general surgery: definition and estimated burden of disease.

    PubMed

    Shafi, Shahid; Aboutanos, Michel B; Agarwal, Suresh; Brown, Carlos V R; Crandall, Marie; Feliciano, David V; Guillamondegui, Oscar; Haider, Adil; Inaba, Kenji; Osler, Turner M; Ross, Steven; Rozycki, Grace S; Tominaga, Gail T

    2013-04-01

Acute care surgery encompasses trauma, surgical critical care, and emergency general surgery (EGS). While the first two components are well defined, the scope of EGS practice remains unclear. This article describes the work of the American Association for the Surgery of Trauma to define EGS. A total of 621 unique International Classification of Diseases-9th Rev. (ICD-9) diagnosis codes were identified using billing data (calendar year 2011) from seven large academic medical centers that practice EGS. A modified Delphi methodology was used by the American Association for the Surgery of Trauma Committee on Severity Assessment and Patient Outcomes to review these codes and achieve consensus on the definition of primary EGS diagnosis codes. National Inpatient Sample data from 2009 were used to develop a national estimate of the EGS burden of disease. Several unique ICD-9 codes were identified as primary EGS diagnoses. These encompass a wide spectrum of general surgery practice, including the upper and lower gastrointestinal tract, hepatobiliary and pancreatic disease, soft tissue infections, and hernias. National Inpatient Sample estimates revealed over 4 million inpatient encounters nationally in 2009 for EGS diseases. This article provides the first list of ICD-9 diagnosis codes that define the scope of EGS based on current clinical practices. These findings have wide implications for EGS workforce training, access to care, and research.

  12. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
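    A minimal sketch of the letter's pipeline under assumed details: smooth a synthetic run-to-failure vibration history with a local linear estimator, let a small neural network learn the one-step degradation map, and iterate that map to a failure threshold to count the remaining steps:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Historical run-to-failure vibration data (synthetic, monotonically rising).
t = np.arange(400)
vib = 0.1 * np.exp(0.012 * t) + rng.normal(0, 0.02, t.size)

# Local linear estimator: moving least-squares line through each window.
def local_linear(y, half_width=10):
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half_width), min(len(y), i + half_width + 1)
        slope, intercept = np.polyfit(np.arange(lo, hi), y[lo:hi], 1)
        out[i] = slope * i + intercept
    return out

smooth = local_linear(vib)

# The network learns the one-step degradation map: current level -> next level.
X, y = smooth[:-1].reshape(-1, 1), smooth[1:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

# For a unit currently at the mid-life level, iterate the one-step map until
# the failure threshold; the step count is the estimated RUL.
level, steps, threshold = smooth[250], 0, smooth[-1]
while level < threshold and steps < 10_000:
    level = net.predict([[level]])[0]
    steps += 1
print(f"estimated RUL: about {steps} steps")  # about 150 for this synthetic trend
```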

  13. Reducing the Knowledge Tracing Space

    ERIC Educational Resources Information Center

    Ritter, Steven; Harris, Thomas K.; Nixon, Tristan; Dickison, Daniel; Murray, R. Charles; Towle, Brendon

    2009-01-01

In Cognitive Tutors, student skill is represented by estimates of student knowledge on various knowledge components. The estimate for each knowledge component is based on the four-parameter model developed by Corbett and Anderson. In this paper, we investigate the nature of the parameter space defined by these four parameters by modeling data…
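    The four-parameter model referred to here is standard Bayesian Knowledge Tracing, with parameters p-init, p-transit, p-guess, and p-slip; a minimal update sketch follows (parameter values illustrative):

```python
def bkt_update(p_know, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One Bayesian Knowledge Tracing step for a single knowledge component."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Learning opportunity: mastery may transition after the practice step.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # p-init: prior probability the student knows the component
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
    print(f"P(known) = {p:.3f}")
```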

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guiriec, S.; Kouveliotou, C.; Hartmann, D. H.

The origin of prompt emission from gamma-ray bursts (GRBs) remains an open question. Correlated prompt optical and γ-ray emission observed in a handful of GRBs strongly suggests a common emission region, but failure to adequately fit the broadband GRB spectrum prompted the hypothesis of different emission mechanisms for the low- and high-energy radiation. We demonstrate that our multi-component model for GRB γ-ray prompt emission provides an excellent fit to GRB 110205A from optical to γ-ray energies. Our results show that the optical and highest-energy γ-ray emissions have the same spatial and spectral origin, which is different from the bulk of the X-ray and softest γ-ray radiation. Finally, our accurate redshift estimate for GRB 110205A demonstrates promise for using GRBs as cosmological standard candles.

  15. Integrating Evolutionary Game Theory into Mechanistic Genotype-Phenotype Mapping.

    PubMed

    Zhu, Xuli; Jiang, Libo; Ye, Meixia; Sun, Lidan; Gragnoli, Claudia; Wu, Rongling

    2016-05-01

    Natural selection has shaped the evolution of organisms toward optimizing their structural and functional design. However, how this universal principle can enhance genotype-phenotype mapping of quantitative traits has remained unexplored. Here we show that the integration of this principle and functional mapping through evolutionary game theory gains new insight into the genetic architecture of complex traits. By viewing phenotype formation as an evolutionary system, we formulate mathematical equations to model the ecological mechanisms that drive the interaction and coordination of its constituent components toward population dynamics and stability. Functional mapping provides a procedure for estimating the genetic parameters that specify the dynamic relationship of competition and cooperation and predicting how genes mediate the evolution of this relationship during trait formation. Copyright © 2016 Elsevier Ltd. All rights reserved.
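    As one hedged illustration of the kind of equations involved (not the authors' specific model), the interaction of two constituent components M1 and M2 can be written as a Lotka-Volterra-type pair in which the signs of the coupling coefficients encode competition or cooperation:

```latex
\frac{dM_1}{dt} = r_1 M_1\left(1 - \frac{M_1}{K_1}\right) + \alpha_{12} M_1 M_2,
\qquad
\frac{dM_2}{dt} = r_2 M_2\left(1 - \frac{M_2}{K_2}\right) + \alpha_{21} M_1 M_2
```

    Here r_i and K_i are growth parameters and negative (positive) α values represent competition (cooperation); functional mapping then estimates genotype-specific values of these parameters, so genes can be tested for their effect on the competition-cooperation balance.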

  16. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components of setup error in radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
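    A minimal sketch of the ANOVA-based computation for the balanced one-factor random-effects model: the between-patient mean square yields the systematic component and the within-patient mean square the random component (simulated setup errors, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(8)

# Balanced setup errors [mm]: 20 patients x 10 fractions, with true
# systematic (between-patient) SD = 1.0 and random (within-patient) SD = 2.0.
n_patients, n_fractions = 20, 10
patient_means = rng.normal(0.0, 1.0, size=(n_patients, 1))
errors = patient_means + rng.normal(0.0, 2.0, size=(n_patients, n_fractions))

# ANOVA mean squares.
grand = errors.mean()
ms_between = n_fractions * ((errors.mean(axis=1) - grand) ** 2).sum() / (n_patients - 1)
ms_within = ((errors - errors.mean(axis=1, keepdims=True)) ** 2).sum() / (n_patients * (n_fractions - 1))

# Variance-component estimates.
sigma_random = np.sqrt(ms_within)
sigma_systematic = np.sqrt(max((ms_between - ms_within) / n_fractions, 0.0))
print(f"systematic ≈ {sigma_systematic:.2f} mm, random ≈ {sigma_random:.2f} mm")
```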

  17. Quantifying components of the hydrologic cycle in Virginia using chemical hydrograph separation and multiple regression analysis

    USGS Publications Warehouse

    Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.

    2012-01-01

    This study by the U.S. Geological Survey, prepared in cooperation with the Virginia Department of Environmental Quality, quantifies the components of the hydrologic cycle across the Commonwealth of Virginia. Long-term, mean fluxes were calculated for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971–2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. The base-flow proportion for the 48 watersheds averaged 72 percent using specific conductance, a value that was substantially higher than the 61 percent average calculated using a graphical-separation technique (the USGS program PART). Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia.
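    The specific-conductance separation rests on two-component mass-balance mixing; a minimal sketch with hypothetical end-member values, not figures from the study:

```python
def baseflow_fraction(sc_stream, sc_runoff, sc_baseflow):
    """Two-component mixing: fraction of streamflow that is base flow.

    Solves Q*SC = Qbf*SCbf + Qro*SCro with Qbf + Qro = Q, where SC is
    specific conductance; the end-member values are calibration inputs.
    """
    return (sc_stream - sc_runoff) / (sc_baseflow - sc_runoff)

# Example: stream at 180 µS/cm between runoff (40) and base-flow (240) end-members.
print(f"base-flow fraction: {baseflow_fraction(180, 40, 240):.2f}")  # 0.70
```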

  18. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

This report describes the results of a project to improve the generic component failure database for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure databases and from suggestions from SRS safety analysts. Sources of data or failure rate estimates were then identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR). This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.

  19. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

In the oil and gas industry, pipelines are a major component of the oil and gas transmission and distribution process. Oil and gas are sometimes transported through pipelines that cross a variety of environmental conditions, so a pipeline must operate safely to avoid harming the surrounding environment. Corrosion is still a major cause of failure in equipment components of production facilities. In pipeline systems, corrosion can cause wall failures and damage to the pipeline, so the pipeline system requires care and periodic inspection. Every production facility has a level of risk for damage determined by the likelihood and the consequences of that damage. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, which relates the likelihood of failure to the consequences of failure for a component; the result is then used to determine the next inspection plan. Nine pipeline components were examined, including inlet straight pipes, connection tees, and outlet straight pipes. The risk assessment of the nine pipeline components is presented in a risk matrix; all components were found to be at medium risk levels. The failure mechanism considered in this research is thinning. Based on the calculated corrosion rate, the remaining age of each pipeline component can be obtained, and hence its remaining lifetime; the results vary for each component. The next step is planning the inspection of the pipeline components by external NDT methods.
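    The remaining-life arithmetic for the thinning mechanism reduces to the corrosion margin divided by the corrosion rate; a minimal sketch following that general logic (hypothetical wall thicknesses and rate, not the full API 581 procedure):

```python
def remaining_life_years(t_current_mm, t_required_mm, corrosion_rate_mm_per_yr):
    """Remaining life of a thinning component: the margin over the minimum
    required wall thickness divided by the corrosion rate."""
    return (t_current_mm - t_required_mm) / corrosion_rate_mm_per_yr

# Example: 9.5 mm measured wall, 6.0 mm minimum required, 0.12 mm/yr thinning.
print(f"{remaining_life_years(9.5, 6.0, 0.12):.1f} years")  # about 29 years
```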

  20. TYC 5780-308-1 Discovery of Stellar Duplicity During Asteroidal Occultation by (834) Burnhamia

    NASA Astrophysics Data System (ADS)

    Timerson, Brad; George, T.; Blank, Ted; Maley, Paul; Messner, Steve; Moore, John

    2018-04-01

An occultation of TYC 5780-308-1 by the asteroid (834) Burnhamia on August 23, 2017 (UT) showed this star to be a double star. Both components of the double star were occulted as recorded by four observers. The separation of the two components is 0.0143 ± 0.0004 arcseconds at a position angle of 73.8 ± 2.7 degrees. The magnitude of the primary component is estimated to be 9.8 ± 0.03 (Tycho2 VT). The magnitude of the secondary component is estimated to be 9.92 ± 0.03 (Tycho2 VT).

  1. A methodology for probabilistic remaining creep life assessment of gas turbine components

    NASA Astrophysics Data System (ADS)

    Liu, Zhimin

Certain gas turbine components operate in harsh environments, and various mechanisms may lead to component failure. It is common practice to use remaining life assessments to help operators schedule maintenance and component replacements. Creep is a major failure mechanism affecting the remaining life assessment, and the resulting life consumption of a component is highly sensitive to variations in the material stresses and temperatures, which fluctuate significantly with changes in real operating conditions. In addition, variations in material properties and geometry result in changes in the creep life consumption rate. The traditional method for remaining life assessment assumes a set of fixed operating conditions at all times and fails to capture the variations in operating conditions. This translates into a significant loss of accuracy and unnecessarily high maintenance and replacement costs. A new method that captures these variations and improves the prediction accuracy of remaining life is developed. First, a metamodel is built to approximate the relationship between the variables (operating conditions, material properties, geometry, etc.) and a creep response. The metamodel is developed using a Response Surface Method/Design of Experiments methodology. Design of Experiments is an efficient sampling method; for each sampling point, a set of finite element analyses is used to compute the corresponding response value. Next, a low-order polynomial Response Surface Equation (RSE) is fitted to these values. Four techniques are suggested to dramatically reduce computational effort and to increase the accuracy of the RSE: a smart meshing technique, automatic geometry parameterization, screening tests, and regional RSE refinement. The RSEs, along with a probabilistic method and a life fraction model, are used to compute current damage accumulation and remaining life. By capturing the variations mentioned above, the new method achieves much better accuracy than the traditional method. After further development and proper verification, the method should bring significant savings by reducing the number of inspections and deferring part replacement.
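    A minimal sketch of the probabilistic pipeline under stated assumptions: a hypothetical low-order response surface stands in for the finite-element creep response, Monte Carlo sampling captures operating-condition variation, and a life-fraction rule accumulates damage (all coefficients and distributions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)

def rupture_life_hours(temp_c, stress_mpa):
    """Hypothetical response-surface equation (low-order polynomial in log
    space) standing in for finite-element creep-rupture predictions."""
    return 10 ** (13.7 - 0.008 * temp_c - 0.012 * stress_mpa)

# Monte Carlo over operating-condition variation: 1000-hour mission blocks.
n_samples, hours_per_block, blocks = 10_000, 1000.0, 50
temps = rng.normal(850.0, 20.0, size=(n_samples, blocks))     # deg C
stresses = rng.normal(180.0, 15.0, size=(n_samples, blocks))  # MPa

# Life-fraction rule: damage = sum of (time at condition / rupture life there).
damage = (hours_per_block / rupture_life_hours(temps, stresses)).sum(axis=1)

# Probability that accumulated creep damage exceeds 1 (failure criterion).
print(f"P(failure) ≈ {(damage >= 1.0).mean():.3f}")
```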

  2. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  3. Sea-level rise impacts on the tides of the European Shelf

    NASA Astrophysics Data System (ADS)

    Idier, Déborah; Paris, François; Cozannet, Gonéri Le; Boulahya, Faiza; Dumas, Franck

    2017-04-01

Sea-level rise (SLR) can modify not only total water levels, but also tidal dynamics. Several studies have investigated the effects of SLR on the tides of the western European continental shelf (mainly the M2 component). We further investigate this issue using a modelling-based approach, considering uniform SLR scenarios from -0.25 m to +10 m above present-day sea level. Assuming that coastal defenses are constructed along present-day shorelines, the patterns of change in high tide levels (annual maximum water level) are spatially similar, regardless of the magnitude of sea-level rise (i.e., the sign of the change remains the same, regardless of the SLR scenario) over most of the area (70%). Notable increases in high tide levels occur especially in the northern Irish Sea, the southern part of the North Sea and the German Bight, and decreases occur mainly in the western English Channel. These changes are generally proportional to SLR, as long as SLR remains smaller than 2 m. Depending on the location, they can account for ±15% of regional SLR. High tide levels and the M2 component exhibit slightly different patterns. Analysis of the 12 largest tidal components highlights the need to take into account at least the M2, S2, N2, M4, MS4 and MN4 components when investigating the effects of SLR on tides. Changes in high tide levels are much less proportional to SLR when flooding is allowed, in particular in the German Bight. However, some areas (e.g., the English Channel) are not very sensitive to this option, meaning that the effects of SLR would be predictable in these areas, even if future coastal defense strategies are ignored. Physically, SLR-induced tidal changes result from the competition between reductions in bed friction damping, changes in resonance properties and increased reflection at the coast, i.e., local and non-local processes. A preliminary estimate of tidal changes by 2100 under a plausible non-uniform SLR scenario (using the RCP4.5 scenario) is provided. Though the changes display similar patterns, the high water levels appear to be sensitive to the non-uniformity of SLR.

  4. An Employee Total Health Management–Based Survey of Iowa Employers

    PubMed Central

    Merchant, James A.; Lind, David P.; Kelly, Kevin M.; Hall, Jennifer L.

    2015-01-01

Objective To implement an Employee Total Health Management (ETHM) model-based questionnaire and provide estimates of model program elements among a statewide sample of Iowa employers. Methods Survey a stratified random sample of Iowa employers, and characterize and estimate employer participation in ETHM program elements. Results Iowa employers are implementing under 30% of all 12 components of ETHM, with the exception of occupational safety and health (46.6%) and workers' compensation insurance coverage (89.2%), but intend modest expansion of all components in the coming year. Conclusions The ETHM questionnaire-based survey provides estimates of progress Iowa employers are making toward implementing components of Total Worker Health programs. PMID:24284757

  5. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.

  6. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

The maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, unlike existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands to millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.

  7. Estimating chronic disease rates in Canada: which population-wide denominator to use?

    PubMed

    Ellison, J; Nagamuthu, C; Vanderloo, S; McRae, B; Waters, C

    2016-10-01

    Chronic disease rates are produced from the Public Health Agency of Canada's Canadian Chronic Disease Surveillance System (CCDSS) using administrative health data from provincial/territorial health ministries. Denominators for these rates are based on estimates of populations derived from health insurance files. However, these data may not be accessible to all researchers. Another source for population size estimates is the Statistics Canada census. The purpose of our study was to calculate the major differences between the CCDSS and Statistics Canada's population denominators and to identify the sources or reasons for the potential differences between these data sources. We compared the 2009 denominators from the CCDSS and Statistics Canada. The CCDSS denominator was adjusted for the growth components (births, deaths, emigration and immigration) from Statistics Canada's census data. The unadjusted CCDSS denominator was 34 429 804, 3.2% higher than Statistics Canada's estimate of population in 2009. After the CCDSS denominator was adjusted for the growth components, the difference between the two estimates was reduced to 431 323 people, a difference of 1.3%. The CCDSS overestimates the population relative to Statistics Canada overall. The largest difference between the two estimates was from the migrant growth component, while the smallest was from the emigrant component. By using data descriptions by data source, researchers can make decisions about which population to use in their calculations of disease frequency.

  8. What Klein's "Semantic Gradient" Does and Does Not Really Show: Decomposing Stroop Interference into Task and Informational Conflict Components.

    PubMed

    Levin, Yulia; Tzelgov, Joseph

    2016-01-01

    The present study suggests that the idea that Stroop interference originates from multiple components may gain theoretically from integrating two independent frameworks. The first framework is represented by the well-known notion of "semantic gradient" of interference and the second one is the distinction between two types of conflict - the task and the informational conflict - giving rise to the interference (MacLeod and MacDonald, 2000; Goldfarb and Henik, 2007). The proposed integration led to the conclusion that two (i.e., orthographic and lexical components) of the four theoretically distinct components represent task conflict, and the other two (i.e., indirect and direct informational conflict components) represent informational conflict. The four components were independently estimated in a series of experiments. The results confirmed the contribution of task conflict (estimated by a robust orthographic component) and of informational conflict (estimated by a strong direct informational conflict component) to Stroop interference. However, the performed critical review of the relevant literature (see General Discussion), as well as the results of the experiments reported, showed that the other two components expressing each type of conflict (i.e., the lexical component of task conflict and the indirect informational conflict) were small and unstable. The present analysis refines our knowledge of the origins of Stroop interference by providing evidence that each type of conflict has its major and minor contributions. The implications for cognitive control of an automatic reading process are also discussed.

  9. What Klein’s “Semantic Gradient” Does and Does Not Really Show: Decomposing Stroop Interference into Task and Informational Conflict Components

    PubMed Central

    Levin, Yulia; Tzelgov, Joseph

    2016-01-01

    The present study suggests that the idea that Stroop interference originates from multiple components may gain theoretically from integrating two independent frameworks. The first framework is represented by the well-known notion of “semantic gradient” of interference and the second one is the distinction between two types of conflict – the task and the informational conflict – giving rise to the interference (MacLeod and MacDonald, 2000; Goldfarb and Henik, 2007). The proposed integration led to the conclusion that two (i.e., orthographic and lexical components) of the four theoretically distinct components represent task conflict, and the other two (i.e., indirect and direct informational conflict components) represent informational conflict. The four components were independently estimated in a series of experiments. The results confirmed the contribution of task conflict (estimated by a robust orthographic component) and of informational conflict (estimated by a strong direct informational conflict component) to Stroop interference. However, the performed critical review of the relevant literature (see General Discussion), as well as the results of the experiments reported, showed that the other two components expressing each type of conflict (i.e., the lexical component of task conflict and the indirect informational conflict) were small and unstable. The present analysis refines our knowledge of the origins of Stroop interference by providing evidence that each type of conflict has its major and minor contributions. The implications for cognitive control of an automatic reading process are also discussed. PMID:26955363

  10. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model that represents samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet features will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification; that is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and, generally, increases prediction accuracy. PMID:22208882
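
    As a toy illustration of the sample-by-sample idea (not the paper's algorithm), the sketch below stacks a control reference, a case reference, and one test sample and factorizes only that small matrix with an l1-regularized NMF; all names and values are hypothetical.

    ```python
    # Hypothetical sketch of per-sample sparse factorization; not the paper's method.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    x_control_ref = rng.random((1, 500))   # mean control spectrum (1 x features)
    x_case_ref = rng.random((1, 500))      # mean case spectrum
    x_test = rng.random((1, 500))          # one test sample

    # Stack reference and test rows and factorize this small matrix only,
    # i.e. sample-by-sample rather than factorizing the whole dataset at once.
    X = np.vstack([x_control_ref, x_case_ref, x_test])
    model = NMF(n_components=3, init="nndsvda", l1_ratio=1.0, alpha_W=0.01, max_iter=500)
    W = model.fit_transform(X)             # mixing coefficients per sample
    H = model.components_                  # extracted components over features

    # A component loading mostly on the control row can be read as control-specific,
    # one loading on the case row as case-specific, and the rest as neutral;
    # in the paper the assignment thresholds are estimated from each sample itself.
    print(np.round(W, 2))
    ```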

  11. An observational assessment of the influence of mesoscale and submesoscale heterogeneity on ocean biogeochemical reactions

    NASA Astrophysics Data System (ADS)

    Martin, Adrian P.; Lévy, Marina; van Gennip, Simon; Pardo, Silvia; Srokosz, Meric; Allen, John; Painter, Stuart C.; Pidcock, Roz

    2015-09-01

    Numerous observations demonstrate that considerable spatial variability exists in components of the marine planktonic ecosystem at the mesoscale and submesoscale (100 km-1 km). The causes and consequences of physical processes at these scales ("eddy advection") influencing biogeochemistry have received much attention. Less studied is the consequence of the nonlinear nature of most ecological and biogeochemical interactions: such spatial variability affects regional estimates of processes including primary production and grazing, independent of the physical processes. This effect has been termed "eddy reactions." Models remain our most powerful tools for extrapolating hypotheses for biogeochemistry to global scales and for making future projections. The spatial resolution of most climate and global biogeochemical models means that processes at the mesoscale and submesoscale are poorly resolved. Previous modeling work has suggested that the neglected eddy reactions may be almost as large as the mean-field estimates in some cases. This study seeks to quantify the relative size of eddy and mean reactions observationally, using in situ and satellite data. For primary production, grazing, and zooplankton mortality the eddy reactions are between 7% and 15% of the mean reactions. These should be regarded as preliminary estimates to encourage further observational estimates and not taken as a justification for ignoring eddy reactions. Compared to modeling estimates, there are inconsistencies in the relative magnitude of eddy reactions and in the correlations which are a major control on their magnitude. One possibility is that models exhibit much stronger spatial correlations than are found in reality, effectively amplifying the magnitude of eddy reactions.

  12. Observational estimation of radiative feedback to surface air temperature over Northern High Latitudes

    NASA Astrophysics Data System (ADS)

    Hwang, Jiwon; Choi, Yong-Sang; Kim, WonMoo; Su, Hui; Jiang, Jonathan H.

    2018-01-01

    The high-latitude climate system contains complicated, but largely veiled, physical feedback processes. Climate predictions remain uncertain, especially for the Northern High Latitudes (NHL; north of 60°N), and observational constraint on climate modeling is vital. This study estimates local radiative feedbacks for NHL based on the CERES/Terra satellite observations during March 2000-November 2014. The local shortwave (SW) and longwave (LW) radiative feedback parameters are calculated from linear regression of radiative fluxes at the top of the atmosphere on surface air temperatures. The parameters are estimated after de-seasonalizing and applying a 12-month moving average to the radiative fluxes over NHL. The estimated magnitudes of the SW and the LW radiative feedbacks in NHL are 1.88 ± 0.73 and 2.38 ± 0.59 W m⁻² K⁻¹, respectively. The parameters are further decomposed into individual feedback components associated with surface albedo, water vapor, lapse rate, and clouds, as the product of the change in climate variables from ERA-Interim reanalysis estimates and their pre-calculated radiative kernels. The results reveal the significant role of clouds in reducing the surface albedo feedback (1.13 ± 0.44 W m⁻² K⁻¹ in the cloud-free condition, and 0.49 ± 0.30 W m⁻² K⁻¹ in the all-sky condition), while the lapse rate feedback is predominant in LW radiation (1.33 ± 0.18 W m⁻² K⁻¹). However, a large portion of the local SW and LW radiative feedbacks was not simply explained by the sum of these individual feedbacks.
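
    A minimal sketch of the regression described, with synthetic monthly series standing in for the CERES and reanalysis data: de-seasonalize both series, apply a 12-month moving average, and take the slope of flux on temperature as the feedback parameter.

    ```python
    # Illustrative only; synthetic data replace the satellite observations.
    import numpy as np

    def deseasonalize(x):
        """Remove the mean annual cycle from a monthly series."""
        anom = np.asarray(x, dtype=float).copy()
        for m in range(12):
            anom[m::12] -= anom[m::12].mean()
        return anom

    def moving_average(x, n=12):
        return np.convolve(x, np.ones(n) / n, mode="valid")

    rng = np.random.default_rng(0)
    t = np.arange(180)                                    # 15 years of months
    t_air = 0.02 * t / 12 + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 180)
    toa_flux = 2.0 * t_air + rng.normal(0, 1.0, 180)      # true slope = 2 by construction

    f_anom = moving_average(deseasonalize(toa_flux))
    t_anom = moving_average(deseasonalize(t_air))

    # The feedback parameter is the regression slope of flux on temperature.
    slope, _ = np.polyfit(t_anom, f_anom, 1)
    print(f"feedback parameter ~ {slope:.2f} W m-2 K-1")
    ```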

  13. Continuous inventories and the components of change

    Treesearch

    Francis A. Roesch

    2004-01-01

    The consequences of conducting a continuous inventory that utilizes measurements on overlapping temporal intervals of varying length on compatible estimation systems for the components of growth are explored. The time interpenetrating sample design of the USDA Forest Service Forest Inventory and Analysis Program is used as an example. I show why estimation of the...

  14. Event-based estimation of water budget components using the network of multi-sensor capacitance probes

    USDA-ARS?s Scientific Manuscript database

    A time-scale-free approach was developed for estimation of water fluxes at boundaries of monitoring soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...

  15. Temporal Treatment of a Thermal Response for Defect Depth Estimation

    NASA Technical Reports Server (NTRS)

    Plotnikov, Y. A.; Winfree, W. P.

    2004-01-01

    Transient thermography, which employs pulsed surface heating of an inspected component followed by acquisition of the thermal decay stage, is gaining wider acceptance because it is non-contact and rapid. Flaws in the component's material may induce a thermal contrast in surface thermograms. An important issue in transient thermography is estimating the depth of a subsurface flaw from the thermal response. This improves the quantitative ability of the thermal evaluation: from one scan it is possible to locate regions of anomalous thickness (caused by corrosion) and estimate the implications of the flaw for the integrity of the structure. Our research focuses on thick composite aircraft components. A long square heating pulse and an observation period of several minutes are required to receive an adequate thermal response from such a component. Application of various time-related informative parameters of the thermal response for depth estimation is discussed. A three-dimensional finite difference model of heat propagation in solids in Cartesian coordinates is used to simulate the thermographic process. Typical physical properties of polymer-graphite composites are assumed for the model.
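
    As a hedged illustration of one such time-related parameter: in a one-dimensional approximation the peak-contrast time is often taken to scale with the square of the defect depth, t_peak ∝ z²/α, so depth can be inverted once a calibration constant is fixed from reference defects. The constants below are placeholders, not measured values.

    ```python
    # Rule-of-thumb depth inversion from peak-contrast time; all constants hypothetical.
    import numpy as np

    alpha = 4.0e-7   # thermal diffusivity of a graphite-polymer composite, m^2/s (typical order)
    C = 1.0          # calibration constant from reference specimens (assumed)

    def depth_from_peak_contrast(t_peak_s):
        """Invert t_peak ~ C * z**2 / alpha for the defect depth z (m)."""
        return np.sqrt(alpha * t_peak_s / C)

    for t_peak in (5.0, 20.0, 80.0):   # seconds
        z_mm = 1e3 * depth_from_peak_contrast(t_peak)
        print(f"t_peak = {t_peak:5.1f} s -> estimated depth ~ {z_mm:.2f} mm")
    ```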

  16. Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits

    PubMed Central

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.

    2013-01-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753

  17. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    PubMed

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  18. National Costs Of The Medical Liability System

    PubMed Central

    Mello, Michelle M.; Chandra, Amitabh; Gawande, Atul A.; Studdert, David M.

    2011-01-01

    Concerns about reducing the rate of growth of health expenditures have reignited interest in medical liability reforms and their potential to save money by reducing the practice of defensive medicine. It is not easy to estimate the costs of the medical liability system, however. This article identifies the various components of liability system costs, generates national estimates for each component, and discusses the level of evidence available to support the estimates. Overall annual medical liability system costs, including defensive medicine, are estimated to be $55.6 billion in 2008 dollars, or 2.4 percent of total health care spending. PMID:20820010

  19. Procedures for estimating confidence intervals for selected method performance parameters.

    PubMed

    McClure, F D; Lee, J K

    2001-01-01

    Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), the reproducibility variance (σR² = σL² + σr²), the laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.
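
    For the repeatability variance alone, the textbook chi-square interval gives the flavor of such procedures (a sketch with illustrative numbers; the paper's procedures for σL² and the ratios are more involved).

    ```python
    # Standard chi-square CI for a variance: if s2 estimates sigma_r^2 with nu
    # degrees of freedom, a (1 - alpha) CI is
    # [nu*s2/chi2(1 - alpha/2, nu), nu*s2/chi2(alpha/2, nu)].
    from scipy.stats import chi2

    s2 = 0.25      # observed repeatability variance (illustrative)
    nu = 18        # degrees of freedom, e.g. sum of (n_i - 1) over laboratories
    alpha = 0.05

    lower = nu * s2 / chi2.ppf(1 - alpha / 2, nu)
    upper = nu * s2 / chi2.ppf(alpha / 2, nu)
    print(f"95% CI for sigma_r^2: ({lower:.3f}, {upper:.3f})")
    ```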

  20. Hydrogeologic setting and preliminary estimates of hydrologic components for Bull Run Lake and the Bull Run Lake drainage basin, Multnomah and Clackamas counties, Oregon

    USGS Publications Warehouse

    Snyder, Daniel T.; Brownell, Dorie L.

    1996-01-01

    Suggestions for further study include (1) evaluation of the surface-runoff component of inflow to the lake; (2) use of a cross-sectional ground-water flow model to estimate ground-water inflow, outflow, and storage; (3) additional data collection to reduce the uncertainties of the hydrologic components that have large relative uncertainties; and (4) determination of long-term trends for a wide range of climatic and hydrologic conditions.

  1. Autonomous Component Health Management with Failed Component Detection, Identification, and Avoidance

    NASA Technical Reports Server (NTRS)

    Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.

    2004-01-01

    This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is autonomous and adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter, which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation, with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that form an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
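
    A greatly simplified sketch of one ingredient of such a scheme, not ACHM itself: a fixed-gain Kalman filter whose update gates each redundant sensor by its innovation, so a failed sensor is detected and excluded from the state estimate. The plant, gain, and thresholds are placeholders.

    ```python
    # Innovation-gated fixed-gain filter sketch; all matrices are illustrative.
    import numpy as np

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])              # simple two-state plant model
    H = np.array([[1.0, 0.0],
                  [1.0, 0.0]])              # dual-redundant sensing of state 0
    K = np.array([[0.3, 0.3],
                  [0.1, 0.1]])              # fixed Kalman gain (precomputed offline)
    sigma, gate = 0.05, 4.0                 # nominal sensor noise std and gate width

    def filter_step(x_hat, z):
        x_pred = A @ x_hat                  # model-based prediction
        innov = z - H @ x_pred              # per-sensor innovations
        healthy = np.abs(innov) < gate * sigma
        innov = np.where(healthy, innov, 0.0)   # avoid failed sensors in the update
        return x_pred + K @ innov, healthy

    x_hat = np.zeros(2)
    x_hat, healthy = filter_step(x_hat, np.array([0.02, 5.0]))  # sensor 2 hard-over
    print(x_hat, healthy)                   # healthy -> [ True  False ]
    ```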

  2. Estimating dead wood during national forest inventories: a review of inventory methodologies and suggestions for harmonization.

    PubMed

    Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran

    2009-10-01

    Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

  3. Gas Composition Sensing Using Carbon Nanotube Arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing; Meyyappan, Meyya

    2012-01-01

    This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.
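
    The final identification step can be pictured as a nearest-threshold lookup; the sketch below uses placeholder voltages, not calibrated values.

    ```python
    # Toy gas identification by closest known breakdown threshold; voltages are
    # placeholders, not calibrated data for any real sensor.
    candidate_thresholds_v = {"N2": 420.0, "Ar": 290.0, "He": 185.0, "Xe": 240.0}

    def identify_gas(measured_v, tol_v=15.0):
        """Return the candidate gas whose threshold is nearest, within tolerance."""
        gas, ref = min(candidate_thresholds_v.items(),
                       key=lambda kv: abs(kv[1] - measured_v))
        return gas if abs(ref - measured_v) <= tol_v else None

    print(identify_gas(295.0))   # -> 'Ar'
    print(identify_gas(600.0))   # -> None (no candidate within tolerance)
    ```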

  4. Dynamic Strain Measurements on Automotive and Aeronautic Composite Components by Means of Embedded Fiber Bragg Grating Sensors.

    PubMed

    Lamberti, Alfredo; Chiesura, Gabriele; Luyckx, Geert; Degrieck, Joris; Kaufmann, Markus; Vanlanduit, Steve

    2015-10-26

    The measurement of the internal deformations occurring in real-life composite components is a very challenging task, especially for those components that are rather difficult to access. Optical fiber sensors can overcome such a problem, since they can be embedded in the composite materials and serve as in situ sensors. In this article, embedded optical fiber Bragg grating (FBG) sensors are used to analyze the vibration characteristics of two real-life composite components. The first component is a carbon fiber-reinforced polymer automotive control arm; the second is a glass fiber-reinforced polymer aeronautic hinge arm. The modal parameters of both components were estimated by processing the FBG signals with two interrogation techniques: the maximum detection and fast phase correlation algorithms were employed for the demodulation of the FBG signals; the Peak-Picking and PolyMax techniques were instead used for the parameter estimation. To validate the FBG outcomes, reference measurements were performed by means of a laser Doppler vibrometer. The analysis of the results showed that the FBG sensing capabilities were enhanced when the recently-introduced fast phase correlation algorithm was combined with the state-of-the-art PolyMax estimator curve fitting method. In this case, the FBGs provided the most accurate results, i.e. it was possible to fully characterize the vibration behavior of both composite components. When using more traditional interrogation algorithms (maximum detection) and modal parameter estimation techniques (Peak-Picking), some of the modes were not successfully identified.

  5. Development of Jet Noise Power Spectral Laws

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2011-01-01

    High-quality jet noise spectral data measured at the Aero-Acoustic Propulsion Laboratory (AAPL) at NASA Glenn is used to develop jet noise scaling laws. A FORTRAN algorithm was written that provides detailed spectral prediction of component jet noise at user-specified conditions. The model generates quick estimates of the jet mixing noise and the broadband shock-associated noise (BBSN) in single-stream, axisymmetric jets within a wide range of nozzle operating conditions. Shock noise is emitted when supersonic jets exit a nozzle at imperfectly expanded conditions. A successful scaling of the BBSN allows for this noise component to be predicted in both convergent and convergent-divergent nozzles. Configurations considered in this study consisted of convergent and convergent-divergent nozzles. Velocity exponents for the jet mixing noise were evaluated as a function of observer angle and jet temperature. Similar intensity laws were developed for the broadband shock-associated noise in supersonic jets. A computer program called sJet was developed that provides a quick estimate of component noise in single-stream jets at a wide range of operating conditions. A number of features have been incorporated into the data bank and subsequent scaling in order to improve jet noise predictions. Measurements have been converted to a lossless format. Set points have been carefully selected to minimize the instability-related noise at small aft angles. Regression parameters have been scrutinized for error bounds at each angle. Screech-related amplification noise has been kept to a minimum to ensure that the velocity exponents for the jet mixing noise remain free of amplifications. A shock-noise-intensity scaling has been developed independent of the nozzle design point. The computer program provides detailed narrow-band spectral predictions for component noise (mixing noise and shock-associated noise), as well as the total noise. Although the methodology is confined to single streams, efforts are underway to generate a data bank and algorithm applicable to dual-stream jets. Shock-associated noise in high-powered jets such as military aircraft can benefit from these predictions.

  6. Deep-Sea Biodiversity in the Mediterranean Sea: The Known, the Unknown, and the Unknowable

    PubMed Central

    Danovaro, Roberto; Company, Joan Batista; Corinaldesi, Cinzia; D'Onghia, Gianfranco; Galil, Bella; Gambi, Cristina; Gooday, Andrew J.; Lampadariou, Nikolaos; Luna, Gian Marco; Morigi, Caterina; Olu, Karine; Polymenakou, Paraskevi; Ramirez-Llodra, Eva; Sabbatini, Anna; Sardà, Francesc; Sibuet, Myriam; Tselepides, Anastasios

    2010-01-01

    Deep-sea ecosystems represent the largest biome of the global biosphere, but knowledge of their biodiversity is still scant. The Mediterranean basin has been proposed as a hot spot of terrestrial and coastal marine biodiversity but has been supposed to be impoverished of deep-sea species richness. We summarized all available information on benthic biodiversity (Prokaryotes, Foraminifera, Meiofauna, Macrofauna, and Megafauna) in different deep-sea ecosystems of the Mediterranean Sea (200 to more than 4,000 m depth), including open slopes, deep basins, canyons, cold seeps, seamounts, deep-water corals and deep-hypersaline anoxic basins and analyzed overall longitudinal and bathymetric patterns. We show that in contrast to what was expected from the sharp decrease in organic carbon fluxes and reduced faunal abundance, the deep-sea biodiversity of both the eastern and the western basins of the Mediterranean Sea is similarly high. All of the biodiversity components, except Bacteria and Archaea, displayed a decreasing pattern with increasing water depth, but to a different extent for each component. Unlike patterns observed for faunal abundance, highest negative values of the slopes of the biodiversity patterns were observed for Meiofauna, followed by Macrofauna and Megafauna. Comparison of the biodiversity associated with open slopes, deep basins, canyons, and deep-water corals showed that the deep basins were the least diverse. Rarefaction curves allowed us to estimate the expected number of species for each benthic component in different bathymetric ranges. A large fraction of exclusive species was associated with each specific habitat or ecosystem. Thus, each deep-sea ecosystem contributes significantly to overall biodiversity. From theoretical extrapolations we estimate that the overall deep-sea Mediterranean biodiversity (excluding prokaryotes) reaches approximately 2805 species of which about 66% is still undiscovered. Among the biotic components investigated (Prokaryotes excluded), most of the unknown species are within the phylum Nematoda, followed by Foraminifera, but an important fraction of macrofaunal and megafaunal species also remains unknown. Data reported here provide new insights into the patterns of biodiversity in the deep-sea Mediterranean and new clues for future investigations aimed at identifying the factors controlling and threatening deep-sea biodiversity. PMID:20689848

  7. Deep-sea biodiversity in the Mediterranean Sea: the known, the unknown, and the unknowable.

    PubMed

    Danovaro, Roberto; Company, Joan Batista; Corinaldesi, Cinzia; D'Onghia, Gianfranco; Galil, Bella; Gambi, Cristina; Gooday, Andrew J; Lampadariou, Nikolaos; Luna, Gian Marco; Morigi, Caterina; Olu, Karine; Polymenakou, Paraskevi; Ramirez-Llodra, Eva; Sabbatini, Anna; Sardà, Francesc; Sibuet, Myriam; Tselepides, Anastasios

    2010-08-02

    Deep-sea ecosystems represent the largest biome of the global biosphere, but knowledge of their biodiversity is still scant. The Mediterranean basin has been proposed as a hot spot of terrestrial and coastal marine biodiversity but has been supposed to be impoverished of deep-sea species richness. We summarized all available information on benthic biodiversity (Prokaryotes, Foraminifera, Meiofauna, Macrofauna, and Megafauna) in different deep-sea ecosystems of the Mediterranean Sea (200 to more than 4,000 m depth), including open slopes, deep basins, canyons, cold seeps, seamounts, deep-water corals and deep-hypersaline anoxic basins and analyzed overall longitudinal and bathymetric patterns. We show that in contrast to what was expected from the sharp decrease in organic carbon fluxes and reduced faunal abundance, the deep-sea biodiversity of both the eastern and the western basins of the Mediterranean Sea is similarly high. All of the biodiversity components, except Bacteria and Archaea, displayed a decreasing pattern with increasing water depth, but to a different extent for each component. Unlike patterns observed for faunal abundance, highest negative values of the slopes of the biodiversity patterns were observed for Meiofauna, followed by Macrofauna and Megafauna. Comparison of the biodiversity associated with open slopes, deep basins, canyons, and deep-water corals showed that the deep basins were the least diverse. Rarefaction curves allowed us to estimate the expected number of species for each benthic component in different bathymetric ranges. A large fraction of exclusive species was associated with each specific habitat or ecosystem. Thus, each deep-sea ecosystem contributes significantly to overall biodiversity. From theoretical extrapolations we estimate that the overall deep-sea Mediterranean biodiversity (excluding prokaryotes) reaches approximately 2805 species of which about 66% is still undiscovered. Among the biotic components investigated (Prokaryotes excluded), most of the unknown species are within the phylum Nematoda, followed by Foraminifera, but an important fraction of macrofaunal and megafaunal species also remains unknown. Data reported here provide new insights into the patterns of biodiversity in the deep-sea Mediterranean and new clues for future investigations aimed at identifying the factors controlling and threatening deep-sea biodiversity.

  8. Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence

    PubMed Central

    Claggett, B.; Lagakos, S.W.; Wang, R.

    2011-01-01

    Summary Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF Estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF Estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904
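
    For context, the classical cross-sectional ("snapshot") estimator that the augmented design builds on can be written as follows; the notation is assumed for illustration, and the AF estimator of this paper is precisely designed to avoid the mean window period appearing here.

    ```latex
    % Classical snapshot estimator (for context; notation assumed, not from the paper)
    \[
      \hat{\lambda} \;=\; \frac{N_{+/-}}{N_{-}\,\hat{\mu}},
    \]
    % N_{+/-}: subjects positive on the sensitive test but negative on the
    % less-sensitive test; N_{-}: subjects negative on the sensitive test;
    % \hat{\mu}: estimated mean window period between the two tests' detectability.
    ```

    Subjects who remain permanently nonreactive on the less-sensitive test inflate N_{+/-}, which is the bias the augmented design and the AF estimator address.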

  9. Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.

    PubMed

    Claggett, B; Lagakos, S W; Wang, R

    2012-03-01

    Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up.

  10. Probiotics and Other Key Determinants of Dietary Oxalate Absorption

    PubMed Central

    Liebman, Michael; Al-Wahsh, Ismail A.

    2011-01-01

    Oxalate is a common component of many foods of plant origin, including nuts, fruits, vegetables, grains, and legumes, and is typically present as a salt of oxalic acid. Because virtually all absorbed oxalic acid is excreted in the urine and hyperoxaluria is known to be a considerable risk factor for urolithiasis, it is important to understand the factors that have the potential to alter the efficiency of oxalate absorption. Oxalate bioavailability, a term that has been used to refer to that portion of food-derived oxalate that is absorbed from the gastrointestinal tract (GIT), is estimated to range from 2 to 15% for different foods. Oxalate bioavailability appears to be decreased by concomitant food ingestion due to interactions between oxalate and coingested food components that likely result in less oxalic acid remaining in a soluble form. There is a lack of consensus in the literature as to whether efficiency of oxalate absorption is dependent on the proportion of total dietary oxalate that is in a soluble form. However, studies that directly compared foods of varying soluble oxalate contents have generally supported the proposition that the amount of soluble oxalate in food is an important determinant of oxalate bioavailability. Oxalate degradation by oxalate-degrading bacteria within the GIT is another key factor that could affect oxalate absorption and degree of oxaluria. Studies that have assessed the efficacy of oral ingestion of probiotics that provide bacteria with oxalate-degrading capacity have led to promising but generally mixed results, and this remains a fertile area for future studies. PMID:22332057

  11. Continuity of Genetic and Environmental Influences on Cognition across the Life Span: A Meta-Analysis of Longitudinal Twin and Adoption Studies

    PubMed Central

    Tucker-Drob, Elliot M.; Briley, Daniel A.

    2014-01-01

    The longitudinal rank-order stability of cognitive ability increases dramatically over the lifespan. Multiple theoretical perspectives have proposed that genetic and/or environmental mechanisms underlie the longitudinal stability of cognition, and developmental trends therein. However, the patterns of stability of genetic and environmental influences on cognition over the lifespan largely remain poorly understood. We searched for longitudinal studies of cognition that reported raw genetically-informative longitudinal correlations or parameter estimates from longitudinal behavior genetic models. We identified 150 combinations of time points and measures from 15 independent longitudinal samples. In total, longitudinal data came from 4,538 monozygotic twin pairs raised together, 7,777 dizygotic twin pairs raised together, 34 monozygotic twin pairs raised apart, 78 dizygotic twin pairs raised apart, 141 adoptive sibling pairs, and 143 non-adoptive sibling pairs, ranging in age from infancy through late adulthood. At all ages, cross-time genetic correlations and shared environmental correlations were substantially larger than cross-time nonshared environmental correlations. Cross-time correlations for genetic and shared environmental components were low during early childhood, increased sharply over child development, and remained relatively high from adolescence through late adulthood. Cross-time correlations for nonshared environmental components were low across childhood and increased gradually to moderate magnitudes in adulthood. Increasing phenotypic stability over child development was almost entirely mediated by genetic factors. Time-based decay of genetic and shared environmental stability was more pronounced earlier in child development. Results are interpreted in reference to theories of gene-environment interaction and correlation. PMID:24611582

  12. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
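
    A minimal sketch of the data-generating step of such a Monte Carlo comparison (random-intercept logistic data with few clusters); the fitted models would then come from the packages under study (glmer, xtlogit, NLMIXED, BUGS, and so on), and all parameter values here are illustrative.

    ```python
    # Generate clustered binary outcomes with a cluster-level random intercept.
    import numpy as np

    rng = np.random.default_rng(1)
    n_clusters, n_per_cluster = 5, 50
    beta0, beta1, sigma_u = -1.0, 0.5, 0.8             # fixed effects, random-intercept SD

    u = rng.normal(0.0, sigma_u, n_clusters)           # cluster random intercepts
    x = rng.normal(size=(n_clusters, n_per_cluster))   # subject-level covariate
    eta = beta0 + beta1 * x + u[:, None]               # linear predictor
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))    # Bernoulli outcomes

    print(y.shape, y.mean())                           # (5, 50) clustered binary responses
    ```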

  13. Resolved Structure of the Arp 220 Nuclei at λ ≈ 3 mm

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kazushi; Aalto, Susanne; Barcos-Muñoz, Loreto; Costagliola, Francesco; Evans, Aaron S.; Harada, Nanase; Martín, Sergio; Wiedner, Martina; Wilner, David

    2017-11-01

    We analyze the 3 mm emission of the ultraluminous infrared galaxy Arp 220 for the spatially resolved structure and the spectral properties of the merger nuclei. ALMA archival data at ~0.″05 resolution are used for extensive visibility fitting and deep imaging of the continuum emission. The data are fitted well by two concentric components for each nucleus, such as two Gaussians or one Gaussian plus one exponential disk. The larger components in the individual nuclei are similar in shape and extent, ~100-150 pc, to the centimeter-wave emission due to supernovae. They are therefore identified with the known starburst nuclear disks. The smaller components in both nuclei have sizes of a few tens of parsecs and peak brightness temperatures (T_b) more than twice as high as those in previous single-Gaussian fitting. They correspond to the dust emission that we find centrally concentrated in both nuclei by subtracting the plasma emission measured at 33 GHz. The dust emission in the western nucleus is found to have a peak T_b ≈ 530 K and an FWHM of about 20 pc. This component is estimated to have a bolometric luminosity on the order of 10^12.5 L⊙ and a 20 pc scale luminosity surface density of 10^15.5 L⊙ kpc⁻². A luminous active galactic nucleus is a plausible energy source for these high values, while other explanations remain to be explored. Our continuum image also reveals a third structural component of the western nucleus—a pair of faint spurs perpendicular to the disk major axis. We attribute it to a bipolar outflow from the highly inclined (i ≈ 60°) western nuclear disk.

  14. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology — previously developed by the authors — for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain and without compromising the minimum safety requirements) or to an increase in safety by detecting a premature fault and thereby avoiding a very costly catastrophic failure.
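
    A toy sketch in the spirit of recursive Bayesian RFL updating, not the authors' framework: a grid posterior over a single growth-rate parameter is updated after each noisy NDE measurement and then converted into a remaining-life estimate. The growth model, numbers, and noise level are all illustrative.

    ```python
    # Grid-based recursive Bayesian update of a crack-growth parameter (toy model).
    import numpy as np

    grid_C = np.linspace(0.5e-3, 2.0e-3, 200)      # candidate growth rates (per cycle)
    posterior = np.ones_like(grid_C) / grid_C.size # flat prior
    a0, a_crit, sigma_m = 1.0, 10.0, 0.1           # initial/critical crack size (mm), meas. noise

    def crack_size(C, n_cycles):
        return a0 * np.exp(C * n_cycles)           # simplified exponential growth model

    # Recursive Bayes update after each (cycles, measured size) pair from NDE.
    for n_cycles, a_meas in [(200, 1.15), (400, 1.42), (600, 1.70)]:
        lik = np.exp(-0.5 * ((a_meas - crack_size(grid_C, n_cycles)) / sigma_m) ** 2)
        posterior *= lik
        posterior /= posterior.sum()

    C_hat = np.sum(grid_C * posterior)             # posterior mean growth rate
    rfl = np.log(a_crit / 1.70) / C_hat            # cycles left from the last measurement
    print(f"posterior mean C = {C_hat:.2e}, estimated RFL ~ {rfl:.0f} cycles")
    ```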

  15. Associations between microvascular function and short-term exposure to traffic-related air pollution and particulate matter oxidative potential.

    PubMed

    Zhang, Xian; Staimer, Norbert; Tjoa, Tomas; Gillen, Daniel L; Schauer, James J; Shafer, Martin M; Hasheminassab, Sina; Pakbin, Payam; Longhurst, John; Sioutas, Constantinos; Delfino, Ralph J

    2016-07-26

    Short-term exposure to ambient air pollution has been associated with acute increases in cardiovascular hospitalization and mortality. However, causative chemical components and underlying pathophysiological mechanisms remain to be clarified. We hypothesized that endothelial dysfunction would be associated with mobile-source (traffic) air pollution and that pollutant components with higher oxidative potential to generate reactive oxygen species (ROS) would have stronger associations. We carried out a cohort panel study in 93 elderly non-smoking adults living in the Los Angeles metropolitan area, during July 2012-February 2014. Microvascular function, represented by reactive hyperemia index (RHI), was measured weekly for up to 12 weeks (N = 845). Air pollutant data included daily data from regional air-monitoring stations, five-day average PM chemical components and oxidative potential in three PM size-fractions, and weekly personal nitrogen oxides (NOx). Linear mixed-effect models estimated adjusted changes in microvascular function with exposure. RHI was inversely associated with traffic-related pollutants such as ambient PM2.5 black carbon (BC), NOx, and carbon monoxide (CO). An interquartile-range increase (1.06 μg/m³) in 5-day average BC was associated with decreased RHI: -0.093 (95% CI: -0.151, -0.035). RHI was inversely associated with other mobile-source components/tracers (polycyclic aromatic hydrocarbons, elemental carbon, and hopanes), and PM oxidative potential as quantified in two independent assays (dithiothreitol and in vitro macrophage ROS) in accumulation and ultrafine PM, and transition metals. Our findings suggest that short-term exposures to traffic-related air pollutants with high oxidative potential are major components contributing to microvascular dysfunction.
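
    A minimal sketch, assuming a long-format data frame with illustrative column names, of the kind of random-intercept linear mixed-effects model described (statsmodels standing in for the software actually used):

    ```python
    # Random-intercept mixed model of RHI on black carbon; data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(30), 8),         # repeated weekly visits
        "BC": rng.gamma(2.0, 0.5, 240),                 # 5-day avg black carbon, ug/m3
    })
    df["RHI"] = (2.0 - 0.09 * df["BC"]
                 + np.repeat(rng.normal(0, 0.2, 30), 8) # subject-level intercepts
                 + rng.normal(0, 0.3, 240))             # visit-level noise

    result = smf.mixedlm("RHI ~ BC", df, groups=df["subject"]).fit()
    print(result.params["BC"])   # change in RHI per 1 ug/m3 increase in BC
    ```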

  16. A preliminary study on postmortem interval estimation of suffocated rats by GC-MS/MS-based plasma metabolic profiling.

    PubMed

    Sato, Takako; Zaitsu, Kei; Tsuboi, Kento; Nomura, Masakatsu; Kusano, Maiko; Shima, Noriaki; Abe, Shuntaro; Ishii, Akira; Tsuchihashi, Hitoshi; Suzuki, Koichi

    2015-05-01

    Estimation of postmortem interval (PMI) is an important goal in judicial autopsy. Although many approaches can estimate PMI through physical findings and biochemical tests, accurate PMI calculation by these conventional methods remains difficult because PMI is readily affected by surrounding conditions, such as ambient temperature and humidity. In this study, Sprague-Dawley (SD) rats (10 weeks) were sacrificed by suffocation, and blood was collected by dissection at various time intervals (0, 3, 6, 12, 24, and 48 h; n = 6) after death. A total of 70 endogenous metabolites were detected in plasma by gas chromatography-tandem mass spectrometry (GC-MS/MS). Each time group was separated from the others on the principal component analysis (PCA) score plot, suggesting that the various endogenous metabolites changed with time after death. To build a PMI prediction model, a partial least squares (or projection to latent structures, PLS) regression model was constructed using the levels of significantly different metabolites determined by variable importance in the projection (VIP) score and the Kruskal-Wallis test (P < 0.05). The constructed PLS regression model successfully predicted each PMI and was further validated with an independent validation set (n = 3). In conclusion, plasma metabolic profiling demonstrated its ability to successfully estimate PMI under a given set of conditions. This result can be considered a first step toward using metabolomics methods in future forensic casework.
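
    A compact sketch of the modelling step, with scikit-learn standing in for the software actually used; the simulated matrix mimics 70 metabolites drifting with PMI and is purely illustrative.

    ```python
    # PLS regression of PMI on metabolite levels; data are simulated, not the study's.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    pmi = np.repeat([0, 3, 6, 12, 24, 48], 6).astype(float)   # hours, n = 6 per group
    X = rng.normal(size=(36, 70)) + 0.05 * pmi[:, None]       # 70 metabolites, drift with PMI

    pls = PLSRegression(n_components=3).fit(X, pmi)
    pmi_pred = pls.predict(X[:3])                             # predict a held-back subset
    print(pmi_pred.ravel())
    ```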

  17. Model identification of signal transduction networks from data using a state regulator problem.

    PubMed

    Gadkar, K G; Varner, J; Doyle, F J

    2005-03-01

    Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, development of useful models from the available knowledge of the system and experimental observations still remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and the reaction rates of the network using the available measurements. The full set of the estimates is utilised for model parameter identification for the network of known topology. An a priori model complexity test that indicates the feasibility of performance of the proposed algorithm is developed. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and quality of information in the measurement set on the performance of the identified model are described.

  18. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    NASA Astrophysics Data System (ADS)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.

  19. Boosting multi-state models.

    PubMed

    Reulen, Holger; Kneib, Thomas

    2016-04-01

    One important goal in multi-state modelling is to explore information about conditional transition-type-specific hazard rate functions by estimating the influencing effects of explanatory variables. This may be performed using single transition-type-specific models if these covariate effects are assumed to be different across transition-types. To investigate whether this assumption holds or whether one of the effects is equal across several transition-types (a cross-transition-type effect), a combined model has to be applied, for instance with the use of a stratified partial likelihood formulation. Here, prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about the ineffectiveness of transition-type-specific or cross-transition-type effects. As a consequence, data-driven variable selection is an important task: a large number of estimable effects has to be taken into account if joint modelling of all transition-types is performed. A related but subsequent task is model choice: is an effect estimated satisfactorily under the assumption of linearity, or does the true underlying relationship deviate strongly from linearity? This article introduces component-wise Functional Gradient Descent Boosting (boosting, for short) for multi-state models, an approach performing unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that the features and advantages of boosting introduced and illustrated in classical regression scenarios remain present in the transfer to multi-state models. As a consequence, boosting provides an effective means to answer questions about the ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
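
    The component-wise principle is easiest to see with squared-error loss: each iteration fits every candidate base learner to the current residual but updates only the best one, which is what performs the implicit variable selection. A minimal sketch follows (the multi-state version replaces the squared loss with the model's likelihood):

    ```python
    # Component-wise L2 boosting with simple linear base learners.
    import numpy as np

    def componentwise_l2_boost(X, y, n_iter=200, nu=0.1):
        """Each iteration updates only the single best-fitting covariate."""
        offset = y.mean()
        resid = y - offset
        coef = np.zeros(X.shape[1])
        for _ in range(n_iter):
            b = X.T @ resid / (X ** 2).sum(axis=0)          # per-covariate LS slopes
            sse = ((resid[:, None] - X * b) ** 2).sum(axis=0)
            j = int(np.argmin(sse))                         # best component this round
            coef[j] += nu * b[j]                            # small step on that one only
            resid -= nu * b[j] * X[:, j]
        return offset, coef

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 10))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0.0, 0.5, 200)
    offset, coef = componentwise_l2_boost(X, y)
    print(np.round(coef, 2))   # coefficient mass concentrates on covariates 0 and 3
    ```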

  20. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
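
    A condensed sketch of the feature-tracking and outlier-rejection stages using common OpenCV calls (a monocular stand-in; the paper's stereo objective function and circle matching are not reproduced here). prev_gray, cur_gray, and the camera matrix K are assumed inputs.

    ```python
    # KLT tracking plus RANSAC-based outlier rejection; monocular simplification.
    import cv2
    import numpy as np

    def ego_motion_step(prev_gray, cur_gray, K):
        # Detect corners in the previous frame and track them with KLT optical flow.
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=7)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
        good0, good1 = p0[st == 1], p1[st == 1]
        # RANSAC inside findEssentialMat rejects mismatches and moving points.
        E, mask = cv2.findEssentialMat(good0, good1, K,
                                       method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)
        return R, t   # rotation and (unit-scale) translation between frames
    ```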

  1. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508

  2. Estimating times of surgeries with two component procedures: comparison of the lognormal and normal models.

    PubMed

    Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E

    2003-01-01

    Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPTs and 46,322 surgical cases with only one CPT from a large teaching hospital to determine if the distribution of dual-procedure surgery times fits more closely a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
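    The paper's central comparison, testing normality of raw versus log-transformed durations with the Shapiro-Wilk statistic, is easy to reproduce in outline. A minimal sketch on synthetic durations (all values hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical surgery durations (minutes) for one two-CPT procedure class.
times = rng.lognormal(mean=np.log(120), sigma=0.35, size=80)

# Shapiro-Wilk on raw vs. log-transformed durations:
w_norm, p_norm = stats.shapiro(times)                 # tests the normal model
w_lognorm, p_lognorm = stats.shapiro(np.log(times))   # tests the lognormal model
print(f"normal model p={p_norm:.3f}, lognormal model p={p_lognorm:.3f}")

# Point estimates under the lognormal model: median and mean duration.
mu, sigma = np.log(times).mean(), np.log(times).std(ddof=1)
print(f"median={np.exp(mu):.1f} min, mean={np.exp(mu + sigma**2 / 2):.1f} min")
```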

  3. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines to within ±5% to ±10%. The method is based on correlations of component weight and design features of 29 data-base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-to-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.
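    The flavor of such correlation-based estimation can be sketched with a single power-law fit of engine weight against one design driver. The real method correlates component weights with many design features across the 29 data-base engines; the numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical data base: corrected mass flow (kg/s) vs. bare engine weight (kg).
mdot = np.array([45, 60, 90, 120, 180, 250, 400, 600])
weight = np.array([420, 530, 760, 950, 1350, 1750, 2600, 3700])

# Fit W = a * mdot^b on log-log axes (one-driver stand-in for the correlations).
b, log_a = np.polyfit(np.log(mdot), np.log(weight), 1)
a = np.exp(log_a)
predict = lambda m: a * m ** b
print(f"W ≈ {a:.1f} * mdot^{b:.2f}; W(300 kg/s) ≈ {predict(300):.0f} kg")
```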

  4. Kalman Filter for Spinning Spacecraft Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Sedlak, Joseph E.

    2008-01-01

    This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.

  5. Uncertainty assessment method for the Cs-137 fallout inventory and penetration depth.

    PubMed

    Papadakos, G N; Karangelos, D J; Petropoulos, N P; Anagnostakis, M J; Hinis, E P; Simopoulos, S E

    2017-05-01

    In the presented study, soil samples were collected in 2007 at 20 different locations of the Greek terrain, both from the surface and from depths down to 26 cm. Sampling locations were selected primarily from areas where high levels of 137Cs deposition after the Chernobyl accident had already been identified by the Nuclear Engineering Laboratory of the National Technical University of Athens during and after 1986. At one location of relatively higher deposition, soil core samples were collected following a 60 m by 60 m Cartesian grid with a 20 m node-to-node distance. Single or paired core samples were also collected from the remaining 19 locations. Sample measurements and analysis were used to estimate the 137Cs inventory and the corresponding depth migration, twenty years after the deposition on Greek terrain. Based on these data, the uncertainty components of the whole sampling-to-results procedure were investigated. A cause-and-effect assessment process was used to apply the law of error propagation and demonstrate that the dominating significant component of the combined uncertainty is that due to the spatial variability of the contemporary (2007) 137Cs inventory. A secondary, yet also significant, component was identified to be the activity measurement process itself. Other, less significant, uncertainty parameters were the sampling methods, the variation of the soil field density with depth and the preparation of samples for measurement. The sampling grid experiment allowed for the quantitative evaluation of the uncertainty due to spatial variability, aided by semivariance analysis. A denser, optimized grid could return more accurate values for this component, but with a significantly elevated laboratory cost in terms of both human and material resources. Using the collected data, and for the case of a single-core soil sampling under a well-defined, quality-assured sampling methodology, the uncertainty component due to spatial variability was evaluated at about 19% for the 137Cs inventory and up to 34% for the 137Cs penetration depth. Based on the presented results and also on related literature, it is argued that such high uncertainties should be anticipated for single-core samplings conducted using a similar methodology and employed as 137Cs inventory and penetration depth estimators. Copyright © 2017 Elsevier Ltd. All rights reserved.
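    The propagation step, combining independent relative uncertainty components in quadrature into a combined uncertainty, can be sketched as follows. Only the 19% spatial-variability figure comes from the study; the remaining component values are illustrative placeholders.

```python
import numpy as np

# Relative uncertainty components for the 137Cs inventory (fractions).
# 0.19 (spatial variability) is from the study; the others are invented.
components = {
    "spatial_variability": 0.19,
    "activity_measurement": 0.06,
    "sampling_method": 0.03,
    "soil_density_profile": 0.02,
    "sample_preparation": 0.02,
}

# Law of propagation of uncertainty for independent relative components:
u_combined = np.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative uncertainty ≈ {u_combined:.1%}")
for name, u in components.items():
    print(f"  {name}: {(u / u_combined) ** 2:.1%} of the variance")
```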

  6. Advanced Ground Systems Maintenance Prognostics Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.

    2015-01-01

    The project implements prognostics capabilities to predict when a component, system or subsystem will no longer meet desired functional or performance criteria, called the end of life. The capability also provides an assessment of the remaining useful life of a hardware component. The project enables the delivery of system health advisories to ground system operators. This project will use modeling techniques and algorithms to assess components' health and predict remaining life for such components. The prognostics capability being developed will be used: during the design phase and during pre/post operations, to conduct planning and analysis of system design, maintenance and logistics plans, and system/mission operations plans; and during real-time operations, to monitor changes to components' health and assess their impact on operations. This capability will be interfaced to Ground Operations' command and control system as a part of the AGSM project to help assure system availability and mission success. The initial modeling effort for this capability will be developed for Liquid Oxygen ground loading applications.
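    A deliberately simple sketch of the remaining-useful-life idea: fit a trend to a degrading health index and extrapolate it to a failure threshold. This is a generic stand-in, not the project's actual models, and the data are synthetic.

```python
import numpy as np

def remaining_useful_life(t, health, threshold):
    """Extrapolate a linearly degrading health index to the failure threshold
    and return the estimated time remaining (an intentionally simple stand-in
    for model-based prognostics algorithms)."""
    slope, intercept = np.polyfit(t, health, 1)
    if slope >= 0:
        return np.inf  # no degradation trend detected
    t_fail = (threshold - intercept) / slope
    return max(t_fail - t[-1], 0.0)

# Hypothetical sensor-derived health index drifting toward a 0.2 threshold.
t = np.arange(0.0, 50.0)
health = 1.0 - 0.01 * t + 0.02 * np.random.default_rng(3).standard_normal(t.size)
print(f"estimated RUL ≈ {remaining_useful_life(t, health, 0.2):.0f} hours")
```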

  7. Molecular coordination of Staphylococcus aureus cell division

    PubMed Central

    Cotterell, Bryony E; Walther, Christa G; Fenn, Samuel J; Grein, Fabian; Wollman, Adam JM; Leake, Mark C; Olivier, Nicolas; Cadby, Ashley; Mesnage, Stéphane; Jones, Simon

    2018-01-01

    The bacterial cell wall is essential for viability, but despite its ability to withstand internal turgor must remain dynamic to permit growth and division. Peptidoglycan is the major cell wall structural polymer, whose synthesis requires multiple interacting components. The human pathogen Staphylococcus aureus is a prolate spheroid that divides in three orthogonal planes. Here, we have integrated cellular morphology during division with molecular level resolution imaging of peptidoglycan synthesis and the components responsible. Synthesis occurs across the developing septal surface in a diffuse pattern, a necessity of the observed septal geometry, that is matched by variegated division component distribution. Synthesis continues after septal annulus completion, where the core division component FtsZ remains. The novel molecular level information requires re-evaluation of the growth and division processes leading to a new conceptual model, whereby the cell cycle is expedited by a set of functionally connected but not regularly distributed components. PMID:29465397

  8. Training the public health workforce at the National School of Public Health: meeting Africa's needs.

    PubMed

    Mokwena, Kebogile; Mokgatle-Nthabu, Mathilda; Madiba, Sphiwe; Lewis, Helen; Ntuli-Ngcobo, Busi

    2008-01-01

    The inadequate number of trained public health personnel in Africa remains a challenge. In sub-Saharan Africa, the estimated workforce of public health practitioners is 1.3% of the world's health workforce, addressing 25% of the world's burden of disease. To address this gap, the National School of Public Health at the then Medical University of Southern Africa created an innovative approach using distance learning components to deliver its public health programmes. Compulsory classroom teaching is limited to four two-week blocks. Combining mainly online components with traditional classroom curricula reduced limitations caused by geographical distances. At the same time, the curriculum was structured to contextualize continental health issues in both course work and research specific to students' needs. The approach used by the National School of Public Health allows for a steady increase in the number of public health personnel in Africa. Because of the flexible e-learning components and African-specific research projects, graduates from 16 African countries could benefit from this programme. An evaluation showed that such programmes need to constantly motivate participants to reduce student dropout rates, and that computer literacy needs to be a prerequisite for entry into the programme. Short certificate courses in relevant public health areas would be beneficial in the African context. This programme could be replicated in other regions of the continent.

  10. Viscoplastic crack initiation and propagation in crosslinked UHMWPE from clinically relevant notches up to 0.5mm radius.

    PubMed

    Sirimamilla, P Abhiram; Rimnac, Clare M; Furmanski, Jevan

    2018-01-01

    Highly crosslinked UHMWPE is now the material of choice for hard-on-soft bearing couples in total joint replacements. However, the fracture resistance of the polymer remains a design concern for increased longevity of the components in vivo. Fracture research utilizing the traditional linear elastic fracture mechanics (LEFM) or elastic-plastic fracture mechanics (EPFM) approach has not yielded a definite failure criterion for UHMWPE. Therefore, an advanced viscous fracture model has been applied to various notched compact tension specimen geometries to estimate the fracture resistance of the polymer. Two generic crosslinked UHMWPE formulations (remelted 65 kGy and remelted 100 kGy) were analyzed in this study using notched test specimens with three different notch radii under static loading conditions. The results suggest that the viscous fracture model can be applied to crosslinked UHMWPE and a single value of critical energy governs crack initiation and propagation in the material. To our knowledge, this is one of the first studies to implement a mechanistic approach to study crack initiation and propagation in UHMWPE for a range of clinically relevant stress-concentration geometries. It is believed that a combination of structural analysis of components and material parameter quantification is a path to effective failure prediction in UHMWPE total joint replacement components, though additional testing is needed to verify the rigor of this approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Simulation and Characterization of a Miniaturized Scanning Electron Microscope

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica A.; Jerman, Gregory A.; Medley, Stephanie; Gregory, Don; Abbott, Terry O.; Sampson, Allen R.

    2011-01-01

    A miniaturized Scanning Electron Microscope (mSEM) for in-situ lunar investigations is being developed at NASA Marshall Space Flight Center with colleagues from the University of Alabama in Huntsville (UAH), Advanced Research Systems (ARS), the University of Tennessee in Knoxville (UTK) and Case Western Reserve University (CWRU). This effort focuses on the characterization of individual components of the mSEM and simulation of the complete system. SEMs can provide information on the size, shape, morphology and chemical composition of lunar regolith. Understanding these basic properties will allow us to better estimate the challenges associated with In-Situ Resource Utilization and to improve our basic science knowledge of the lunar surface (either precluding the need for sample return or allowing differentiation of unique samples to be returned to Earth). The main components of the mSEM prototype include: a cold field emission electron gun (CFEG), focusing lens, deflection/scanning system and backscatter electron detector. Of these, the electron gun development is of particular importance as it dictates much of the design of the remaining components. A CFEG was chosen for use with the lunar mSEM as its emission does not depend on heating of the tungsten emitter (lower power), it offers a long operation lifetime, is orders of magnitude brighter than tungsten hairpin guns, has a small source size and exhibits low beam energy spread.

  12. Separation of pedogenic and lithogenic components of magnetic susceptibility in the Chinese loess/palaeosol sequence as determined by the CBD procedure and a mixing analysis

    NASA Astrophysics Data System (ADS)

    Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.

    2000-08-01

    Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
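    The essence of a two-end-member mixing analysis can be sketched in a few lines: bulk susceptibility is expressed as a blend of an unweathered loess end member and a fully developed palaeosol end member, and the pedogenic part is read off from the mixing fraction. The end-member values below are illustrative, not the paper's measurements, and the paper's analysis is more involved.

```python
import numpy as np

# Two-end-member mixing (illustrative values, 10^-8 m^3/kg):
chi_loess_end = 25.0    # least-weathered loess (essentially lithogenic)
chi_soil_end = 220.0    # strongest palaeosol (lithogenic + pedogenic)

def split_components(chi_total):
    """Return (pedogenic, lithogenic) parts of a bulk susceptibility value
    under the simple linear mixing assumption."""
    f = np.clip((chi_total - chi_loess_end) / (chi_soil_end - chi_loess_end), 0, 1)
    pedogenic = f * (chi_soil_end - chi_loess_end)
    return pedogenic, chi_total - pedogenic

for chi in (40.0, 110.0, 190.0):
    ped, lith = split_components(chi)
    print(f"chi={chi:6.1f}: pedogenic={ped:6.1f}, lithogenic={lith:6.1f}")
```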

  13. Shore erosion as a sediment source to the tidal Potomac River, Maryland and Virginia

    USGS Publications Warehouse

    Miller, Andrew J.

    1987-01-01

    The shoreline of the tidal Potomac River attained its present form as a result of the Holocene episode of sea-level rise; the drowned margins of the system are modified by wave activity in the shore zone and by slope processes on banks steepened by basal-wave erosion. Shore erosion leaves residual sand and gravel in shallow water and transports silt and clay offshore to form a measurable component of the suspended-sediment load of the tidal Potomac River. Erosion rates were measured by comparing digitized historical shoreline maps and modern maps, and by comparing stereopairs of aerial photographs taken at different points in time, with the aid of an interactive computer-graphics system and a digitizing stereoplotter. Cartographic comparisons encompassed 90 percent of the study reach and spanned periods of 38 to 109 years, with most measurements spanning at least 84 years. Photogrammetric comparisons encompassed 49 percent of the study reach and spanned 16 to 40 years. Field monitoring of erosion rates and processes at two sites, Swan Point Neck, Maryland, and Mason Neck, Virginia, spanned periods of 10 to 18 months. Estimated average recession rates of shoreline in the estuary, based on cartographic and photogrammetric measurements, were 0.42 to 0.52 meter per annum (Virginia shore) and 0.31 to 0.41 meter per annum (Maryland shore). Average recession rates of shoreline in the tidal river and transition zone were close to 0.15 meter per annum. Estimated average volume-erosion rates along the estuary were 1.20 to 1.87 cubic meters per meter of shoreline per annum (Virginia shore) and 0.56 to 0.73 cubic meter per meter of shoreline per annum (Maryland shore); estimated average volume-erosion rates along the shores of the tidal river and transition zone were 0.55 to 0.74 cubic meter per meter of shoreline per annum. Estimated total sediment contributed to the tidal Potomac River by shore erosion was 0.375 × 10^6 to 0.565 × 10^6 metric tons per annum; of this, the estimated amount of silt and clay ranged from 0.153 × 10^6 to 0.226 × 10^6 metric tons per annum. Between 49 and 60 percent of the sediment was derived from the Virginia shore of the estuary; 14 to 18 percent was derived from the Maryland shore of the estuary; and 23 to 36 percent was derived from the shores of the tidal river and transition zone. The adjusted modern estimate of sediment eroded from the shoreline of the estuary is about 55 percent of the historical estimate. Sediment eroded from the shoreline accounted for about 6 to 9 percent of the estimated total suspended load for the tidal Potomac River during water years 1979 through 1981 and for about 11 to 18 percent of the suspended load delivered to the estuary during the same period. Annual suspended-sediment loads derived from upland source areas fluctuated by about an order of magnitude during the 3 years of record (1979-81); shore erosion may have been a more important component of the sediment budget during periods of low flow than during periods of higher discharges. Prior to massive land clearance during the historical period of intensive agriculture in the 18th and 19th centuries, annual sediment loads from upland sources probably were smaller than they are at present; under these circumstances shore erosion would have been an important component of the sediment budget. At current rates of sediment supply, relative sea-level rise, and shoreline recession, the landward parts of the tidal Potomac River are rapidly being filled by sediment.
If these rates were to remain constant over time, and no sediment were to escape into Chesapeake Bay, the tidal river and transition zone would be filled within 600 years, and the total system would be filled in less than 4,000 years. Given a slower rate of sediment supply, comparable to the measured rate during the low-flow 1981 water year, the volume of the tidal Potomac River might remain relatively stable or even increase over time. Changes in rates

  14. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates due to noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase-reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
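    The building block of the proposed decomposition, a stochastic oscillator written as a damped 2-D rotation driven by noise, can be sketched as follows. In the paper the oscillator parameters are fitted by the empirical Bayes method and the phase is read off the smoothed states; this toy simulation omits the filtering step and the values are illustrative.

```python
import numpy as np

def oscillator_transition(freq_hz, dt, damp):
    """One oscillation component as a Gaussian linear state-space model:
    x_t = damp * R(2*pi*freq*dt) @ x_{t-1} + w_t. The noise w_t makes the
    instantaneous frequency fluctuate (random frequency modulation)."""
    th = 2.0 * np.pi * freq_hz * dt
    return damp * np.array([[np.cos(th), -np.sin(th)],
                            [np.sin(th),  np.cos(th)]])

# Simulate a single 10 Hz component observed in noise.
rng = np.random.default_rng(0)
dt, n = 1e-3, 2000
F = oscillator_transition(10.0, dt, 0.995)
x, xs, ys = np.zeros(2), [], []
for _ in range(n):
    x = F @ x + 0.1 * rng.standard_normal(2)       # state noise drives the cycle
    xs.append(x.copy())
    ys.append(x[0] + 0.3 * rng.standard_normal())  # noisy scalar observation
xs = np.array(xs)
phase = np.arctan2(xs[:, 1], xs[:, 0])  # component phase, as atan2 of the state
```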

  15. In Spite of Indeterminacy Many Common Factor Score Estimates Yield an Identical Reproduced Covariance Matrix

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2007-01-01

    It was investigated whether commonly used factor score estimates lead to the same reproduced covariance matrix of observed variables. This was achieved by means of Schonemann and Steiger's (1976) regression component analysis, since it is possible to compute the reproduced covariance matrices of the regression components corresponding to different…

  16. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    USDA-ARS?s Scientific Manuscript database

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  17. [Detection of quadratic phase coupling between EEG signal components by nonparametric and parametric methods of bispectral analysis].

    PubMed

    Schmidt, K; Witte, H

    1999-11-01

    Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis, capable of detecting interrelations between individual signal components, has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as true physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.
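    For orientation, a direct (non-parametric) bispectrum estimate can be written in a few lines; quadratic phase coupling appears as a peak at the frequency pair whose phases are locked. A sketch on a synthetic signal (parameters arbitrary; the study itself used standard MATLAB routines):

```python
import numpy as np

def direct_bispectrum(x, nfft=128):
    """Direct (non-parametric) bispectrum estimate by segment averaging:
    B(f1, f2) = mean over segments of X(f1) * X(f2) * conj(X(f1 + f2))."""
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    half = nfft // 2
    B = np.zeros((half, half), dtype=complex)
    win = np.hanning(nfft)
    for s in segs:
        X = np.fft.fft((s - s.mean()) * win)
        for f1 in range(half):
            B[f1, :] += X[f1] * X[:half] * np.conj(X[f1:f1 + half])
    return np.abs(B) / len(segs)

# Synthetic EEG-like signal: components at bins 8 and 16 plus their sum at
# bin 24 with locked phases -> a coupling peak near bins (16, 8) / (8, 16).
n, t = 1 << 14, np.arange(1 << 14)
p1, p2 = 1.0, 2.2
x = (np.cos(2 * np.pi * 8 / 128 * t + p1) + np.cos(2 * np.pi * 16 / 128 * t + p2)
     + 0.5 * np.cos(2 * np.pi * 24 / 128 * t + p1 + p2))
x += 0.5 * np.random.default_rng(1).standard_normal(n)
B = direct_bispectrum(x)
print("coupling peak at bins:", np.unravel_index(B.argmax(), B.shape))
```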

  18. Physicochemical assessment criteria for high-voltage pulse capacitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darian, L. A., E-mail: LDarian@rambler.ru; Lam, L. Kh.

    In the paper, the applicability of decomposition products of the internal insulation of high-voltage pulse capacitors is considered (such decomposition products arise from insulation aging). Decomposition products of the internal insulation of high-voltage pulse capacitors can be used to evaluate their quality when in operation and in service. There have been three generations of markers of insulation aging, as in the case of power transformers. The area of applicability of markers of insulation aging for power transformers has been studied, and this area can be extended to high-voltage pulse capacitors. The research reveals that there is a correlation between the components and quantities of first-generation markers of aging (gaseous decomposition products of insulation) dissolved in the insulating liquid and the remaining life of high-voltage pulse capacitors. The application of markers of aging to evaluate the remaining service life of high-voltage pulse capacitors is a promising direction of research, because the design of high-voltage pulse capacitors keeps the markers of insulation aging stable. It is necessary to continue gathering statistical data concerning the development of first-generation markers of aging. One should also carry out research aimed at estimating the remaining life of capacitors using markers of the second and third generations.

  19. Zika virus infections: An overview of current scenario.

    PubMed

    Dasti, Javid Iqbal

    2016-07-01

    Zika virus (ZIKV) was discovered more than half a century ago, but it has recently gained unprecedented attention from the global health community. Until 2007, only 14 cases of human ZIKV infection had been reported around the globe; during the current outbreak, estimated cases mounted to approximately 1.5 million in Brazil alone, the virus disseminated to wider South American territories, and travel-associated ZIKV infections were reported in the USA, Europe and, recently, China. ZIKV infection remains asymptomatic in approximately 80% of individuals, and no antiviral treatments are recommended. Yet neurological complications associated with the infection, such as infant microcephaly and Guillain-Barré syndrome, are a major cause of concern. Although based on small numbers of cases, existing evidence strongly supports an exclusive link between viral infection and the observed neurological complications; however, much work remains to assign exact numbers of complications caused by ZIKV. In its structural attributes, ZIKV shows remarkable resemblance to dengue virus and West Nile virus. Although the genomes of different ZIKV strains have already been decoded, the role of the viral components in the infection process, and particularly in the pathogenesis of the disease, remains widely unclear. In vulnerable areas, the most viable strategy to ensure public health safety is vector control and enhanced public awareness about the transmission of the disease. Copyright © 2016 Hainan Medical College. Production and hosting by Elsevier B.V. All rights reserved.

  20. Bias and robustness of uncertainty components estimates in transient climate projections

    NASA Astrophysics Data System (ADS)

    Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal

    2016-04-01

    A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources, however, raises different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are, however, biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult, if not impossible, to separate. Estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20, depending on the projection lead time. In all cases, the bias is unlikely to be negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. For model uncertainty or response-to-uncertainty ratio estimates, the robustness conversely ranges from low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and should ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision). Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision). Robustness of uncertainty components estimates in climate projections. J. Climate.
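    The bias mechanism described above can be reproduced in a few lines: with few members, the empirical variance of the chain means absorbs part of the internal variability, and a one-way-ANOVA correction (subtracting the within-chain variance divided by the number of members) removes it. The numbers below are synthetic, chosen to mimic the critical configuration (3 members, internal variability dominant):

```python
import numpy as np

rng = np.random.default_rng(7)
n_chains, n_members = 8, 3
true_model_var = 1.0      # spread of true climate responses across chains
internal_var = 4.0        # internal variability (dominant, as in the text)

naive, corrected = [], []
for _ in range(20000):
    responses = rng.normal(0.0, np.sqrt(true_model_var), n_chains)
    runs = responses[:, None] + rng.normal(0.0, np.sqrt(internal_var),
                                           (n_chains, n_members))
    means = runs.mean(axis=1)
    s2_between = means.var(ddof=1)              # naive model-uncertainty estimate
    s2_within = runs.var(axis=1, ddof=1).mean()
    naive.append(s2_between)
    corrected.append(s2_between - s2_within / n_members)  # ANOVA-style correction

# The naive estimator averages ~ true + internal/n_members (>100% overestimation
# here), while the corrected one is centred on the true model-uncertainty variance.
print(f"true={true_model_var}, naive mean={np.mean(naive):.2f}, "
      f"corrected mean={np.mean(corrected):.2f}")
```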

  1. Simulation of Microdamage and Evaluation of Remaining Life of Steam Conduit Components from New-Generation Refractory Steel 10Kh9MF-Sh

    NASA Astrophysics Data System (ADS)

    Gladshtein, V. I.

    2018-03-01

    The effects of microdamage on the remaining life of high-temperature components of steam conduits from high-chromium steel 10Kh9MF-Sh and low-alloy steel 12Kh1M1F are compared. To simulate the microdamage, specimens with a circular notch and different relative diameters are fabricated. Specimens with a notch simulating the highest degree of microdamage and smooth specimens are tested for long-term strength. The coefficient of the remaining life of a conduit is computed for the range of relative damage of practical interest.

  2. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

    The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
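    The scenario-tree arithmetic behind such component sensitivities can be sketched compactly: each inspected unit detects infection with a small probability (design prevalence times branch probabilities times test sensitivity), and the component sensitivity is one minus the probability that every unit misses. The branch probabilities and unit counts below are illustrative, not the study's inputs.

```python
import numpy as np

def component_sensitivity(n_units, p_branch, se_unit, design_prevalence):
    """Probability that a component detects >= 1 infected animal. p_branch
    folds the tree's branch probabilities (age class, granuloma present,
    sample submitted, ...) into a single factor -- a simplification of the
    full scenario tree."""
    p_detect_unit = design_prevalence * p_branch * se_unit
    return 1.0 - (1.0 - p_detect_unit) ** n_units

def system_sensitivity(component_ses):
    """Overall SSSe: one minus the probability that every component misses."""
    return 1.0 - np.prod([1.0 - se for se in component_ses])

prev = 0.0000028                       # 0.00028% design prevalence, as above
slaughter = component_sensitivity(3_000_000, 0.5, 0.7, prev)
export = component_sensitivity(10_000, 1.0, 0.8, prev)
print(f"slaughter Se = {slaughter:.3f}, export Se = {export:.3f}, "
      f"SSSe = {system_sensitivity([slaughter, export]):.3f}")
```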

  3. Analysis of Performance of Jet Engine from Characteristics of Components II: Interaction of Components as Determined from Engine Operation

    NASA Technical Reports Server (NTRS)

    Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl

    1949-01-01

    In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.

  4. Comprehensive investigation into historical pipeline construction costs and engineering economic analysis of Alaska in-state gas pipeline

    NASA Astrophysics Data System (ADS)

    Rui, Zhenhua

    This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have differing degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs. Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distributions to the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible under three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
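    The Monte Carlo step, assigning triangular distributions to cost and revenue parameters and simulating the resulting economics, can be sketched as follows. The magnitudes, discount rate and revenue model below are invented for illustration and are not the dissertation's inputs.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Triangular (min, mode, max) draws for the main capital cost components,
# in US$ million (illustrative magnitudes only).
material = rng.triangular(1200, 1500, 2100, n)
labor    = rng.triangular(1600, 2000, 3200, n)   # widest spread: largest overruns
misc     = rng.triangular( 700,  900, 1300, n)
row      = rng.triangular( 150,  200,  320, n)
capex = material + labor + misc + row

# Net annual revenue for one throughput scenario (e.g. 750 mmcfd), also triangular.
annual_net = rng.triangular(350, 500, 700, n)
years, rate = 25, 0.08
annuity = (1 - (1 + rate) ** -years) / rate      # present-value factor
npv = annual_net * annuity - capex

print(f"mean NPV = {npv.mean():,.0f} M$, P(NPV > 0) = {(npv > 0).mean():.1%}")
```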

  5. Paleogeodesy of the Southern Santa Cruz Mountains Frontal Thrusts, Silicon Valley, CA

    NASA Astrophysics Data System (ADS)

    Aron, F.; Johnstone, S. A.; Mavrommatis, A. P.; Sare, R.; Hilley, G. E.

    2015-12-01

    We present a method to infer long-term fault slip rate distributions using topography by coupling a three-dimensional elastic boundary element model with a geomorphic incision rule. In particular, we used a 10-m-resolution digital elevation model (DEM) to calculate channel steepness (k_sn) throughout the actively deforming southern Santa Cruz Mountains in Central California. We then used these values with a power-law incision rule and the Poly3D code to estimate slip rates over seismogenic, kilometer-scale thrust faults accommodating differential uplift of the relief throughout geologic time. Implicit in such an analysis is the assumption that the topographic surface remains unchanged over time as rock is uplifted by slip on the underlying structures. The fault geometries within the area are defined based on surface mapping, as well as active and passive geophysical imaging. Fault elements are assumed to be traction-free in shear (i.e., frictionless), while opening along them is prohibited. The free parameters in the inversion include the components of the remote strain-rate tensor (ε_ij) and the bedrock resistance to channel incision (K), which is allowed to vary according to the mapped distribution of geologic units exposed at the surface. The nonlinear components of the geomorphic model required the use of a Markov chain Monte Carlo method, which simulated the posterior density of the components of the remote strain-rate tensor and values of K for the different mapped geologic units. Interestingly, posterior probability distributions of ε_ij and K fall well within the broad range of reported values, suggesting that the joint use of elastic boundary element and geomorphic models may have utility in estimating long-term fault slip-rate distributions. Given an adequate DEM, geologic mapping, and fault models, the proposed paleogeodetic method could be applied to other crustal faults with geological and morphological expressions of long-term uplift.

  6. Cost-effectiveness evaluation of bovine tuberculosis surveillance in wildlife in France (Sylvatub system) using scenario trees.

    PubMed

    Rivière, Julie; Le Strat, Yann; Hendrikx, Pascal; Dufour, Barbara

    2017-01-01

    Bovine tuberculosis (bTB) is a common disease in cattle and wildlife, with health, zoonotic and economic implications. Infected wild animals, and particularly reservoirs, could hinder eradication of bTB from cattle populations, which could have an important impact on international cattle trade. Therefore, surveillance of bTB in wildlife is of particular importance to better understand the epidemiological role of wild species and to adapt the control measures. In France, a bTB surveillance system for free-ranging wildlife, the Sylvatub system, has been implemented since 2011. It relies on three surveillance components (SSCs): passive surveillance on hunted animals (EC-SSC), passive surveillance on dead or dying animals (SAGIR-SSC) and active surveillance (PSURV-SSC). The effectiveness of the Sylvatub system was previously assessed through the estimation of its sensitivity (i.e., the probability of detecting at least one case of bTB infection by each SSC, species and risk-level area). However, to globally assess the performance of a surveillance system, the measure of its sensitivity is not sufficient, as other factors such as economic or socio-economic factors could influence the effectiveness. We report here an estimation of the costs of the surveillance activities of the Sylvatub system, and of the cost-effectiveness of each surveillance component, by species and risk level, based on scenario tree modelling with the same tree structure as used for the sensitivity evaluation. The cost-effectiveness of the Sylvatub surveillance is better in higher-risk departments, due in particular to the higher probability of detecting the infection (sensitivity). Moreover, EC-SSC, which has the highest unit cost, is more efficient than the surveillance enhanced by the SAGIR-SSC, due to its better sensitivity. The calculation of the cost-effectiveness ratio shows that PSURV-SSC remains the most cost-effective surveillance component of the Sylvatub system, despite its high cost in terms of coordination, sample collection and laboratory analysis.

  8. Gender, Position of Authority, and the Risk of Depression and Posttraumatic Stress Disorder among a National Sample of U.S. Reserve Component Personnel.

    PubMed

    Cohen, Gregory H; Sampson, Laura A; Fink, David S; Wang, Jing; Russell, Dale; Gifford, Robert; Fullerton, Carol; Ursano, Robert; Galea, Sandro

    2016-01-01

    Recent U.S. military operations in Iraq and Afghanistan have seen dramatic increases in the proportion of women serving and the breadth of their occupational roles. General population studies suggest that women, compared with men, and persons with lower, as compared with higher, social position may be at greater risk of posttraumatic stress disorder (PTSD) and depression. However, these relations remain unclear in military populations. Accordingly, we aimed to estimate the effects of 1) gender, 2) military authority (i.e., rank), and 3) the interaction of gender and military authority on a) risk of most recent deployment-related PTSD and b) risk of depression since most recent deployment. Using a nationally representative sample of 1,024 previously deployed Reserve Component personnel surveyed in 2010, we constructed multivariable logistic regression models to estimate effects of interest. Weighted multivariable logistic regression models demonstrated no statistically significant associations between gender or authority, and either PTSD or depression. Interaction models demonstrated multiplicative statistical interaction between gender and authority for PTSD (beta = -2.37; p = .01), and depression (beta = -1.21; p = .057). Predicted probabilities of PTSD and depression, respectively, were lowest in male officers (0.06, 0.09), followed by male enlisted (0.07, 0.14), female enlisted (0.07, 0.15), and female officers (0.30, 0.25). Female officers in the Reserve Component may be at greatest risk for PTSD and depression after deployment, relative to their male and enlisted counterparts, and this relation is not explained by deployment trauma exposure. Future studies may fruitfully examine whether social support, family responsibilities peri-deployment, or contradictory class status may explain these findings. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.
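    The interaction analysis described above amounts to a logistic regression with a gender-by-authority product term. A self-contained sketch on synthetic data follows; the coefficients used to simulate the outcome are invented, merely shaped to echo the reported crossover pattern, and are not estimates from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1024
female = rng.binomial(1, 0.2, n)
officer = rng.binomial(1, 0.3, n)
# Simulated log-odds with a product term so that female officers carry the
# highest risk (illustrative values only).
logit_p = -2.6 + 0.1 * female + 0.0 * officer + 1.8 * female * officer
ptsd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"ptsd": ptsd, "female": female, "officer": officer})

# Multiplicative interaction model on the log-odds scale.
fit = smf.logit("ptsd ~ female * officer", data=df).fit(disp=False)
print(fit.params)

# Predicted probabilities for the four gender-by-authority cells.
cells = pd.DataFrame({"female": [0, 0, 1, 1], "officer": [0, 1, 0, 1]})
print(cells.assign(p=fit.predict(cells)))
```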

  9. Dairy farm cost efficiency.

    PubMed

    Tauer, L W; Mishra, A K

    2006-12-01

    A stochastic cost equation was estimated for US dairy farms using national data from the production year 2000 to determine how farmers might reduce their cost of production. The cost of producing a unit of milk was decomposed into separate frontier (efficient) and inefficiency components, with both components estimated as a function of management and causation variables. Variables were entered as impacting the frontier component as well as the efficiency component of the stochastic cost curve, because a priori both components could be affected. One factor that had an impact on the cost frontier was the number of hours per day the milking facility is used. Using the milking facility for more hours per day decreased frontier costs; however, inefficiency increased with increased hours of milking facility use. Thus, farmers can decrease costs with increased utilization of the milking facility, but only if they are efficient in this strategy. Parlors compared with stanchions used for milking did not decrease frontier costs, but decreased costs because of increased efficiency, as did the use of a nutritionist. Use of rotational grazing decreased frontier costs but also increased inefficiency. Older farmers were less efficient.

  10. A real time neural net estimator of fatigue life

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1990-01-01

    A neural network architecture is proposed to estimate, in real time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two-hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high-frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines.
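    In outline, the estimator is a small feedforward regression network trained on load-sequence/damage pairs. The sketch below uses a Miner's-rule-style surrogate to generate training damage and scikit-learn in place of the original implementation; the S-N exponent and layer sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training set: random load blocks -> fatigue damage from a
# Miner's-rule surrogate (damage per cycle ~ (amplitude / S_ref)^k).
rng = np.random.default_rng(2)
loads = rng.uniform(0.2, 1.0, size=(5000, 16))   # 16-step load sequences
damage = ((loads / 1.0) ** 5).sum(axis=1)        # hypothetical S-N exponent k = 5

net = MLPRegressor(hidden_layer_sizes=(32, 32),  # two hidden layers, as above
                   activation="tanh", max_iter=2000, random_state=0)
net.fit(loads[:4000], damage[:4000])
print("R^2 on held-out sequences:", round(net.score(loads[4000:], damage[4000:]), 3))
```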

  11. Orientation of three-component geophones in the San Andreas Fault observatory at depth Pilot Hole, Parkfield, California

    USGS Publications Warehouse

    Oye, V.; Ellsworth, W.L.

    2005-01-01

    To identify and constrain the target zone for the planned SAFOD Main Hole through the San Andreas Fault (SAF) near Parkfield, California, a 32-level three-component (3C) geophone string was installed in the Pilot Hole (PH) to monitor and improve the locations of nearby earthquakes. The orientation of the 3C geophones is essential for this purpose, because ray directions from sources may be determined directly from the 3D particle motion for both P and S waves. Due to the complex local velocity structure, rays traced from explosions and earthquakes to the PH show strong ray bending. Observed azimuths are obtained from P-wave polarization analysis, and ray tracing provides theoretical estimates of the incoming wave field. The differences between the theoretical and the observed angles define the calibration azimuths. To investigate the process of orientation with respect to the assumed velocity model, we compare calibration azimuths derived from both a homogeneous and a 3D velocity model. Uncertainties in the relative orientation between the geophone levels were also estimated for a cluster of 36 earthquakes that was not used in the orientation process. The comparison between the homogeneous and the 3D velocity model shows that there are only minor changes in these relative orientations. In contrast, the absolute orientations, with respect to global North, were significantly improved by application of the 3D model. The average data residual decreased from 13° to 7°, supporting the importance of an accurate velocity model. We attribute the remaining residuals to methodological uncertainties, noise, and errors in the velocity model.
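    The P-wave polarization analysis used for the observed azimuths can be sketched via the dominant eigenvector of the three-component covariance matrix. A toy example with a synthetic arrival (all angles and noise levels invented):

```python
import numpy as np

def pwave_azimuth(north, east, vertical):
    """Back-azimuth from P-wave particle motion: the eigenvector of the 3C
    covariance matrix with the largest eigenvalue gives the polarization
    direction; its horizontal projection yields the azimuth. The 180-degree
    ambiguity is resolved here by forcing the vertical component downward."""
    C = np.cov(np.vstack([north, east, vertical]))
    w, V = np.linalg.eigh(C)
    p = V[:, -1]                        # principal polarization direction
    if p[2] > 0:
        p = -p
    return np.degrees(np.arctan2(p[1], p[0])) % 360.0

# Synthetic P arrival: azimuth 40 deg, incidence 30 deg, plus noise.
rng = np.random.default_rng(4)
s = np.sin(2 * np.pi * 12 * np.linspace(0, 0.25, 200))
az, inc = np.radians(40.0), np.radians(30.0)
north = s * np.sin(inc) * np.cos(az) + 0.05 * rng.standard_normal(200)
east  = s * np.sin(inc) * np.sin(az) + 0.05 * rng.standard_normal(200)
vert  = -s * np.cos(inc) + 0.05 * rng.standard_normal(200)
print(f"estimated azimuth ≈ {pwave_azimuth(north, east, vert):.1f} deg")
```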

  12. A Model of Compound Heterozygous, Loss-of-Function Alleles Is Broadly Consistent with Observations from Complex-Disease GWAS Datasets

    PubMed Central

    Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.

    2017-01-01

    The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin- and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232

  13. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
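
    The two-stage estimation described for the second method can be sketched as follows, assuming simplified Lambert and Torrance-Sparrow forms and illustrative variable names (this is our reading of the procedure, not the authors' code): fit a diffuse coefficient by least squares, subtract it, then fit the log-transformed specular lobe, which is linear in the squared half-angle.

```python
import numpy as np

def fit_reflectance(I, cos_i, cos_v, alpha):
    """I: measured intensities; cos_i, cos_v: cosines of incidence and
    viewing angles; alpha: angle (rad) between normal and half-vector.
    Returns diffuse coefficient k_d, specular coefficient k_s, and
    surface roughness sigma."""
    # Stage 1: treat all reflection as diffuse; least-squares fit of k_d
    k_d = np.sum(I * cos_i) / np.sum(cos_i ** 2)
    # Stage 2: attribute the positive residual to the specular component
    I_spec = np.clip(I - k_d * cos_i, 1e-9, None)
    # Log-transformed Torrance-Sparrow lobe:
    #   ln(I_s * cos_v) = ln(k_s) - alpha^2 / (2 sigma^2)
    slope, intercept = np.polyfit(alpha ** 2, np.log(I_spec * cos_v), 1)
    return k_d, np.exp(intercept), np.sqrt(-1.0 / (2.0 * slope))
```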

  14. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied when the transformation parameters are large, in which case no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated within standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
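
    For orientation only, a minimal sketch of the classical linearized 7-parameter similarity fit is given below; note that the paper's WTLS formulation specifically avoids this small-angle linearization and also accounts for errors in both coordinate systems, which this ordinary least-squares sketch does not.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def helmert7_small_angle(src, dst):
    """Ordinary least squares for dst ~ (1 + s)(I + [w]x) src + t with a
    small rotation vector w; src, dst are (n, 3) matched coordinates."""
    n = src.shape[0]
    A = np.zeros((3 * n, 7))
    b = (dst - src).ravel()
    for i, X in enumerate(src):
        A[3*i:3*i+3, 0:3] = np.eye(3)   # translation t
        A[3*i:3*i+3, 3] = X             # differential scale s
        A[3*i:3*i+3, 4:7] = -skew(X)    # rotation, since w x X = -[X]x w
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p[:3], p[3], p[4:]           # t, s, w
```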

  15. Protracted exposure to fallout: the Rongelap and Utirik experience.

    PubMed

    Lessard, E T; Miltenberger, R P; Cohn, S H; Musolino, S V; Conard, R A

    1984-03-01

    From June 1946 to August 1958, the U.S. Department of Defense and the U.S. Atomic Energy Commission (AEC) conducted nuclear weapons tests in the Northern Marshall Islands. On 1 March 1954, BRAVO, an above-ground test in the Castle series, produced high levels of radioactive material, some of which subsequently fell on Rongelap and Utirik Atolls due to an unexpected wind shift. On 3 March 1954, the inhabitants of these atolls were moved out of the affected area. They later returned to Utirik in June 1954 and to Rongelap in June 1957. Comprehensive environmental and personnel radiological monitoring programs were initiated in the mid-1950s by Brookhaven National Laboratory to ensure that body burdens of the exposed Marshallese subjects remained within AEC guidelines. Their body-burden histories and calculated activity ingestion rate patterns post-return are presented along with estimates of internal committed effective dose equivalents. External exposure data are also included. In addition, relationships between body burden or urine-activity concentration and declining continuous intake were developed. The implications of these studies are: (1) the dietary intake of 137Cs was a major component contributing to the committed effective dose equivalent for the years after the initial contamination of the atolls; (2) for persons whose diet included fish, 65Zn was a major component of committed effective dose equivalent during the first years post-return; (3) a decline in the daily activity ingestion rate greater than that resulting from radioactive decay of the source was estimated for 137Cs, 65Zn, 90Sr and 60Co; (4) the relative impact of each nuclide on the estimate of committed effective dose equivalent was dependent upon the time interval between initial contamination and rehabilitation; and (5) the internal committed effective dose equivalent exceeded the external dose equivalent by a factor of 1.1 at Utirik and 1.5 at Rongelap during the rehabilitation period. Few reliable 239Pu measurements on human excreta were made. An analysis of the tentative data leads to the conclusion that a reliable estimate of committed effective dose equivalent requires further research.

  16. The development of a post-mortem interval estimation for human remains found on land in the Netherlands.

    PubMed

    Gelderman, H T; Boer, L; Naujocks, T; IJzermans, A C M; Duijst, W L J M

    2018-05-01

    The decomposition process of human remains can be used to estimate the post-mortem interval (PMI), but decomposition varies due to many factors. Temperature is believed to be the most important and can be connected to decomposition by using accumulated degree days (ADD). The aim of this research was to develop a decomposition scoring method and a formula to estimate the PMI using the developed decomposition scoring method and ADD. A decomposition scoring method and a Book of Reference (visual resource) were made. Ninety-one cases were used to develop a method to estimate the PMI. The photographs were scored using the decomposition scoring method. The temperature data were provided by the Royal Netherlands Meteorological Institute. The PMI was estimated using the total decomposition score (TDS) alone and using the TDS and ADD. The latter required an additional step, namely calculating the ADD from the finding date back until the predicted day of death. The developed decomposition scoring method had a high interrater reliability. The TDS significantly estimates the PMI (R² = 0.67 and 0.80 for indoor and outdoor bodies, respectively). When using the ADD, the R² decreased to 0.66 and 0.56. The developed decomposition scoring method is a practical method to measure decomposition for human remains found on land. The PMI can be estimated using this method, but caution is advised in cases with a long PMI. The ADD does not account for all the heat present in decomposing remains and is therefore a possible source of bias.
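
    A schematic of the two quantities involved, with entirely hypothetical numbers (the paper's fitted coefficients are not reproduced here): ADD is accumulated backwards from the finding date, and the PMI is regressed on the TDS, here in a log-linear form commonly used for decomposition data.

```python
import numpy as np

def accumulated_degree_days(daily_mean_temps, base=0.0):
    """ADD counted backwards from the finding date: cumulative sum of
    daily mean temperatures above a base temperature."""
    t = np.asarray(daily_mean_temps, dtype=float)
    return np.cumsum(np.clip(t - base, 0.0, None))

# Hypothetical calibration data: total decomposition scores and known PMIs
tds = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
pmi_days = np.array([3.0, 7.0, 15.0, 35.0, 70.0])
slope, intercept = np.polyfit(tds, np.log10(pmi_days), 1)   # log-linear fit
pmi_estimate = 10 ** (intercept + slope * 20.0)              # PMI for TDS = 20
```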

  17. Modelling the impact of mulching the soil with plant remains on water regime formation, crop yield and energy costs in agricultural ecosystems

    NASA Astrophysics Data System (ADS)

    Gusev, Yeugeniy M.; Dzhogan, Larisa Y.; Nasonova, Olga N.

    2018-02-01

    The model MULCH, developed by authors previously for simulating the formation of water regime in an agricultural field covered by straw mulch layer, has been used for the comparative evaluation of the efficiency of four agricultural cultivation technologies, which are usually used for wheat production in different regions of Russia and Ukraine. It simulates the dynamics of water budget components in a soil rooting zone at daily time step from the beginning of spring snowmelt to the beginning of the period with stable negative air temperatures. The model was designed for estimation of mulching efficiency in terms of increase in plant water supply and crop yield under climatic and soil conditions of the steppe and forest-steppe zones. It is used for studying the mulching effect on some characteristics of water regime and yield of winter wheat growing at specific sites located in semi-arid and arid regions of the steppe and forest-steppe zones of the eastern and southern parts of the East-European (Russian) plain. In addition, a previously developed technique for estimating the energetic efficiency of various agricultural technologies with accounting for their impact on changes in soil energy is applied for the comparative evaluation of the efficiency of four agricultural cultivation technologies, which are usually used for wheat production in different regions of the steppe and forest-steppe zones of the European Russia: (1) moldboard tillage of soil without irrigation, (2) moldboard tillage of soil with irrigation, (3) subsurface cultivation, and (4) subsurface cultivation with mulching the soil with plant remains.

  18. Satellite angular velocity estimation based on star images and optical flow techniques.

    PubMed

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-09-25

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than that of the other two components.
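
    The least-squares step can be sketched as follows, assuming star directions have already been mapped to unit vectors in the sensor frame (a minimal sketch, not the authors' full pipeline with calibration and error budget): for inertially fixed stars viewed from a frame rotating at angular velocity w, the direction kinematics give du/dt = -w x u, which is linear in w.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_velocity_from_flow(dirs, flows):
    """Least-squares angular velocity from star unit vectors and their
    measured time derivatives: du/dt = -w x u = [u]x w for each star."""
    A = np.vstack([skew(u) for u in dirs])
    b = np.hstack(flows)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Synthetic check with a known rate
w_true = np.array([0.01, -0.02, 0.005])                      # rad/s
rng = np.random.default_rng(0)
dirs = [u / np.linalg.norm(u) for u in rng.normal(size=(5, 3))]
flows = [np.cross(u, w_true) for u in dirs]                  # -w x u = u x w
print(angular_velocity_from_flow(dirs, flows))               # ~ w_true
```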

  19. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    PubMed Central

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than that of the other two components. PMID:24072023

  20. Suppression of cognitive function in hyperthermia; From the viewpoint of executive and inhibitive cognitive processing

    NASA Astrophysics Data System (ADS)

    Shibasaki, Manabu; Namba, Mari; Oshiro, Misaki; Kakigi, Ryusuke; Nakata, Hiroki

    2017-03-01

    Climate change has had a widespread impact on humans and natural systems. Heat stroke is a life-threatening condition in severe environments. The execution or inhibition of decision making is critical for survival in a hot environment. We hypothesized that, even with mild heat stress, not only executive processing but also inhibitory processing may be impaired, and investigated the effectiveness of body cooling approaches on these processes using the Go/No-go task with electroencephalographic event-related potentials. Passive heat stress increased esophageal temperature (Tes) by 1.30 ± 0.24 °C and decreased cerebral perfusion and thermal comfort. Mild heat stress reduced the amplitudes of the Go-P300 component (i.e., execution) and No-go-P300 component (i.e., inhibition). Cerebral perfusion and thermal comfort recovered following face/head cooling; however, the amplitudes of the Go-P300 and No-go-P300 components remained reduced. During whole-body cooling, the amplitude of the Go-P300 component returned to the pre-heat baseline, whereas that of the No-go-P300 component remained reduced. These results suggest that local cooling of the face and head does not restore impaired cognitive processing during mild heat stress, and that response inhibition remains impaired despite the return to normothermia.

  1. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors remain high, and thus their contribution to the uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
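
    The idea of temporally correlated random error can be illustrated with an AR(1) process (parameter values below are illustrative, not the paper's): correlated annual errors barely average down over a decade, unlike independent errors.

```python
import numpy as np

# Simulate AR(1)-correlated annual errors in emission estimates and compare
# the 2-sigma uncertainty of a decadal mean against the uncorrelated case.
rng = np.random.default_rng(42)
n_years, n_sim = 10, 10000
sigma, rho = 0.5, 0.95   # error scale (Pg C/yr) and year-to-year correlation

eps = np.empty((n_sim, n_years))
eps[:, 0] = rng.normal(0, sigma, n_sim)
for t in range(1, n_years):
    # innovations scaled so the marginal standard deviation stays sigma
    eps[:, t] = rho * eps[:, t - 1] + rng.normal(0, sigma * np.sqrt(1 - rho**2), n_sim)

print("2-sigma of decadal mean, AR(1):      ", 2 * eps.mean(axis=1).std())
print("2-sigma of decadal mean, independent:", 2 * sigma / np.sqrt(n_years))
```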

  2. A Unified Model for GRB Prompt Emission from Optical to Gamma-Rays; Exploring GRBs as Standard Candles

    NASA Technical Reports Server (NTRS)

    Guiriec, S.; Kouveliotou, C.; Hartmann, D. H.; Granot, J.; Asano, K.; Meszaros, P.; Gill, R.; Gehrels, N.; McEnery, J.

    2016-01-01

    The origin of prompt emission from gamma-ray bursts (GRBs) remains an open question. Correlated prompt optical and gamma-ray emission observed in a handful of GRBs strongly suggests a common emission region, but failure to adequately fit the broadband GRB spectrum prompted the hypothesis of different emission mechanisms for the low- and high-energy radiation. We demonstrate that our multi-component model for GRB gamma-ray prompt emission provides an excellent fit to GRB 110205A from optical to gamma-ray energies. Our results show that the optical and highest-energy gamma-ray emissions have the same spatial and spectral origin, which is different from the bulk of the X-ray and softest gamma-ray radiation. Finally, our accurate redshift estimate for GRB 110205A demonstrates promise for using GRBs as cosmological standard candles.

  3. Investigating error structure of shuttle radar topography mission elevation data product

    NASA Astrophysics Data System (ADS)

    Becek, Kazimierz

    2008-08-01

    An attempt was made to experimentally assess the instrumental error component of the C-band Shuttle Radar Topography Mission (SRTM) data. This was achieved by comparing elevation data of 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about ±1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.

  4. Application of fracture mechanics and half-cycle method to the prediction of fatigue life of B-52 aircraft pylon components

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Carter, A. L.; Totton, W. W.; Ficke, J. M.

    1989-01-01

    Stress intensity levels at various parts of the NASA B-52 carrier aircraft pylon were examined for the case when the pylon store was the space shuttle solid rocket booster drop test vehicle. Eight critical stress points were selected for the pylon fatigue analysis. Using fracture mechanics and the half-cycle theory (directly or indirectly) for the calculations of fatigue-crack growth, the remaining fatigue life (number of flights left) was estimated for each critical part. It was found that the two rear hooks had relatively short fatigue lives and that the front hook had the shortest fatigue life of all the parts analyzed. The rest of the pylon parts were found to be noncritical because of their extremely long fatigue lives associated with the low operational stress levels.

  5. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    NASA Astrophysics Data System (ADS)

    Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.

    1990-07-01

    Changes in the chemical composition of mangrove ( Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.

  6. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    USGS Publications Warehouse

    Benner, R.; Hatcher, P.G.; Hedges, J.I.

    1990-01-01

    Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed. © 1990.

  7. A motion-tolerant approach for monitoring SpO2 and heart rate using photoplethysmography signal with dual frame length processing and multi-classifier fusion.

    PubMed

    Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao

    2017-12-01

    Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG correlated signal component, and propose a novel approach to extract SpO2 and HR readings from PPG signal contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
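
    A much-reduced sketch of the component-selection idea follows, using sklearn's FastICA as a stand-in for the Molgedey-Schuster ICA of the paper and a single spectral criterion in place of the multi-classifier fusion; the band limits and names are our own assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ppg_component(X, fs, hr_band=(0.7, 3.5)):
    """X: (n_samples, n_channels) PPG frame; fs: sampling rate in Hz.
    Returns the independent component whose dominant frequency falls in a
    plausible heart-rate band (0.7-3.5 Hz ~ 42-210 bpm), or None."""
    ica = FastICA(n_components=X.shape[1], random_state=0)
    S = ica.fit_transform(X)
    freqs = np.fft.rfftfreq(len(S), 1.0 / fs)
    best, best_power = None, -1.0
    for k in range(S.shape[1]):
        spec = np.abs(np.fft.rfft(S[:, k] - S[:, k].mean()))
        peak = freqs[np.argmax(spec)]
        if hr_band[0] <= peak <= hr_band[1] and spec.max() > best_power:
            best, best_power = k, spec.max()
    return S[:, best] if best is not None else None
```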

  8. Spatial and temporal changes in the Barents Sea pelagic compartment during the recent warming

    NASA Astrophysics Data System (ADS)

    Eriksen, Elena; Skjoldal, Hein Rune; Gjøsæter, Harald; Primicerio, Raul

    2017-02-01

    The Barents Sea has experienced substantial warming over the last few decades with expansion of relatively warm Atlantic water and reduction in sea ice. Based on a review of relevant literature and additional analyses, we report changes in the pelagic compartment associated with this warming using data from autumn surveys (acoustic capelin, 0-group fish, and ecosystem surveys). Biomass estimates for 25 components of the pelagic community, including macroplankton, 0-group fish, and juvenile and adult pelagic fish, were examined for spatial and temporal variation over the period 1993-2013. The estimated total biomass of the investigated pelagic compartment, not including mesozooplankton, ranged between about 6 and 30 million tonnes wet weight with an average of 17 million tonnes over the 21-year period. Krill was the dominant biomass component (63%), whereas pelagic fish (capelin, polar cod and herring) made up 26% and 0-group fish 11% of the biomass on average. The spatial distribution of biomass showed a broad-scale pattern reflecting differences in distribution of the main pelagic fishes (capelin in the north, polar cod in the east, and herring in the south) and transport of krill and 0-group fish with the Atlantic water flowing into the southern Barents Sea. Dividing the Barents Sea into six regions, the highest average biomass values were found in the Southwestern and South-Central subareas (about 4 million tonnes in each), with krill as the main component. Biomass was also high in the North-Central subarea (about 3 million tonnes), where capelin was the major contributor. The total estimated biomass of the pelagic compartment remained relatively stable during each of two main periods (before and after 2004), but increased by a factor of two from around 11 million tonnes in the first to around 23 million tonnes in the last period. The pronounced increase reflected the warming between the relatively cold 1990s and the warmer 2000s and was driven mainly by an increase in krill, presumably due to increased advection. Variable recruitment of fish had a strong influence on the variation in pelagic biomass, first as 0-group fish (including demersal species such as cod and haddock) and subsequently, over the next years, manifested as strong or weak year classes of dominant pelagic species. Associated with the warming there was also a northern or eastern extension of the distribution of several components, although the broad-scale geographical pattern of biomass distribution remained similar between the first and the last parts of the investigated period. The capelin stock, a dominant species with a substantial contribution to total biomass, experienced two collapses followed by recoveries in the 1990s and 2000s. The apparent stability in total biomass in each of the two periods (before and after 2004) reflected compensating and dampening mechanisms. In the first period, krill showed an inverse relationship with capelin, increasing when the capelin stock was low. In the second period, other fishes including juvenile herring, polar cod and blue whiting increased to fill the 'void' of the low capelin stock. The synthesis reported here provides a basis for modelling some of the key players and dominating processes and drivers of change in the ecosystem.

  9. A Centerless Circular Array Method: Extracting Maximal Information on Phase Velocities of Rayleigh Waves From Microtremor Records From a Simple Seismic Array

    NASA Astrophysics Data System (ADS)

    Cho, I.; Tada, T.; Shinozaki, Y.

    2005-12-01

    We have developed a Centerless Circular Array (CCA) method of microtremor exploration, an algorithm that enables estimation of phase velocities of Rayleigh waves by analyzing vertical-component records of microtremors obtained with an array of three or five seismic sensors placed around a circumference. Our CCA method shows a remarkably high performance in long-wavelength ranges because, unlike the frequency-wavenumber spectral method, our method does not resolve individual plane-wave components in the process of identifying phase velocities. Theoretical considerations predict that the resolving power of our CCA method in long-wavelength ranges depends upon the SN ratio, or the ratio of power of the propagating components to that of the non-propagating components (incoherent noise) contained in the records from the seismic array. The applicability of our CCA method to small-sized arrays on the order of several meters in radius has already been confirmed in our earlier work (Cho et al., 2004). We have deployed circular seismic arrays of different sizes at test sites in Japan where the underground structure is well documented through geophysical exploration, and have applied our CCA method to microtremor records to estimate phase velocities of Rayleigh waves. The estimates were then checked against "model" phase velocities derived from theoretical calculations. For arrays of 5, 25, 300 and 600 meters in radii, the estimated and model phase velocities demonstrated fine agreement within a broad wavelength range extending from a little larger than 3r (r: the array radius) up to at least 40r, 14r, 42r and 9r, respectively. This demonstrates the applicability of our CCA method to arrays on the order of several to several hundreds of meters in radii, and also illustrates, in a typical way, the markedly high performance of our CCA method in long-wavelength ranges. We have also invented a mathematical model that enables evaluation of the SN ratio in a given microtremor field, and have applied it to real data. Theory predicts that our CCA method underestimates the phase velocities when noise is present. Using the evaluated SN ratio and the phase velocity dispersion curve model, we have calculated the apparent values of phase velocities which theory expects should be obtained by our CCA method in long-wavelength ranges, and have confirmed that the outcome agreed very well with the phase velocities estimated from real data. This demonstrates that the mathematical assumptions on which our CCA method relies remain valid over the wide range of wavelengths examined, and also implies that, even in the absence of a priori knowledge of the phase velocity dispersion curve, the SN ratio evaluated with our mathematical model could be used to identify the resolution limit of our CCA method in long-wavelength ranges. We have thus been able to demonstrate, on the basis of theoretical considerations and real data analysis, both the capabilities and limitations of our CCA method.

  10. Self-Estimation of Blood Alcohol Concentration: A Review

    PubMed Central

    Aston, Elizabeth R.; Liguori, Anthony

    2013-01-01

    This article reviews the history of blood alcohol concentration (BAC) estimation training, which trains drinkers to discriminate distinct BAC levels and thus avoid excessive alcohol consumption. BAC estimation training typically combines education concerning alcohol metabolism with attention to subjective internal cues associated with specific concentrations. Estimation training was originally conceived as a component of controlled drinking programs. However, dependent drinkers were unsuccessful in BAC estimation, likely due to extreme tolerance. In contrast, moderate drinkers successfully acquired this ability. A subsequent line of research translated laboratory estimation studies to naturalistic settings by studying large samples of drinkers in their preferred drinking environments. Thus far, naturalistic studies have provided mixed results regarding the most effective form of BAC feedback. BAC estimation training is important because it imparts an ability to perceive individualized impairment that may be present below the legal limit for driving. Consequently, the training can be a useful component for moderate drinkers in drunk driving prevention programs. PMID:23380489
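
    The metabolism side of such training is usually taught through Widmark's classical formula; the sketch below uses typical textbook parameter values and is not drawn from the reviewed studies.

```python
# Widmark's formula: BAC = A / (r * W) - beta * t, with A the alcohol dose
# in grams, W body weight in kg, r the body-water distribution factor
# (~0.68 for men, ~0.55 for women), and beta the elimination rate per hour.
def widmark_bac(alcohol_grams, weight_kg, hours, r=0.68, beta=0.015):
    """Estimated BAC in g/100 mL at a given time after drinking."""
    bac = alcohol_grams / (r * weight_kg * 10.0) - beta * hours
    return max(0.0, bac)

# Example: 56 g of alcohol (about four standard US drinks), 80 kg, 2 h later
print(widmark_bac(56.0, 80.0, 2.0))   # ~0.073 g/100 mL
```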

  11. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  12. Annual estimates of recharge, quick-flow runoff, and ET for the contiguous U.S. using empirical regression equations

    USGS Publications Warehouse

    Reitz, Meredith; Sanford, Ward E.; Senay, Gabriel; Cazenas, J.

    2017-01-01

    This study presents new data-driven, annual estimates of the division of precipitation into the recharge, quick-flow runoff, and evapotranspiration (ET) water budget components for 2000-2013 for the contiguous United States (CONUS). The algorithms used to produce these maps ensure water budget consistency over this broad spatial scale, with contributions from precipitation influx attributed to each component at 800 m resolution. The quick-flow runoff estimates for the contribution to the rapidly varying portion of the hydrograph are produced using data from 1,434 gaged watersheds, and depend on precipitation, soil saturated hydraulic conductivity, and surficial geology type. Evapotranspiration estimates are produced from a regression using water balance data from 679 gaged watersheds and depend on land cover, temperature, and precipitation. The quick-flow and ET estimates are combined to calculate recharge as the remainder of precipitation. The ET and recharge estimates are checked against independent field data, and the results show good agreement. Comparisons of recharge estimates with groundwater extraction data show that in 15% of the country, groundwater is being extracted at rates higher than the local recharge. These maps of the internally consistent water budget components of recharge, quick-flow runoff, and ET, being derived from and tested against data, are expected to provide reliable first-order estimates of these quantities across the CONUS, even where field measurements are sparse.
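
    The budget closure itself is simple arithmetic, sketched below with illustrative numbers: recharge is computed as the remainder of precipitation after the regression-based quick-flow and ET estimates are subtracted, which guarantees consistency by construction.

```python
import numpy as np

# Illustrative annual values in mm/yr for three grid cells (not CONUS data)
precip = np.array([900.0, 1200.0, 650.0])
quick_flow = np.array([120.0, 210.0, 60.0])   # from the runoff regression
et = np.array([600.0, 750.0, 500.0])          # from the ET regression
recharge = precip - quick_flow - et           # remainder of the budget
assert np.allclose(precip, recharge + quick_flow + et)   # consistency holds
```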

  13. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring amplitudes and phases of acoustic velocity components (AVC), that is, the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the uncertainty estimates derived. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
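
    A minimal sketch of the cross-spectral estimator follows (our reading of the approach, using standard Welch-type estimates; names and window length are illustrative): turbulence uncorrelated with the wall-pressure reference averages out of the cross-spectrum, leaving the acoustic amplitude and phase at the tone frequency.

```python
import numpy as np
from scipy.signal import csd, welch

def acoustic_velocity_component(u, p_ref, fs, f0, nperseg=4096):
    """Amplitude ratio and phase (relative to the reference) of the acoustic
    velocity component at tone frequency f0, from the cross-spectrum of a
    turbulent velocity signal u with a wall-pressure reference p_ref."""
    f, S_pu = csd(p_ref, u, fs=fs, nperseg=nperseg)
    _, S_pp = welch(p_ref, fs=fs, nperseg=nperseg)
    k = np.argmin(np.abs(f - f0))       # bin nearest the tone frequency
    H = S_pu[k] / S_pp[k]               # H1 transfer-function estimate
    return np.abs(H), np.angle(H)
```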

  14. A Secure Trust Establishment Scheme for Wireless Sensor Networks

    PubMed Central

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2014-01-01

    Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
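
    In outline (the weights and functional forms below are invented for illustration and are not the paper's scheme), the key ingredient is a misbehavior record aggregated over the whole history, so that an on-off attacker cannot rebuild trust simply by behaving well for a while.

```python
def update_trust(trust, misbehavior_now, aggregated_misbehavior,
                 alpha=0.6, penalty=0.8):
    """Toy trust update: blend current status into the running trust value,
    then apply a penalty that grows with the never-forgotten aggregate of
    past misbehavior. All parameters are hypothetical."""
    aggregated = aggregated_misbehavior + misbehavior_now   # long memory
    current = 1.0 - misbehavior_now                         # current status
    new_trust = alpha * trust + (1.0 - alpha) * current
    new_trust *= 1.0 / (1.0 + penalty * aggregated)         # history penalty
    return max(0.0, min(1.0, new_trust)), aggregated
```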

  15. Higher heritabilities for gait components than for overall gait scores may improve mobility in ducks.

    PubMed

    Duggan, Brendan M; Rae, Anne M; Clements, Dylan N; Hocking, Paul M

    2017-05-02

    Genetic progress in selection for greater body mass and meat yield in poultry has been associated with an increase in gait problems which are detrimental to productivity and welfare. The incidence of suboptimal gait in breeding flocks is controlled through the use of a visual gait score, which is a subjective assessment of walking ability of each bird. The subjective nature of the visual gait score has led to concerns over its effectiveness in reducing the incidence of suboptimal gait in poultry through breeding. The aims of this study were to assess the reliability of the current visual gait scoring system in ducks and to develop a more objective method to select for better gait. Experienced gait scorers assessed short video clips of walking ducks to estimate the reliability of the current visual gait scoring system. Kendall's coefficients of concordance between and within observers were estimated at 0.49 and 0.75, respectively. In order to develop a more objective scoring system, gait components were visually scored on more than 4000 pedigreed Pekin ducks and genetic parameters were estimated for these components. Gait components, which are a more objective measure, had heritabilities that were as good as, or better than, those of the overall visual gait score. Measurement of gait components is simpler and therefore more objective than the standard visual gait score. The recording of gait components can potentially be automated, which may increase accuracy further and may improve heritability estimates. Genetic correlations were generally low, which suggests that it is possible to use gait components to select for an overall improvement in both economic traits and gait as part of a balanced breeding programme.

  16. Organic aerosol in the summertime southeastern United States: components and their link to volatility distribution, oxidation state and hygroscopicity

    NASA Astrophysics Data System (ADS)

    Kostenidou, Evangelia; Karnezi, Eleni; Hite, James R., Jr.; Bougiatioti, Aikaterini; Cerully, Kate; Xu, Lu; Ng, Nga L.; Nenes, Athanasios; Pandis, Spyros N.

    2018-04-01

    The volatility distribution of the organic aerosol (OA) and its sources during the Southern Oxidant and Aerosol Study (SOAS; Centreville, Alabama) was constrained using measurements from an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a thermodenuder (TD). Positive matrix factorization (PMF) analysis was applied to both the ambient and thermodenuded high-resolution mass spectra, leading to four factors: more oxidized oxygenated OA (MO-OOA), less oxidized oxygenated OA (LO-OOA), an isoprene epoxydiol (IEPOX)-related factor (isoprene-OA) and biomass burning OA (BBOA). BBOA had the highest mass fraction remaining (MFR) at 100 °C, followed by the isoprene-OA, and the LO-OOA. Surprisingly, the MO-OOA evaporated the most in the TD. The estimated effective vaporization enthalpies assuming an evaporation coefficient equal to unity were 58 ± 13 kJ mol⁻¹ for the LO-OOA, 89 ± 10 kJ mol⁻¹ for the MO-OOA, 55 ± 11 kJ mol⁻¹ for the BBOA, and 63 ± 15 kJ mol⁻¹ for the isoprene-OA. The estimated volatility distribution of all factors covered a wide range including both semi-volatile and low-volatility components. BBOA had the lowest average volatility of all factors, even though it had the lowest O : C ratio among them. LO-OOA was the most volatile factor, and its high MFR was due to its low enthalpy of vaporization according to the model. The isoprene-OA factor had intermediate volatility, considerably higher than suggested by a few other studies. The analysis suggests that deducing the volatility of a factor only from its MFR could lead to erroneous conclusions. The oxygen content of the factors can be combined with their estimated volatility and hygroscopicity to provide a better view of their physical properties.

  17. The uncertainties and causes of the recent changes in global evapotranspiration from 1982 to 2010

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Dai, Aiguo

    2017-07-01

    Recent studies have shown considerable changes in terrestrial evapotranspiration (ET) since the early 1980s, but the causes of these changes remain unclear. In this study, the relative contributions of external climate forcing and internal climate variability to the recent ET changes are examined. Three datasets of global terrestrial ET and the CMIP5 multi-model ensemble mean ET are analyzed, respectively, to quantify the apparent and externally-forced ET changes, while the unforced ET variations are estimated as the apparent ET minus the forced component. Large discrepancies of the ET estimates, in terms of their trend, variability, and temperature- and precipitation-dependence, are found among the three datasets. Results show that the forced global-mean ET exhibits an upward trend of 0.08 mm day⁻¹ century⁻¹ from 1982 to 2010. The forced ET also contains considerable multi-year to decadal variations during the latter half of the 20th century that are caused by volcanic aerosols. The spatial patterns and interannual variations of the forced ET are more closely linked to precipitation than temperature. After removing the forced component, the global-mean ET shows a trend ranging from -0.07 to 0.06 mm day⁻¹ century⁻¹ during 1982-2010 with varying spatial patterns among the three datasets. Furthermore, linkages between the unforced ET and internal climate modes are examined. Variations in Pacific sea surface temperatures (SSTs) are found to be consistently correlated with ET over many land areas among the ET datasets. The results suggest that there are large uncertainties in our current estimates of global terrestrial ET for the recent decades, and the greenhouse gas (GHG) and aerosol external forcings account for a large part of the apparent trend in global-mean terrestrial ET since 1982, but Pacific SST and other internal climate variability dominate recent ET variations and changes over most regions.

  18. A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.

    PubMed

    Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel

    2015-03-23

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
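
    A partial sketch of the pipeline follows, using only the PCA and CART stages (the MARS stage requires a separate package such as py-earth and is omitted); the data below are synthetic placeholders, not engine measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training data: sensor snapshot features and RUL labels
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                                   # 12 sensor features
y = np.maximum(0.0, 200.0 - 50.0 * X[:, 0] + rng.normal(0, 5, 500))  # RUL in cycles

# Reduce the sensor space with PCA, then regress RUL with a CART tree
model = make_pipeline(PCA(n_components=5), DecisionTreeRegressor(max_depth=6))
model.fit(X, y)
rul_pred = model.predict(X[:3])   # RUL estimates for three new snapshots
```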

  19. A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines

    PubMed Central

    Lasheras, Fernando Sánchez; Nieto, Paulino José García; de Cos Juez, Francisco Javier; Bayón, Ricardo Mayo; Suárez, Victor Manuel González

    2015-01-01

    Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines. PMID:25806876

  20. Hybrid solar collector using nonimaging optics and photovoltaic components

    NASA Astrophysics Data System (ADS)

    Winston, Roland; Yablonovitch, Eli; Jiang, Lun; Widyolar, Bennett K.; Abdelhamid, Mahmoud; Scranton, Gregg; Cygan, David; Kozlov, Alexandr

    2015-08-01

    The project team of the University of California at Merced (UC-M), Gas Technology Institute, and Dr. Eli Yablonovitch of the University of California at Berkeley developed a novel hybrid concentrated solar photovoltaic thermal (PV/T) collector using nonimaging optics and world-record single-junction gallium arsenide (GaAs) PV components, integrated with a particle-laden gas as the thermal transfer and storage medium, to simultaneously generate electricity and high-temperature dispatchable heat. The collector transforms a parabolic trough, commonly used in CSP plants, into an integrated spectrum-splitting device. This places a spectrum-sensitive topping element on a secondary reflector that is registered to the thermal collection loop. The secondary reflector transmits higher-energy photons for PV topping while diverting the remaining lower-energy photons to the thermal media, achieving temperatures of around 400°C even under partial utilization of the solar spectrum. The collector uses the spectral selectivity of the GaAs cells to maximize the exergy output of the system, resulting in an estimated exergy efficiency of 48%. The thermal media is composed of fine particles of high-melting-point material in an inert gas, which increases heat transfer and effectively stores excess heat in hot particles for later on-demand use.

  1. Texture and composition of the Rosa Marina beach sands (Adriatic coast, southern Italy): a sedimentological/ecological approach

    NASA Astrophysics Data System (ADS)

    Moretti, Massimo; Tropeano, Marcello; Loon, A. J. (Tom) van; Acquafredda, Pasquale; Baldacconi, Rossella; Festa, Vincenzo; Lisco, Stefania; Mastronuzzi, Giuseppe; Moretti, Vincenzo; Scotti, Rosa

    2016-06-01

    Beach sands from the Rosa Marina locality (Adriatic coast, southern Italy) were analysed mainly microscopically in order to trace the source areas of their lithoclastic and bioclastic components. The main outcropping sedimentary units were also studied to identify the potential source areas of lithoclasts. This made it possible to establish how the various rock units contribute to the formation of the beach sands. The analysis of the bioclastic components allows estimation of the actual role of organisms in supplying this material to the beach. Identification of taxa present in the beach sands as shell fragments or other remains was carried out at the genus or family level. Ecological investigation of the same beach and the recognition of sub-environments (mainly distinguished on the basis of the nature of the substrate and of the water depth) was the key step that made it possible to establish the actual source areas of bioclasts in the Rosa Marina beach sands. The sedimentological analysis (including a physical study of the beach and the calculation of some statistical parameters concerning the grain-size curves) shows that the Rosa Marina beach is nowadays subject to erosion.

  2. Coarse initial orbit determination for a geostationary satellite using single-epoch GPS measurements.

    PubMed

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-04-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite's state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state.

  3. Coarse Initial Orbit Determination for a Geostationary Satellite Using Single-Epoch GPS Measurements

    PubMed Central

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-01-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite’s state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state. PMID:25835299

  4. Magma-assisted rifting in Ethiopia.

    PubMed

    Kendall, J-M; Stuart, G W; Ebinger, C J; Bastow, I D; Keir, D

    2005-01-13

    The rifting of continents and evolution of ocean basins is a fundamental component of plate tectonics, yet the process of continental break-up remains controversial. Plate driving forces have been estimated to be as much as an order of magnitude smaller than those required to rupture thick continental lithosphere. However, Buck has proposed that lithospheric heating by mantle upwelling and related magma production could promote lithospheric rupture at much lower stresses. Such models of mechanical versus magma-assisted extension can be tested, because they predict different temporal and spatial patterns of crustal and upper-mantle structure. Changes in plate deformation produce strain-enhanced crystal alignment and increased melt production within the upper mantle, both of which can cause seismic anisotropy. The Northern Ethiopian Rift is an ideal place to test break-up models because it formed in cratonic lithosphere with minor far-field plate stresses. Here we present evidence of seismic anisotropy in the upper mantle of this rift zone using observations of shear-wave splitting. Our observations, together with recent geological data, indicate a strong component of melt-induced anisotropy with only minor crustal stretching, supporting the magma-assisted rifting model in this area of initially cold, thick continental lithosphere.

  5. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    PubMed

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain central to the identification process in spite of the advent and accomplishments of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive, as well as revise, the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches (morphological, metric, molecular and radiographic) to sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones; hence, such direct methods of sex estimation are considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. In addition, the newer 3D methods have been shown to reveal specific sexual dimorphism patterns not readily captured by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers to obtain more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Toward robust estimation of the components of forest population change: simulation results

    Treesearch

    Francis A. Roesch

    2014-01-01

    This report presents the full simulation results of the work described in Roesch (2014), in which multiple levels of simulation were used to test the robustness of estimators for the components of forest change. In that study, a variety of spatial-temporal populations were created based on, but more variable than, an actual forest monitoring dataset, and then those...

  7. The method of trend analysis of parameters time series of gas-turbine engine state

    NASA Astrophysics Data System (ADS)

    Hvozdeva, I.; Myrhorod, V.; Derenh, Y.

    2017-10-01

    This research substantiates an approach to interval estimation of the trend component of a time series. Well-known methods of spectral and trend analysis are applied to multidimensional data arrays. Interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a prevailing (dominant) eigenvalue. The properties of the time series autocorrelation matrix are identified.
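
    As a minimal sketch of the precondition the method relies on, the snippet below builds a lag-window (Toeplitz) autocorrelation matrix for a synthetic trend-plus-noise series and checks whether one eigenvalue prevails; the window length and the series are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    # Minimal sketch: lag-window autocorrelation matrix of a trend-plus-noise
    # series; a ratio of the two largest eigenvalues far above 1 indicates a
    # prevailing (dominant) eigenvalue.  Window length and series are toy choices.
    def autocorr_matrix(x, max_lag):
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]
        acf = acf[:max_lag + 1] / acf[0]
        return toeplitz(acf)

    rng = np.random.default_rng(0)
    t = np.arange(500)
    series = 0.05 * t + rng.normal(size=t.size)  # linear trend + white noise
    eig = np.sort(np.linalg.eigvalsh(autocorr_matrix(series, max_lag=20)))[::-1]
    print(eig[0] / eig[1])  # >> 1 for a dominant eigenvalue
    ```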

  8. Do Different Young Plantation-Grown Species Require Different Biomass Models?

    Treesearch

    Bryce E. Schlaegel; Harvey E. Kennedy

    1985-01-01

    Sweetgum and water oak trees sampled from a plantation over 7 years were used to test whether primary tree component (bole wood, bole bark, limb wood, limb bark, and leaves) predictions could be summed to estimate total bole, total limb, and total tree values. Estimations by summing primary component predictions were not significantly different from predictions for the...

  9. Pitch and Yaw Trajectory Measurement Comparison Between Automated Video Analysis and Onboard Sensor Data Analysis Techniques

    DTIC Science & Technology

    2013-09-01

    Report ARL-TR-6576. Recoverable figure captions: Figure 11, "Estimated angle-of-attack components history, projectile no. 2"; Figure 12, "Comparison of angle-of-attack component estimates, projectile no. 2"; Figure 13, "Total angle-of-" (truncated).

  10. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver Functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces by applying a deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of the two components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. Another widely used method is an iterative procedure that estimates the RF peaks, convolves them with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak-estimation failure. In this work we propose a deconvolution algorithm that uses a Genetic Algorithm (GA) to estimate the RF peaks. This method operates entirely in the time domain, avoiding the time-to-frequency calculations (and vice-versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in the RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
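
    A hedged sketch of the underlying idea follows: recover a small set of receiver-function spikes (delay, amplitude) so that the vertical component convolved with the spike train reproduces the radial component, using an evolutionary optimizer. SciPy's differential_evolution stands in here for the authors' genetic algorithm, and the toy wavelet, spike count and bounds are assumptions of this sketch.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.signal import fftconvolve

    # Sketch: find spike (delay, amplitude) pairs so that vertical * rf ~ radial.
    # differential_evolution is used as a stand-in evolutionary optimizer.
    def rf_trace(params, n):
        rf = np.zeros(n)
        for i in range(0, len(params), 2):
            rf[int(params[i])] += params[i + 1]  # delay sample, amplitude
        return rf

    def misfit(params, vertical, radial):
        pred = fftconvolve(vertical, rf_trace(params, len(radial)))[:len(radial)]
        return np.sum((pred - radial) ** 2)

    rng = np.random.default_rng(1)
    n = 200
    vertical = rng.normal(size=n) * np.exp(-np.arange(n) / 30.0)  # toy wavelet
    radial = fftconvolve(vertical, rf_trace([0, 1.0, 40, 0.5, 90, -0.3], n))[:n]

    k = 3  # number of spikes to recover (assumed known here)
    bounds = [(0, n - 1), (-1.5, 1.5)] * k
    best = differential_evolution(misfit, bounds, args=(vertical, radial),
                                  seed=1, maxiter=500, tol=1e-8)
    print(best.x.reshape(k, 2))  # recovered (delay, amplitude) pairs
    ```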

  11. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo (MCMC) approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188
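
    The estimation machinery rests on a random-walk Metropolis-Hastings (M-H) sampler. The sketch below shows the generic accept/reject loop on a toy bivariate-normal log-posterior; the paper's actual hierarchical Bayes target for the CMNP parameters is far more involved.

    ```python
    import numpy as np

    # Generic random-walk Metropolis-Hastings loop; the bivariate-normal
    # log-posterior is a toy stand-in for the CMNP hierarchical Bayes target.
    def log_post(theta):
        cov = np.array([[1.0, 0.6], [0.6, 1.0]])
        return -0.5 * theta @ np.linalg.solve(cov, theta)

    def metropolis_hastings(log_post, theta0, n_samples, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_samples, theta.size))
        for i in range(n_samples):
            prop = theta + step * rng.normal(size=theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    chain = metropolis_hastings(log_post, [3.0, -3.0], 20000)
    print(chain[5000:].mean(axis=0))  # posterior mean after burn-in
    ```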

  12. Public-private partnerships in the response to HIV: experience from the resource industry in Papua New Guinea.

    PubMed

    Miles, K; Conlon, M; Stinshoff, J; Hutton, R

    2014-01-01

    Although Papua New Guinea (PNG) has made some progress in social development over the past 30 years, growth in the country's Human Development Index has slowed in recent years, placing it below the regional average. In 2012, the estimated HIV prevalence for adults aged 15-49 years was 0.5% and an estimated 25,000 people were living with HIV. Although reduced from previous estimates, the country's HIV prevalence remains the highest in the South Pacific region. While the faith-based and non-governmental sectors have engaged in HIV interventions since the epidemic began, until recently the corporate sector has remained on the margins of the national response. In 2008, the country's largest oil and gas producer began partnering with national and provincial health authorities, development partners and global financing institutions to contribute to the national HIV strategy and implementation plan. This article provides an overview of public-private partnerships (PPPs) and their application to public health program management, and then describes the PPP that was developed in PNG. Innovative national and local PPPs have become a core component of healthcare strategy in many countries. PPPs have many forms and their use in low- and middle-income countries has progressively demonstrated increased service outputs and health outcomes beyond what the public sector alone could achieve. A PPP in PNG has resulted in an oil and gas producer engaging in the response to HIV, including managing the country's US$46 million HIV grant from the Global Fund to Fight AIDS, Tuberculosis and Malaria. Given the increasing expectations of the international community in relation to corporate responsibility and sustainability, the role of the corporate sector in countries like PNG is critical. Combining philanthropic investment with business strategy, expertise and organisational resources can contribute to enhancing health system structures and capacity.

  13. Cross-phase separation of nanowires and nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Fang; Duoss, Eric; Han, Jinkyu

    In one embodiment, a process includes creating a mixture of an aqueous component, nanowires and nanoparticles, and a hydrophobic solvent and allowing migration of the nanowires to the hydrophobic solvent, where the nanoparticles remain in the aqueous component. Moreover, the nanowires and nanoparticles are in the aqueous component before the migration.

  14. Evaluation of biochar-anaerobic potato digestate mixtures as renewable components of horticultural potting media

    USDA-ARS?s Scientific Manuscript database

    Various formulations are used in horticultural potting media, with sphagnum peat moss, vermiculite and perlite currently among the most common components. We are examining a dried anaerobic digestate remaining after the fermentation of potato processing wastes to replace organic components such as p...

  15. ASSOCIATIONS BETWEEN PARTICULATE MATTER COMPONENTS AND DAILY MORTALITY AND MORBIDITY IN PHILADELPHIA, PA

    EPA Science Inventory

    In evaluating the health risks from particulate matter (PM), the question remains as to which component(s) of PM are most harmful. We investigated this issue using PM mass, PM constituents, mortality, and the elderly hospital admission data in Philadelphia, PA. Daily paired PM...

  16. North Polar Radiative Flux Variability from 2002 Through 2014

    NASA Technical Reports Server (NTRS)

    Rutan, David; Rose, Fred; Doelling, David; Kato, Seiji; Smith, Bill, Jr.

    2017-01-01

    NASA's Clouds and the Earth's Radiant Energy System (CERES) project produces the SYN1deg data product, which provides global, 1-deg gridded, hourly estimates of Top of Atmosphere (TOA) radiative flux (CERES observations and calculations) and atmospheric and surface radiative flux (calculations). Examples of 12-year North Polar averages of selected variables are presented. Given recent interest in polar science, we focus here on TOA and surface validation of the calculated irradiances and on comparing SYN1deg calculations with meteorological teleconnections. TOA upward longwave irradiance calculations match the CERES observations well, both spatially and temporally, with correlations remaining strong through PC 6. TOA reflected shortwave irradiance calculations likewise match the CERES observations well, both spatially and temporally, with correlations remaining strong through PC 7. Comparing SYN1deg calculations to teleconnection patterns requires expanding the area to 30N for the EOF analyses. Correlating the principal components of various variables with teleconnection time series indicates which variable is most highly correlated with which teleconnection signal. The tables indicate that the Pacific North American Oscillation is most correlated with OLR EOF 1, and the North American Oscillation is correlated most closely with surface downward LW flux EOF 1.

  17. Women use voice parameters to assess men's characteristics

    PubMed Central

    Bruckert, Laetitia; Liénard, Jean-Sylvain; Lacroix, André; Kreutzer, Michel; Leboucher, Gérard

    2005-01-01

    The purpose of this study was: (i) to provide additional evidence regarding the existence of human voice parameters, which could be reliable indicators of a speaker's physical characteristics and (ii) to examine the ability of listeners to judge voice pleasantness and a speaker's characteristics from speech samples. We recorded 26 men enunciating five vowels. Voices were played to 102 female judges who were asked to assess vocal attractiveness and speakers' age, height and weight. Statistical analyses were used to determine: (i) which physical component predicted which vocal component and (ii) which vocal component predicted which judgment. We found that men with low-frequency formants and small formant dispersion tended to be older, taller and tended to have a high level of testosterone. Female listeners were consistent in their pleasantness judgment and in their height, weight and age estimates. Pleasantness judgments were based mainly on intonation. Female listeners were able to correctly estimate age by using formant components. They were able to estimate weight but we could not explain which acoustic parameters they used. However, female listeners were not able to estimate height, possibly because they used intonation incorrectly. Our study confirms that in all mammal species examined thus far, including humans, formant components can provide a relatively accurate indication of a vocalizing individual's characteristics. Human listeners have the necessary information at their disposal; however, they do not necessarily use it. PMID:16519239

  18. Estimating Vibrational Powers Of Parts In Fluid Machinery

    NASA Technical Reports Server (NTRS)

    Harvey, S. A.; Kwok, L. C.

    1995-01-01

    In new method of estimating vibrational power associated with component of fluid-machinery system, physics of flow through (or in vicinity of) component regarded as governing vibrations. Devised to generate scaling estimates for design of new parts of rocket engines (e.g., pumps, combustors, nozzles) but applicable to terrestrial pumps, turbines, and other machinery in which turbulent flows and vibrations caused by such flows are significant. Validity of method depends on assumption that fluid flows quasi-steadily and that flow gives rise to uncorrelated acoustic powers in different parts of pump.

  19. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Ho-Ling; Davis, Stacy Cagle

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate the off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) on the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent possible on the overall totals, to the current FHWA estimates. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of fuel use estimates directly affects the data quality in many tables published in the Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use Gasoline Model, some challenges remain due to aging model equations and discontinuation of data sources.

  20. Three-dimensional analysis of magnetometer array data

    NASA Technical Reports Server (NTRS)

    Richmond, A. D.; Baumjohann, W.

    1984-01-01

    A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.
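
    Optimal linear estimation of this kind can be sketched compactly: given a spatial autocorrelation (covariance) function and noisy station data, the field at each map point is a covariance-weighted linear combination of the observations. The Gaussian covariance shape, length scale and noise level below are illustrative assumptions, not the values used for the Scandinavian array.

    ```python
    import numpy as np

    # Objective-mapping sketch: estimate = C_gd @ C_dd^{-1} @ data, with a
    # Gaussian spatial covariance and a nugget term for measurement noise.
    def cov(x1, x2, scale=300.0):
        d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
        return np.exp(-(d / scale) ** 2)

    rng = np.random.default_rng(6)
    stations = rng.uniform(0, 1000, size=(15, 2))  # station coordinates, km
    data = np.sin(stations[:, 0] / 200.0) + 0.05 * rng.normal(size=15)

    grid = np.array([[x, y] for x in np.linspace(0, 1000, 5)
                     for y in np.linspace(0, 1000, 5)])
    C_dd = cov(stations, stations) + 0.05 ** 2 * np.eye(15)  # data covariance
    C_gd = cov(grid, stations)                           # grid-data covariance
    field_map = C_gd @ np.linalg.solve(C_dd, data)       # optimal linear estimate
    print(field_map.reshape(5, 5).round(2))
    ```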

  1. Signals, resistance to change, and conditioned reinforcement in a multiple schedule.

    PubMed

    Bell, Matthew C; Gomez, Belen E; Kessler, Kira

    2008-06-01

    The effect of signals on resistance to change was evaluated using pigeons responding on a three-component multiple schedule. Each component contained a variable-interval initial link followed by a fixed-time terminal link. One component was an unsignaled-delay schedule, and two were equivalent signaled-delay schedules. After baseline training, resistance to change was assessed through (a) extinction and (b) adding free food to the intercomponent interval. During these tests, the signal stimulus from one of the signaled-delay components (SIG-T) was replaced with the initial-link stimulus from that component, converting it to an unsignaled-delay schedule. That signal stimulus was added to the delay period of the unsignaled-delay component (UNS), converting it to a signaled-delay schedule. The remaining signaled component remained unchanged (SIG-C). Resistance-to-change tests showed removing the signal had a minimal effect on resistance to change in the SIG-T component compared to the unchanged SIG-C component except for one block during free-food testing. Adding the signal to the UNS component significantly increased response rates suggesting that component had low response strength. Interestingly, the direction of the effect was in the opposite direction from what is typically observed. Results are consistent with the conclusion that the signal functioned as a conditioned reinforcer and inconsistent with a generalization-decrement explanation.

  2. Component separation of an isotropic Gravitational Wave Background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parida, Abhishek; Jhingan, Sanjay; Mitra, Sanjit, E-mail: abhishek@jmi.ac.in, E-mail: sanjit@iucaa.in, E-mail: sjhingan@jmi.ac.in

    2016-04-01

    A Gravitational Wave Background (GWB) is expected in the universe from the superposition of a large number of unresolved astrophysical sources and phenomena in the early universe. Each component of the background (e.g., from primordial metric perturbations, binary neutron stars, milli-second pulsars etc.) has its own spectral shape. Many ongoing experiments aim to probe GWB at a variety of frequency bands. In the last two decades, using data from ground-based laser interferometric gravitational wave (GW) observatories, upper limits on GWB were placed in the frequency range of ∼50-100 Hz, considering one spectral shape at a time. However, one strong component can significantly enhance the estimated strength of another component. Hence, estimation of the amplitudes of the components with different spectral shapes should be done jointly. Here we propose a method for 'component separation' of a statistically isotropic background that can, for the first time, jointly estimate the amplitudes of many components and place upper limits. The method is rather straightforward and needs a negligible amount of computation. It utilises the linear relationship between the measurements and the amplitudes of the actual components, alleviating the need for a sampling based method, e.g., Markov Chain Monte Carlo (MCMC) or matched filtering, which are computationally intensive and cumbersome in a multi-dimensional parameter space. Using this formalism we could also study how many independent components can be separated using a given dataset from a network of current and upcoming ground based interferometric detectors.
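
    Because the model is linear in the component amplitudes, the joint estimate reduces to weighted least squares. The sketch below jointly fits three assumed spectral shapes to simulated per-frequency measurements; the shapes, noise model and frequency band are illustrative, not the paper's.

    ```python
    import numpy as np

    # Joint weighted-least-squares fit of several assumed spectral shapes to
    # per-frequency measurements; shapes, band and noise are illustrative.
    f = np.linspace(20.0, 100.0, 200)
    shapes = np.column_stack([np.ones_like(f),            # flat spectrum
                              (f / 25.0) ** (2.0 / 3.0),  # compact binaries
                              (f / 25.0) ** 3.0])         # steep example shape
    true_amp = np.array([2.0, 1.0, 0.3])
    sigma = 5.0 * np.ones_like(f)                         # per-bin noise std
    rng = np.random.default_rng(2)
    y = shapes @ true_amp + rng.normal(scale=sigma)

    W = np.diag(1.0 / sigma ** 2)
    fisher = shapes.T @ W @ shapes                   # Fisher information matrix
    amp_hat = np.linalg.solve(fisher, shapes.T @ W @ y)      # joint amplitudes
    print(amp_hat, np.sqrt(np.diag(np.linalg.inv(fisher))))  # estimates, errors
    ```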

  3. Separation of fNIRS Signals into Functional and Systemic Components Based on Differences in Hemodynamic Modalities

    PubMed Central

    Yamada, Toru; Umeyama, Shinji; Matsuda, Keiji

    2012-01-01

    In conventional functional near-infrared spectroscopy (fNIRS), systemic physiological fluctuations evoked by a body's motion and psychophysiological changes often contaminate fNIRS signals. We propose a novel method for separating functional and systemic signals based on their hemodynamic differences. Considering their physiological origins, we assumed a negative and positive linear relationship between oxy- and deoxyhemoglobin changes of functional and systemic signals, respectively. Their coefficients are determined by an empirical procedure. The proposed method was compared to conventional and multi-distance NIRS. The results were as follows: (1) Nonfunctional tasks evoked substantial oxyhemoglobin changes, and comparatively smaller deoxyhemoglobin changes, in the same direction by conventional NIRS. The systemic components estimated by the proposed method were similar to the above finding. The estimated functional components were very small. (2) During finger-tapping tasks, laterality in the functional component was more distinctive using our proposed method than that by conventional fNIRS. The systemic component indicated task-evoked changes, regardless of the finger used to perform the task. (3) For all tasks, the functional components were highly coincident with signals estimated by multi-distance NIRS. These results strongly suggest that the functional component obtained by the proposed method originates in the cerebral cortical layer. We believe that the proposed method could improve the reliability of fNIRS measurements without any modification in commercially available instruments. PMID:23185590
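
    The separation itself is a per-sample 2x2 linear inversion under the assumed model: the functional component satisfies deoxy = -k_f * oxy and the systemic component deoxy = +k_s * oxy. The coefficients below are illustrative; the paper determines them empirically.

    ```python
    import numpy as np

    # Per-sample 2x2 inversion under the assumed linear model:
    # functional: deoxy = -k_f * oxy ; systemic: deoxy = +k_s * oxy.
    k_f, k_s = 0.6, 0.4  # illustrative coefficients (empirical in the paper)
    M = np.array([[1.0, 1.0],
                  [-k_f, k_s]])  # maps (functional, systemic) -> (oxy, deoxy)
    M_inv = np.linalg.inv(M)

    def separate(oxy, deoxy):
        comp = M_inv @ np.vstack([oxy, deoxy])
        return comp[0], comp[1]  # functional, systemic oxyhemoglobin parts

    t = np.linspace(0, 60, 600)
    functional = 0.5 * np.exp(-(t - 20) ** 2 / 20.0)  # task-evoked response
    systemic = 0.3 * np.sin(2 * np.pi * t / 10.0)     # physiological wave
    oxy = functional + systemic
    deoxy = -k_f * functional + k_s * systemic
    f_hat, s_hat = separate(oxy, deoxy)
    print(np.allclose(f_hat, functional), np.allclose(s_hat, systemic))
    ```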

  4. Model-based tomographic reconstruction of objects containing known components.

    PubMed

    Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H

    2012-10-01

    The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.

  5. Energy recovery from organic fractions of municipal solid waste: A case study of Hyderabad city, Pakistan.

    PubMed

    Safar, Korai M; Bux, Mahar R; Aslam, Uqaili M; Ahmed, Memon S; Ahmed, Lashari I

    2016-04-01

    Non-renewable energy sources have remained the choice of the world for centuries. Rapid population growth and industrialisation have caused their shortage, and their use has led to environmental degradation. Thus, at the present rate of consumption, they will not last very long. It is in this perspective that this study has been conducted. The estimation of energy in terms of biogas and heat from various organic fractions of municipal solid waste is presented and discussed. The results show that organic fractions of municipal solid waste possess methane potential in the range of 3%-22%, and their heat capacity ranges from 3007 to 20,099 kJ kg(-1). Also, the theoretical biogas potential of different individual fruit and vegetable components and of mixed food waste is analysed and estimated in the range of 608-1244 m(3) t(-1). Further, the share of bioenergy from municipal solid waste in the total primary energy supply in Pakistan has been estimated to be 1.82%. About 8.43% of the present energy demand of the country could be met from municipal solid waste. The study leads us to the conclusion that a reduction in the share of imported energy (i.e. 0.1% of total energy supply) and in the amount of energy from fossil fuels can be achieved by adopting a waste-to-energy system in the country. © The Author(s) 2016.

  6. Agriculture is a major source of NO x pollution in California.

    PubMed

    Almaraz, Maya; Bai, Edith; Wang, Chao; Trousdell, Justin; Conley, Stephen; Faloona, Ian; Houlton, Benjamin Z

    2018-01-01

    Nitrogen oxides (NOx = NO + NO2) are a primary component of air pollution, a leading cause of premature death in humans and biodiversity declines worldwide. Although regulatory policies in California have successfully limited transportation sources of NOx pollution, several of the United States' worst air-quality districts remain in rural regions of the state. Site-based findings suggest that NOx emissions from California's agricultural soils could contribute to air quality issues; however, a statewide estimate is hitherto lacking. We show that agricultural soils are a dominant source of NOx pollution in California, with especially high soil NOx emissions from the state's Central Valley region. We base our conclusion on two independent approaches: (i) a bottom-up spatial model of soil NOx emissions and (ii) top-down airborne observations of atmospheric NOx concentrations over the San Joaquin Valley. These approaches point to a large, overlooked NOx source from cropland soil, which is estimated to increase the NOx budget by 20 to 51%. These estimates are consistent with previous studies of point-scale measurements of NOx emissions from the soil. Our results highlight opportunities to limit NOx emissions from agriculture by investing in management practices that will bring co-benefits to the economy, ecosystems, and human health in rural areas of California.

  7. Overcoming statistical bias to estimate genetic mating systems in open populations: a comparison of Bateman's principles between the sexes in a sex-role-reversed pipefish.

    PubMed

    Mobley, Kenyon B; Jones, Adam G

    2013-03-01

    The genetic mating system is a key component of the sexual selection process, yet methods for the quantification of mating systems remain controversial. One approach involves metrics derived from Bateman's principles, which are based on variances in mating and reproductive success and the relationship between them. However, these measures are extremely difficult to measure for both sexes in open populations, because missing data can result in biased estimates. Here, we develop a novel approach for the estimation of mating system metrics based on Bateman's principles and apply it to a microsatellite-based parentage analysis of a natural population of the dusky pipefish, Syngnathus floridae. Our results show that both male and female dusky pipefish have significantly positive Bateman gradients. However, females exhibit larger values of the opportunity for sexual selection and the opportunity for selection compared to males. These differences translate into a maximum intensity of sexual selection (S'max) for females three times larger than that for males. Overall, this study identifies a critical source of bias that affects studies of mating systems in open populations, presents a novel method for overcoming this bias, and applies this method for the first time in a sex-role-reversed pipefish. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.

  8. Role of gravity-based information on the orientation and localization of the perceived body midline.

    PubMed

    Ceyte, Hadrien; Cian, Corinne; Nougier, Vincent; Olivier, Isabelle; Trousselard, Marion

    2007-01-01

    The present study focused on the influence of gravity-based information on the orientation and localization of the perceived body midline. The orientation was investigated by the rolling adjustment of a rod on the subjects' Z-axis and the localization by the horizontal adjustment of a visual dot as being straight ahead. Experiment 1 investigated the effect of the dissociation between the Z-axis and the direction of gravity by placing subjects in roll tilt and supine postures. In roll tilt, the perception of the body midline orientation was deviated in the direction of body tilt and the perception of its localization was deviated in the opposite direction. In the supine body orientation, estimates of the Z-axis and straight-ahead remained veridical, as when the body was upright. Experiment 2 highlighted the relative importance of the otolithic and tactile information using diffuse pressure stimulation. The estimation of body midline orientation was modified, in contrast to the estimation of its localization. Thus, subjects had no absolute representation of their egocentric space. The main hypothesis regarding the dissociation between the orientation and localization of the body midline may be related to a difference in the integration of sensory information. It can be suggested that the horizontal component of the vestibulo-ocular reflex (VOR) contributed to the perceived localization of the body midline, whereas its orientation was mainly influenced by tactile information.

  9. Accounting carbon storage in decaying root systems of harvested forests.

    PubMed

    Wang, G Geoff; Van Lear, David H; Hu, Huifeng; Kapeluck, Peter R

    2012-05-01

    Decaying root systems of harvested trees can be a significant component of belowground carbon storage, especially in intensively managed forests where harvest occurs repeatedly in relatively short rotations. Based on destructive sampling of root systems of harvested loblolly pine trees, we estimated that root systems contained about 32% (17.2 Mg ha(-1)) of the soil organic carbon at the time of harvest and about 13% (6.1 Mg ha(-1)) 10 years later. Based on the published roundwood output data, we estimated belowground biomass at the time of harvest for loblolly-shortleaf pine forests harvested between 1995 and 2005 in South Carolina. We then calculated the C that remained in the decomposing root systems in 2005 using the decay function developed for loblolly pine. Our calculations indicate that the amount of C stored in decaying roots of loblolly-shortleaf pine forests harvested between 1995 and 2005 in South Carolina was 7.1 Tg. Using a simple extrapolation method, we estimated 331.8 Tg C stored in the decomposing roots due to timber harvest from 1995 to 2005 in the conterminous USA. To fully account for the C stored in the decomposing roots of US forests, future studies need (1) to quantify decay rates of coarse roots for major tree species in different regions, and (2) to develop a methodology that can determine the C stock in decomposing roots resulting from natural mortality.
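
    A worked toy version of such a calculation, assuming first-order (exponential) decay: the rate constant is back-calculated here so that the numbers reproduce the abstract's 17.2 Mg ha(-1) at harvest and roughly 6.1 Mg ha(-1) after 10 years; the actual loblolly pine decay function may differ.

    ```python
    import math

    # Toy first-order decay consistent with the abstract's numbers; the
    # implied rate constant is an assumption, not the paper's fitted value.
    c0, c10 = 17.2, 6.1                  # Mg C/ha at harvest and 10 y later
    k = math.log(c0 / c10) / 10.0        # implied decay constant, ~0.104 1/yr

    def root_carbon(t_years):
        return c0 * math.exp(-k * t_years)

    print(round(k, 3), round(root_carbon(5), 1), round(root_carbon(10), 1))
    ```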

  10. Comparison of multi-subject ICA methods for analysis of fMRI data

    PubMed Central

    Erhardt, Erik Barry; Rachakonda, Srinivas; Bedrick, Edward; Allen, Elena; Adali, Tülay; Calhoun, Vince D.

    2010-01-01

    Spatial independent component analysis (ICA) applied to functional magnetic resonance imaging (fMRI) data identifies functionally connected networks by estimating spatially independent patterns from their linearly mixed fMRI signals. Several multi-subject ICA approaches estimating subject-specific time courses (TCs) and spatial maps (SMs) have been developed, however there has not yet been a full comparison of the implications of their use. Here, we provide extensive comparisons of four multi-subject ICA approaches in combination with data reduction methods for simulated and fMRI task data. For multi-subject ICA, the data first undergo reduction at the subject and group levels using principal component analysis (PCA). Comparisons of subject-specific, spatial concatenation, and group data mean subject-level reduction strategies using PCA and probabilistic PCA (PPCA) show that computationally intensive PPCA is equivalent to PCA, and that subject-specific and group data mean subject-level PCA are preferred because of well-estimated TCs and SMs. Second, aggregate independent components are estimated using either noise free ICA or probabilistic ICA (PICA). Third, subject-specific SMs and TCs are estimated using back-reconstruction. We compare several direct group ICA (GICA) back-reconstruction approaches (GICA1-GICA3) and an indirect back-reconstruction approach, spatio-temporal regression (STR, or dual regression). Results show the earlier group ICA (GICA1) approximates STR, however STR has contradictory assumptions and may show mixed-component artifacts in estimated SMs. Our evidence-based recommendation is to use GICA3, introduced here, with subject-specific PCA and noise-free ICA, providing the most robust and accurate estimated SMs and TCs in addition to offering an intuitive interpretation. PMID:21162045
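
    A much-simplified sketch of one such pipeline, under illustrative sizes and random toy data: subject-level PCA reduction, temporal concatenation, group PCA, aggregate spatial ICA, then subject-specific maps and time courses via spatio-temporal (dual) regression. This is a generic illustration, not the GICA3 algorithm itself.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    # Generic multi-subject spatial-ICA sketch with toy data (not GICA3 itself).
    n_sub, n_time, n_vox, n_comp, n_pc = 5, 100, 400, 4, 20
    rng = np.random.default_rng(3)
    subjects = [rng.normal(size=(n_time, n_vox)) for _ in range(n_sub)]

    # 1) subject-level PCA: reduce each subject's temporal dimension to n_pc
    reduced = [PCA(n_components=n_pc).fit_transform(x.T).T for x in subjects]
    stacked = np.vstack(reduced)                        # (n_sub*n_pc, n_vox)

    # 2) group PCA, then aggregate spatial ICA (maps independent over voxels)
    group = PCA(n_components=n_comp).fit_transform(stacked.T).T
    maps = FastICA(n_components=n_comp, random_state=0).fit_transform(group.T)

    # 3) spatio-temporal (dual) regression for subject time courses and maps
    for x in subjects:
        tc = x @ maps @ np.linalg.inv(maps.T @ maps)    # (n_time, n_comp)
        sm = np.linalg.pinv(tc) @ x                     # (n_comp, n_vox)
    print(maps.shape, tc.shape, sm.shape)
    ```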

  11. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  12. Association between component costs, study methodologies, and foodborne illness-related factors with the cost of nontyphoidal Salmonella illness.

    PubMed

    McLinden, Taylor; Sargeant, Jan M; Thomas, M Kate; Papadopoulos, Andrew; Fazil, Aamir

    2014-09-01

    Nontyphoidal Salmonella spp. are one of the most common causes of bacterial foodborne illness. Variability in cost inventories and study methodologies limits the possibility of meaningfully interpreting and comparing cost-of-illness (COI) estimates, reducing their usefulness. However, little is known about the relative effect these factors have on a cost-of-illness estimate. This is important for comparing existing estimates and when designing new cost-of-illness studies. Cost-of-illness estimates, identified through a scoping review, were used to investigate the association between descriptive, component cost, methodological, and foodborne illness-related factors such as chronic sequelae and under-reporting with the cost of nontyphoidal Salmonella spp. illness. The standardized cost of nontyphoidal Salmonella spp. illness from 30 estimates reported in 29 studies ranged from $0.01568 to $41.22 United States dollars (USD)/person/year (2012). The mean cost of nontyphoidal Salmonella spp. illness was $10.37 USD/person/year (2012). The following factors were found to be significant in multiple linear regression (p≤0.05): the number of direct component cost categories included in an estimate (0-4, particularly long-term care costs) and chronic sequelae costs (inclusion/exclusion), which had positive associations with the cost of nontyphoidal Salmonella spp. illness. Factors related to study methodology were not significant. Our findings indicated that study methodology may not be as influential as other factors, such as the number of direct component cost categories included in an estimate and costs incurred due to chronic sequelae. Therefore, these may be the most important factors to consider when designing, interpreting, and comparing cost of foodborne illness studies.

  13. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.

  14. The use of Leptodyctium riparium (Hedw.) Warnst in the estimation of minimum postmortem interval.

    PubMed

    Lancia, Massimo; Conforti, Federica; Aleffi, Michele; Caccianiga, Marco; Bacci, Mauro; Rossi, Riccardo

    2013-01-01

    The estimation of the postmortem interval (PMI) is still one of the most challenging issues in forensic investigations, especially in cases in which advanced transformative phenomena have taken place. The dating of skeletal remains is even more difficult and sometimes only a rough determination of the PMI is possible. Recent studies suggest that plant analysis can provide a reliable estimation for skeletal remains dating, when traditional techniques are not applicable. Forensic Botany is a relatively recent discipline that includes many subdisciplines such as Palynology, Anatomy, Dendrochronology, Limnology, Systematics, Ecology, and Molecular Biology. In a recent study, Cardoso et al. (Int J Legal Med 2010;124:451) used botanical evidence for the first time to establish the PMI of human skeletal remains found in a forested area of northern Portugal from the growth rate of mosses and shrub roots. The present paper deals with a case in which the study of the growth rate of the bryophyte Leptodyctium riparium (Hedw.) Warnst. was used in estimating the PMI of some human skeletal remains that were found in a wooded area near Perugia, in Central Italy. © 2012 American Academy of Forensic Sciences.

  15. Turbulent Burning Velocities of Two-Component Fuel Mixtures of Methane, Propane and Hydrogen

    NASA Astrophysics Data System (ADS)

    Kido, Hiroyuki; Nakahara, Masaya; Hashimoto, Jun; Barat, Dilmurat

    In order to clarify the turbulent burning velocity of multi-component fuel mixtures, both lean and rich two-component fuel mixtures, in which methane, propane and hydrogen were used as fuels, were prepared while maintaining the laminar burning velocity approximately constant. A distinct difference in the measured turbulent burning velocity at the same turbulence intensity is observed for two-component fuel mixtures having different addition rates of fuel, even when the laminar burning velocities are approximately the same. The burning velocities of lean mixtures change almost constantly as the rate of addition changes, whereas the burning velocities of the rich mixtures show no such tendency. This trend can be explained qualitatively based on the mean local burning velocity, which is estimated by taking into account the preferential diffusion effect for each fuel component. In addition, a model of turbulent burning velocity proposed for single-component fuel mixtures may be applied to two-component fuel mixtures by considering the estimated mean local burning velocity of each fuel.

  16. Estimation of the Lithospheric Component Share in the Earth Natural Pulsed Electromagnetic Field Structure

    NASA Astrophysics Data System (ADS)

    Malyshkov, S. Y.; Gordeev, V. F.; Polyvach, V. I.; Shtalin, S. G.; Pustovalov, K. N.

    2017-04-01

    This article describes the results of integrated monitoring of climatic and ecological parameters of the atmosphere and the Earth's crust, and estimates the share of the lithospheric component in the structure of the Earth's natural pulsed electromagnetic field. To estimate the lithospheric component, we performed round-the-clock monitoring of the background variations of the Earth's natural pulsed electromagnetic field at the experiment location and measured the field under electric shields. Natural materials in a natural environment were used for shielding, specifically lakes with varying water conductivity. The experiment exploits the skin effect, the tendency of electromagnetic wave amplitude to decrease with greater depth in a conductor. The atmospheric and lithospheric components of the field recorded on open terrain were compared against data recorded with the atmospheric component attenuated by an electric shield. In summary, the experiment demonstrated that the decay of the electromagnetic field originating from thunderstorm discharges corresponds to the decay calculated using Maxwell's equations. In the absence of close lightning strikes, the ratio of the field intensity recorded on open terrain to the shielded field intensity is inconsistent with the ratio calculated for atmospheric sources, which confirms that a lithospheric component is present in the Earth's natural pulsed electromagnetic field.
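
    The shielding argument can be checked on the back of an envelope with the skin-depth formula delta = sqrt(2 / (mu * sigma * omega)), the e-folding attenuation depth of an electromagnetic wave in a conductor. The conductivities and pulse frequency below are illustrative guesses for lake water, not the paper's measured values.

    ```python
    import math

    # Skin depth delta = sqrt(2 / (mu * sigma * omega)); conductivities and
    # frequency are illustrative guesses for lake water and kHz-range pulses.
    MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

    def skin_depth(sigma_s_per_m, freq_hz):
        return math.sqrt(2.0 / (MU0 * sigma_s_per_m * 2 * math.pi * freq_hz))

    for sigma in (0.01, 0.05, 0.5):            # S/m, fresh to brackish water
        print(f"sigma={sigma} S/m -> skin depth {skin_depth(sigma, 10e3):.0f} m")
    ```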

  17. Effectiveness and Cost Efficiency of Different Surveillance Components for Proving Freedom and Early Detection of Disease: Bluetongue Serotype 8 in Cattle as Case Study for Belgium, France and the Netherlands.

    PubMed

    Welby, S; van Schaik, G; Veldhuis, A; Brouwer-Middelesch, H; Peroz, C; Santman-Berends, I M; Fourichon, C; Wever, P; Van der Stede, Y

    2017-12-01

    Quick detection and recovery of a country's freedom status remain a constant challenge in animal health surveillance. The efficacy and cost efficiency of different surveillance components in proving the absence of infection or the (early) detection of bluetongue serotype 8 in cattle populations within different countries (the Netherlands, France, Belgium) using surveillance data from the years 2006 and 2007 were investigated using an adapted scenario tree model approach. First, surveillance components (sentinel, yearly cross-sectional and passive clinical reporting) within each country were evaluated in terms of efficacy for substantiating freedom of infection. Yearly cross-sectional survey and passive clinical reporting performed well within each country with sensitivity of detection values ranging around 0.99. The sentinel component had a sensitivity of detection around 0.7. Secondly, we evaluated how effective the components were for the (early) detection of bluetongue serotype 8 and whether syndromic surveillance based on reproductive performance, milk production and mortality data available from the Netherlands and Belgium could add value. Epidemic curves were used to estimate the timeliness of detection. Sensitivity analysis revealed that expected within-herd prevalence and number of herds processed were the most influential parameters for proving freedom and early detection. Looking at the assumed direct costs, although total costs were low for sentinel and passive clinical surveillance components, passive clinical surveillance together with syndromic surveillance (based on reproductive performance data) turned out most cost-efficient for the detection of bluetongue serotype 8. To conclude, for emerging or re-emerging vectorborne diseases that behave like bluetongue serotype 8, it is recommended to use passive clinical and syndromic surveillance as early detection systems for maximum cost efficiency and sensitivity. Once an infection is detected and eradicated, cross-sectional screening for substantiating freedom of infection and sentinel for monitoring the disease evolution are recommended. © 2016 Blackwell Verlag GmbH.
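
    One simple building block of such an evaluation is combining component sensitivities: if each surveillance component independently detects an infected population with sensitivity Se_i, the system sensitivity is SSe = 1 - prod(1 - Se_i). The sketch below plugs in values of the magnitude reported in the abstract; the full scenario tree model conditions on many more factors.

    ```python
    import numpy as np

    # If each component independently detects an infected population with
    # sensitivity Se_i, the combined system sensitivity is 1 - prod(1 - Se_i).
    def system_sensitivity(component_se):
        return 1.0 - np.prod(1.0 - np.asarray(component_se))

    # magnitudes from the abstract: cross-sectional, passive, sentinel
    print(system_sensitivity([0.99, 0.99, 0.7]))
    ```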

  18. Uncertainty in countrywide forest biomass estimates.

    Treesearch

    C.E. Peterson; D. Turner

    1994-01-01

    Country-wide estimates of forest biomass are the major driver for estimating and understanding carbon pools and flux, a critical component of global change research. Important determinants in making these estimates include the areal extent of forested lands and their associated biomass. Estimates for these parameters may be derived from surface-based data, photo...

  19. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
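
    The abstract does not expose SRFYDO's internals, but the series-system assumption it mentions can be sketched: a series system works only if every component works, so point estimates multiply, and uncertainty can be propagated by sampling component posteriors. The Beta posteriors and pass/fail counts below are illustrative assumptions, not SRFYDO's model.

    ```python
    import numpy as np

    # Series-system sketch: the system works only if all components work.
    # Component pass/fail counts and Beta posteriors are illustrative.
    rng = np.random.default_rng(4)
    tests = {"comp_A": (98, 2), "comp_B": (45, 1), "comp_C": (197, 3)}

    draws = np.ones(10000)
    for passed, failed in tests.values():
        # Beta(pass+1, fail+1) posterior under a uniform prior
        draws *= rng.beta(passed + 1, failed + 1, size=10000)

    print(draws.mean(), np.percentile(draws, [5, 95]))  # system reliability
    ```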

  20. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to solving the evoked potential estimation problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potentials estimation method, taking full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of evoked potentials and the randomness of a spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of the common component and the unique components; second, making use of their characteristics, the two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method in order to extract the common component of double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.
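
    The model can be sketched as a block-dictionary sparse coding problem: both trials are coded at once against a dictionary whose first block (with shared coefficients) captures the common evoked component and whose remaining blocks capture the trial-unique parts. The sketch below uses random dictionaries and scikit-learn's Lasso as a stand-in for the paper's joint sparse solver; all sizes and codes are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    # Block-dictionary sketch: shared coefficients code the common (evoked)
    # component; Lasso stands in for the paper's joint sparse solver.
    rng = np.random.default_rng(7)
    n, k = 128, 60
    D_c, D_u1, D_u2 = (rng.normal(size=(n, k)) for _ in range(3))

    c = np.zeros(k); c[[3, 17, 42]] = [1.0, -0.7, 0.5]   # sparse common code
    u1 = np.zeros(k); u1[[5, 30]] = [0.8, -0.6]          # trial-1 unique code
    u2 = np.zeros(k); u2[[11, 50]] = [0.7, 0.4]          # trial-2 unique code
    y1 = D_c @ c + D_u1 @ u1 + 0.1 * rng.normal(size=n)
    y2 = D_c @ c + D_u2 @ u2 + 0.1 * rng.normal(size=n)

    A = np.block([[D_c, D_u1, np.zeros((n, k))],
                  [D_c, np.zeros((n, k)), D_u2]])        # (2n, 3k) dictionary
    fit = Lasso(alpha=0.01, max_iter=100000).fit(A, np.concatenate([y1, y2]))
    common = D_c @ fit.coef_[:k]                         # estimated evoked part
    print(np.corrcoef(common, D_c @ c)[0, 1])
    ```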

  1. The mass of the compact object in the X-ray binary her X-1/HZ her

    NASA Astrophysics Data System (ADS)

    Abubekerov, M. K.; Antokhina, E. A.; Cherepashchuk, A. M.; Shimanskii, V. V.

    2008-05-01

    We have obtained the first estimates of the masses of the components of the Her X-1/HZ Her X-ray binary system taking into account non-LTE effects in the formation of the Hγ absorption line: m_x = 1.8 M⊙ and m_v = 2.5 M⊙. These mass estimates were made in a Roche model based on the observed radial-velocity curve of the optical star, HZ Her. The masses for the X-ray pulsar and the optical star obtained in an LTE model are m_x = 0.85 ± 0.15 M⊙ and m_v = 1.87 ± 0.13 M⊙. These mass estimates for the components of Her X-1/HZ Her derived from the radial-velocity curve should be considered tentative; further mass estimates should be made from high-precision observations of the orbital variability of the absorption profiles using a non-LTE model for the atmosphere of the optical component.

  2. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  3. Internal Interdecadal Variability in CMIP5 Control Simulations

    NASA Astrophysics Data System (ADS)

    Cheung, A. H.; Mann, M. E.; Frankcombe, L. M.; England, M. H.; Steinman, B. A.; Miller, S. K.

    2015-12-01

    Here we make use of control simulations from the CMIP5 models to quantify the amplitude of the interdecadal internal variability component in Atlantic, Pacific, and Northern Hemisphere mean surface temperature. We compare against estimates derived from observations using a semi-empirical approach wherein the forced component as estimated using CMIP5 historical simulations is removed to yield an estimate of the residual, internal variability. While the observational estimates are largely consistent with those derived from the control simulations for both basins and the Northern Hemisphere, they lie in the upper range of the model distributions, suggesting the possibility of differences between the amplitudes of observed and modeled variability. We comment on some possible reasons for the disparity.
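
    The semi-empirical approach can be sketched in a few lines: regress the observed series on the model-estimated forced component and treat the residual as internal variability. The toy "observations" and forced series below are synthetic stand-ins for the real surface-temperature data.

    ```python
    import numpy as np

    # Semi-empirical sketch: regress observations on the modeled forced
    # component; the residual is the internal-variability estimate.
    rng = np.random.default_rng(5)
    years = np.arange(1900, 2015)
    forced = 0.008 * (years - years[0])            # toy multimodel-mean series
    obs = 0.9 * forced + 0.1 * rng.normal(size=years.size)  # toy observations

    slope, intercept = np.polyfit(forced, obs, 1)
    internal = obs - (slope * forced + intercept)  # residual internal component
    print(round(slope, 2), round(internal.std(), 3))
    ```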

  4. A Systematic Review of the Frequency of Neurocyticercosis with a Focus on People with Epilepsy

    PubMed Central

    Ndimubanzi, Patrick C.; Carabin, Hélène; Budke, Christine M.; Nguyen, Hai; Qian, Ying-Jun; Rainwater, Elizabeth; Dickey, Mary; Reynolds, Stephanie; Stoner, Julie A.

    2010-01-01

    Background: The objective of this study is to conduct a systematic review of studies reporting the frequency of neurocysticercosis (NCC) worldwide. Methods/Principal Findings: PubMed, Commonwealth Agricultural Bureau (CAB) abstracts and 23 international databases were systematically searched for articles published from January 1, 1990 to June 1, 2008. Articles were evaluated for inclusion by at least two researchers focusing on study design and methods. Data were extracted independently using standardized forms. A random-effects binomial model was used to estimate the proportion of NCC among people with epilepsy (PWE). Overall, 565 articles were retrieved and 290 (51%) selected for further analysis. After a second analytic phase, only 4.5% of articles, all of which used neuroimaging for the diagnosis of NCC, were reviewed. Only two studies, both from the US, estimated an incidence rate of NCC using hospital discharge data. The prevalence of NCC in a random sample of village residents was reported from one study where 9.1% of the population harboured brain lesions of NCC. The proportion of NCC among different study populations varied widely. However, the proportion of NCC in PWE was a lot more consistent. The pooled estimate for this population was 29.0% (95%CI: 22.9%–35.5%). These results were not sensitive to the inclusion or exclusion of any particular study. Conclusion/Significance: Only one study has estimated the prevalence of NCC in a random sample of all residents. Hence, the prevalence of NCC worldwide remains unknown. However, the pooled estimate for the proportion of NCC among PWE was very robust and could be used, in conjunction with estimates of the prevalence and incidence of epilepsy, to estimate this component of the burden of NCC in endemic areas. The previously recommended guidelines for the diagnostic process and for declaring NCC an international reportable disease would improve the knowledge on the global frequency of NCC. PMID:21072231

  5. A systematic review of the frequency of neurocysticercosis with a focus on people with epilepsy.

    PubMed

    Ndimubanzi, Patrick C; Carabin, Hélène; Budke, Christine M; Nguyen, Hai; Qian, Ying-Jun; Rainwater, Elizabeth; Dickey, Mary; Reynolds, Stephanie; Stoner, Julie A

    2010-11-02

    The objective of this study is to conduct a systematic review of studies reporting the frequency of neurocysticercosis (NCC) worldwide. PubMed, Commonwealth Agricultural Bureau (CAB) abstracts and 23 international databases were systematically searched for articles published from January 1, 1990 to June 1, 2008. Articles were evaluated for inclusion by at least two researchers focusing on study design and methods. Data were extracted independently using standardized forms. A random-effects binomial model was used to estimate the proportion of NCC among people with epilepsy (PWE). Overall, 565 articles were retrieved and 290 (51%) selected for further analysis. After a second analytic phase, only 4.5% of articles, all of which used neuroimaging for the diagnosis of NCC, were reviewed. Only two studies, both from the US, estimated an incidence rate of NCC using hospital discharge data. The prevalence of NCC in a random sample of village residents was reported from one study where 9.1% of the population harboured brain lesions of NCC. The proportion of NCC among different study populations varied widely. However, the proportion of NCC in PWE was considerably more consistent. The pooled estimate for this population was 29.0% (95%CI: 22.9%-35.5%). These results were not sensitive to the inclusion or exclusion of any particular study. Only one study has estimated the prevalence of NCC in a random sample of all residents. Hence, the prevalence of NCC worldwide remains unknown. However, the pooled estimate for the proportion of NCC among PWE was very robust and could be used, in conjunction with estimates of the prevalence and incidence of epilepsy, to estimate this component of the burden of NCC in endemic areas. The previously recommended guidelines for the diagnostic process and for declaring NCC an international reportable disease would improve the knowledge on the global frequency of NCC.

  6. Voluntary medical male circumcision: a qualitative study exploring the challenges of costing demand creation in eastern and southern Africa.

    PubMed

    Bertrand, Jane T; Njeuhmeli, Emmanuel; Forsythe, Steven; Mattison, Sarah K; Mahler, Hally; Hankins, Catherine A

    2011-01-01

    This paper proposes an approach to estimating the costs of demand creation for voluntary medical male circumcision (VMMC) scale-up in 13 countries of eastern and southern Africa. It addresses two key questions: (1) what are the elements of a standardized package for demand creation? And (2) what challenges exist and must be taken into account in estimating the costs of demand creation? We conducted a key informant study on VMMC demand creation using purposive sampling to recruit seven people who provide technical assistance to government programs and manage budgets for VMMC demand creation. Key informants provided their views on the important elements of VMMC demand creation and the most effective funding allocations across different types of communication approaches (e.g., mass media, small media, outreach/mobilization). The key finding was the wide range of views, suggesting that a standard package of core demand creation elements would not be universally applicable. This underscored the importance of tailoring demand creation strategies and estimates to specific country contexts before estimating costs. The key informant interviews, supplemented by the researchers' field experience, identified these issues to be addressed in future costing exercises: variations in the cost of VMMC demand creation activities by country and program, decisions about the quality and comprehensiveness of programming, and lack of data on critical elements needed to "trigger the decision" among eligible men. Based on this study's findings, we propose a seven-step methodological approach to estimate the cost of VMMC scale-up in a priority country, based on our key assumptions. However, further work is needed to better understand core components of a demand creation package and how to cost them. Notwithstanding the methodological challenges, estimating the cost of demand creation remains an essential element in deriving estimates of the total costs for VMMC scale-up in eastern and southern Africa.

  7. Voluntary Medical Male Circumcision: A Qualitative Study Exploring the Challenges of Costing Demand Creation in Eastern and Southern Africa

    PubMed Central

    Bertrand, Jane T.; Njeuhmeli, Emmanuel; Forsythe, Steven; Mattison, Sarah K.; Mahler, Hally; Hankins, Catherine A.

    2011-01-01

    Background This paper proposes an approach to estimating the costs of demand creation for voluntary medical male circumcision (VMMC) scale-up in 13 countries of eastern and southern Africa. It addresses two key questions: (1) what are the elements of a standardized package for demand creation? And (2) what challenges exist and must be taken into account in estimating the costs of demand creation? Methods and Findings We conducted a key informant study on VMMC demand creation using purposive sampling to recruit seven people who provide technical assistance to government programs and manage budgets for VMMC demand creation. Key informants provided their views on the important elements of VMMC demand creation and the most effective funding allocations across different types of communication approaches (e.g., mass media, small media, outreach/mobilization). The key finding was the wide range of views, suggesting that a standard package of core demand creation elements would not be universally applicable. This underscored the importance of tailoring demand creation strategies and estimates to specific country contexts before estimating costs. The key informant interviews, supplemented by the researchers' field experience, identified these issues to be addressed in future costing exercises: variations in the cost of VMMC demand creation activities by country and program, decisions about the quality and comprehensiveness of programming, and lack of data on critical elements needed to “trigger the decision” among eligible men. Conclusions Based on this study's findings, we propose a seven-step methodological approach to estimate the cost of VMMC scale-up in a priority country, based on our key assumptions. However, further work is needed to better understand core components of a demand creation package and how to cost them. Notwithstanding the methodological challenges, estimating the cost of demand creation remains an essential element in deriving estimates of the total costs for VMMC scale-up in eastern and southern Africa. PMID:22140450

  8. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
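
    The univariate baseline in such analyses is the crossed persons x tasks (p x t) design; a minimal sketch of its variance-component and generalizability-coefficient estimation via expected mean squares is below (the multivariate case, which stratifies tasks by content, is not reproduced):

    ```python
    import numpy as np

    def g_study_pxt(X):
        """Variance components for a p x t G study with one observation per
        cell (rows: persons, columns: tasks)."""
        n_p, n_t = X.shape
        grand = X.mean()
        ss_p = n_t * np.sum((X.mean(axis=1) - grand) ** 2)
        ss_t = n_p * np.sum((X.mean(axis=0) - grand) ** 2)
        ss_tot = np.sum((X - grand) ** 2)
        ms_p = ss_p / (n_p - 1)
        ms_t = ss_t / (n_t - 1)
        ms_pt = (ss_tot - ss_p - ss_t) / ((n_p - 1) * (n_t - 1))
        var_pt = ms_pt                            # p x t interaction, confounded with error
        var_p = (ms_p - ms_pt) / n_t              # person (universe-score) variance
        var_t = (ms_t - ms_pt) / n_p              # task variance
        g_coef = var_p / (var_p + var_pt / n_t)   # relative G coefficient
        return var_p, var_t, var_pt, g_coef
    ```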

  9. Transition from order to chaos, and density limit, in magnetized plasmas.

    PubMed

    Carati, A; Zuin, M; Maiocchi, A; Marino, M; Martines, E; Galgani, L

    2012-09-01

    It is known that a plasma in a magnetic field, conceived microscopically as a system of point charges, can exist in a magnetized state, and thus remain confined, inasmuch as it is in an ordered state of motion, with the charged particles performing gyrational motions transverse to the field. Here, we give an estimate of a threshold, beyond which transverse motions become chaotic, the electrons being unable to perform even one gyration, so that a breakdown should occur, with complete loss of confinement. The estimate is obtained by the methods of perturbation theory, taking as the perturbing force acting on each electron that due to the so-called microfield, i.e., the electric field produced by all the other charges. We first obtain a general relation for the threshold, which involves the fluctuations of the microfield. Then, taking for such fluctuations the formula given by Iglesias, Lebowitz, and MacGowan for the model of a one-component plasma with neutralizing background, we obtain a definite formula for the threshold, which corresponds to a density limit increasing as the square of the imposed magnetic field. Such a theoretical density limit is found to fit the empirical data for collapses of fusion machines quite well.

  10. Estimating Angle-of-Arrival and Time-of-Flight for Multipath Components Using WiFi Channel State Information.

    PubMed

    Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil

    2018-05-29

    Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using a 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA–ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. The high accuracy and low computational complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
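
    The core 1D step can be illustrated with the classic matrix pencil method for a sum of complex exponentials; the paper's modified variant (MMP), its AoA-ToF pairing, and the packet aggregation model are not reproduced here:

    ```python
    import numpy as np

    def matrix_pencil(x, M, P=None):
        """Estimate M signal poles z_k from x[n] = sum_k a_k z_k^n + noise.
        In practice an SVD-based rank truncation of the Hankel matrix is used
        for noise robustness; this sketch keeps the M dominant eigenvalues."""
        x = np.asarray(x)
        N = len(x)
        P = P if P is not None else N // 3                    # pencil parameter
        Y = np.array([x[i:i + P + 1] for i in range(N - P)])  # Hankel matrix
        Y1, Y2 = Y[:, :-1], Y[:, 1:]
        vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
        return vals[np.argsort(-np.abs(vals))][:M]

    # For a half-wavelength uniform linear array, angle(z_k) = pi * sin(theta_k),
    # so theta_k = arcsin(angle(z_k) / pi); across subcarriers the pole angle is
    # instead proportional to the ToF.
    ```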

  11. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
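
    A naive resampling scheme (without the bias corrections alluded to above) is easy to state; the sketch below resamples persons ("boot-p") and is purely illustrative, e.g., wrapped around a variance-component estimator such as the p x t sketch under record 8:

    ```python
    import numpy as np

    def boot_se(X, estimator, n_boot=1000, seed=0):
        """Bootstrap SE of a scalar statistic of a persons x tasks matrix,
        resampling persons with replacement. Naive: known to be biased for
        variance components, which is what bias-correcting procedures fix."""
        rng = np.random.default_rng(seed)
        n_p = X.shape[0]
        stats = [estimator(X[rng.integers(0, n_p, n_p)]) for _ in range(n_boot)]
        return np.std(stats, ddof=1)
    ```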

  12. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The contributions of three major gasoline blending components (reformate, alkylate and cracked gasoline) to potential environmental impacts are assessed. This study estimates losses of the gasoline blending components due to evaporation and leaks through their life cycle, from pe...

  13. Practical aspects of estimating energy components in rodents

    PubMed Central

    van Klinken, Jan B.; van den Berg, Sjoerd A. A.; van Dijk, Ko Willems

    2013-01-01

    Recently there has been increasing interest in exploiting computational and statistical techniques for the purpose of component analysis of indirect calorimetry data. Using these methods it becomes possible to dissect daily energy expenditure into its components and to assess the dynamic response of the resting metabolic rate (RMR) to nutritional and pharmacological manipulations. Performing robust component analysis, however, is not straightforward and typically requires the tuning of parameters and the preprocessing of data. Moreover, the degree of accuracy that can be attained by these methods depends on the configuration of the system, which must be properly taken into account when setting up experimental studies. Here, we review the methods of Kalman filtering, linear and penalized spline regression, and minimal energy expenditure estimation in the context of component analysis and discuss their results on high resolution datasets from mice and rats. In addition, we investigate the effect of the sample time, the accuracy of the activity sensor, and the washout time of the chamber on the estimation accuracy. We found that on the high resolution data there was a strong correlation between the results of Kalman filtering and penalized spline (P-spline) regression, except for the activity respiratory quotient (RQ). For low resolution data the basal metabolic rate (BMR) and resting RQ could still be estimated accurately with P-spline regression, having a strong correlation with the high resolution estimate (R2 > 0.997; sample time of 9 min). In contrast, the thermic effect of food (TEF) and activity related energy expenditure (AEE) were more sensitive to a reduction in the sample rate (R2 > 0.97). In conclusion, for component analysis on data generated by single channel systems with continuous data acquisition both Kalman filtering and P-spline regression can be used, while for low resolution data from multichannel systems P-spline regression gives more robust results. PMID:23641217
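
    As a concrete taste of the penalized-spline idea, the Whittaker-Eilers smoother below is the dense-basis limit of P-spline regression (one basis function per sample, second-difference penalty); taking the lower envelope of the smoothed energy-expenditure trace would give a crude RMR baseline. This is an illustration, not the authors' exact pipeline:

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def whittaker_smooth(y, lam=1e4):
        """Minimize ||y - z||^2 + lam * ||D2 z||^2 for a smooth trend z."""
        y = np.asarray(y, float)
        n = len(y)
        D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2],
                         shape=(n - 2, n), format="csc")   # second differences
        A = sparse.eye(n, format="csc") + lam * (D.T @ D)
        return spsolve(A, y)
    ```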

  14. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    PubMed

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population to evolve to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men), blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex are detected for averages of all components of MetS in young people without alterations. Young Mexicans are highly prone to acquire MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
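
    The five statistics reported here come straight from the 2x2 table of a single component against full MetS status:

    ```python
    def screening_stats(tp, fp, fn, tn):
        """Sensitivity-analysis statistics from a 2x2 table (counts)."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv":         tp / (tp + fp),   # positive predictive value (precision)
            "npv":         tn / (tn + fn),   # negative predictive value
            "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        }
    ```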

  15. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
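
    The arithmetic behind standards-based prediction is compact: under the constant-failure-rate (exponential) model used by MIL-HDBK-217-style handbooks, part failure rates simply add for a series system. A minimal sketch, with illustrative inputs:

    ```python
    import math

    def series_reliability(failure_rates_per_hour, mission_hours):
        """Series-system reliability and MTBF from constant part failure rates."""
        lam_sys = sum(failure_rates_per_hour)        # rates add in series
        return math.exp(-lam_sys * mission_hours), 1.0 / lam_sys

    # Example: three parts at 2, 5, and 0.5 failures per million hours.
    R, mtbf = series_reliability([2e-6, 5e-6, 0.5e-6], mission_hours=1000.0)
    ```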

  16. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex ℓ1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
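
    The convex backbone of such reconstructions alternates singular-value soft-thresholding (for L) with entrywise soft-thresholding (for S). In the sketch below, standard SVT stands in for the paper's non-convex OptShrink shrinkage, and k-space undersampling is omitted; parameter names are illustrative:

    ```python
    import numpy as np

    def soft(x, t):
        # Entrywise soft-thresholding, safe for real or complex arrays.
        mag = np.abs(x)
        return np.where(mag > t, x * (1.0 - t / np.maximum(mag, 1e-12)), 0.0)

    def lps_decompose(M, lam_l, lam_s, n_iter=50):
        """Decompose a (voxels x frames) Casorati matrix M ~ L + S."""
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * soft(sv, lam_l)) @ Vt     # singular-value thresholding
            S = soft(M - L, lam_s)             # sparse residual
        return L, S
    ```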

  17. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
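
    The point estimate at issue is omega-hierarchical-like: the squared sum of general-factor loadings over the total scale-score variance. A one-line sketch (the paper's interval procedure is not reproduced):

    ```python
    import numpy as np

    def general_factor_proportion(general_loadings, total_variance):
        """Proportion of total scale-score variance due to the general factor."""
        return np.sum(general_loadings) ** 2 / total_variance
    ```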

  18. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
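
    Viterbi decoding for a Poisson HMM is short enough to sketch in full; parameters are assumed already fitted (e.g., by EM), and all names are illustrative:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def viterbi_poisson(counts, means, trans, init):
        """Most likely state path for counts under an m-state Poisson HMM."""
        counts = np.asarray(counts)
        means = np.asarray(means, float)
        logA = np.log(np.asarray(trans, float))
        m, T = len(means), len(counts)
        logB = poisson.logpmf(counts[None, :], means[:, None])   # (m, T) emissions
        delta = np.log(np.asarray(init, float)) + logB[:, 0]
        psi = np.zeros((T, m), dtype=int)
        for t in range(1, T):
            cand = delta[:, None] + logA          # cand[i, j]: into state j via i
            psi[t] = np.argmax(cand, axis=0)
            delta = cand[psi[t], np.arange(m)] + logB[:, t]
        path = np.empty(T, dtype=int)
        path[-1] = int(np.argmax(delta))
        for t in range(T - 2, -1, -1):
            path[t] = psi[t + 1][path[t + 1]]
        return path
    ```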

  19. Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-09-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns reconstruct changes in accumulation rate, whereas amplitude modulation identifies eccentricity-modulated precession. The functioning of the time-variant sinusoidal model is illustrated and validated using a synthetic insolation signal. The new modeling approach is tested on two case studies: (1) a Pliocene-Pleistocene benthic δ18O record from Ocean Drilling Program (ODP) Site 846 and (2) a Danian magnetic susceptibility record from the Contessa Highway section, Gubbio, Italy.
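
    The heart of the model is ordinary linear least squares: a stationary sinusoid at the component's mean frequency, amplitude- and phase-modulated by low-order polynomials. The single-frame sketch below (the published model uses overlapping frames; the degree and all names are assumptions) recovers instantaneous amplitude and frequency:

    ```python
    import numpy as np

    def fit_modulated_sinusoid(t, y, f0, deg=3):
        """Fit y ~ sum_p t^p (a_p cos + b_p sin)(2 pi f0 t) by least squares."""
        tp = np.vander(t, deg + 1, increasing=True)       # 1, t, t^2, ...
        X = np.hstack([tp * np.cos(2 * np.pi * f0 * t)[:, None],
                       tp * np.sin(2 * np.pi * f0 * t)[:, None]])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        A, B = tp @ coef[:deg + 1], tp @ coef[deg + 1:]   # slowly varying envelopes
        amp = np.hypot(A, B)                              # instantaneous amplitude
        phase = 2 * np.pi * f0 * t - np.unwrap(np.arctan2(B, A))
        inst_freq = np.gradient(phase, t) / (2 * np.pi)   # instantaneous frequency
        return X @ coef, amp, inst_freq
    ```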

  20. Gravity wave characteristics and their impact on turbulent transport above an Antarctic ice sheet

    NASA Astrophysics Data System (ADS)

    Cava, Daniela; Giostra, Umberto; Katul, Gabriel

    2016-04-01

    Turbulence within the stable boundary layer (SBL) remains a ubiquitous feature of many geophysical flows, especially over glaciers and ice sheets. Although numerous studies have investigated various aspects of boundary layer motion during stable atmospheric conditions, a unified picture of turbulent transport within the SBL remains elusive. In a strongly stratified SBL, turbulence generation is frequently associated with interactions with sub-meso scale motions that are often a combination of gravity waves (GWs) and horizontal modes. While some progress has been made in the inclusion of GW parameterisation within global models, the description and parameterisation of the turbulence-wave interaction remain open questions. The discrimination between waves and turbulence is a focal point needed to make progress, as these two motions have different properties with regard to heat, moisture, and pollutant transport. In fact, the occurrence of GWs can cause significant differences and ambiguities in the interpretation of turbulence statistics and fluxes if they are not filtered from the analysis a priori. In this work, the characteristics of GWs and their impact on turbulent statistics were investigated using wind velocity components and scalars collected above an Antarctic ice sheet during an austral summer. Antarctica is an ideal location for exploring the characteristics of GWs because of persistent, strongly stable atmospheric conditions in the lower troposphere. Periods dominated by wavy motions were identified by analysing time series measured by fast-response instrumentation. The nature and features of the GWs were investigated using Fourier cross-spectral indicators. The detected waves were frequently characterised by variable amplitude and period; moreover, they often produced non-stationarity and large intermittency in turbulent fluctuations that can significantly alter the estimation of turbulence statistics in general and fluxes in particular. A multi-resolution decomposition based on the Haar wavelet was applied to separate gravity waves from turbulent fluctuations when a sufficiently well-defined spectral gap was present. Statistics computed after removing wavy disturbances highlight the large impact of gravity waves on second-order turbulent quantities. One of the most affected parameters is the turbulent kinetic energy, particularly its longitudinal and lateral components. The effect of wave activity on momentum and scalar fluxes is more complex because waves can produce large errors in the sign and magnitude of computed turbulent fluxes, or they can themselves contribute to intermittent turbulent mixing. The proposed filtering procedure based on the multi-resolution decomposition restored the correct sign of the turbulent sensible heat flux values. These findings highlight the importance of correctly evaluating the impact of wave components when the goal is to determine the turbulent transport of mass and energy in the SBL.
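
    The wave/turbulence separation rests on a multiresolution (Haar) decomposition of the flux: cov(w, c) is partitioned across dyadic averaging scales, and scales beyond the spectral gap are attributed to waves. A self-contained sketch of the general technique (not the exact procedure used in the study):

    ```python
    import numpy as np

    def mr_cospectrum(w, c):
        """Haar multiresolution cospectrum: covariance carried by each dyadic
        averaging scale; the returned contributions sum to cov(w, c)."""
        n = 2 ** int(np.log2(len(w)))                  # truncate to a power of two
        w = np.asarray(w[:n], float) - np.mean(w[:n])
        c = np.asarray(c[:n], float) - np.mean(c[:n])
        widths, co = [], []
        for m in range(int(np.log2(n)), 0, -1):        # whole record down to pairs
            block = 2 ** m
            nb = n // block
            wb = w.reshape(nb, block).mean(axis=1)     # block means at this scale
            cb = c.reshape(nb, block).mean(axis=1)
            widths.append(block)
            co.append(np.mean(wb * cb))
            w -= np.repeat(wb, block)                  # remove means, recurse finer
            c -= np.repeat(cb, block)
        widths.append(1)
        co.append(np.mean(w * c))                      # finest-scale residual
        return np.array(widths), np.array(co)
    ```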
