NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Vaughan, William W.
1998-01-01
A summary is presented of basic lightning characteristics/criteria for current and future NASA aerospace vehicles. The paper estimates the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reenter/land, and return-to-launch site. A literature search was conducted for previous work concerning occurrence and measurement of peak lightning currents, modeling, and estimating probabilities of launch vehicles/objects being struck by lightning. This paper presents these results.
Lightning Strike Peak Current Probabilities as Related to Space Shuttle Operations
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Vaughan, William W.
2000-01-01
A summary is presented of basic lightning characteristics/criteria applicable to current and future aerospace vehicles. The paper provides estimates on the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reenter/land, and return-to-launch site. A literature search was conducted for previous work concerning occurrence and measurement of peak lightning currents, modeling, and estimating the probabilities of launch vehicles/objects being struck by lightning. This paper presents a summary of these results.
State of Washington Population Trends, 1975. Washington State Information Report.
ERIC Educational Resources Information Center
Washington State Office of Program Planning and Fiscal Management, Olympia.
As of April 1, 1975, Washington's population was estimated at 3,494,124--an increase of 80,874 since 1970. Prepared yearly, this report presents tabular data pertaining to: (1) current April 1 estimates for cities, towns, and counties; (2) current decline in household size; (3) the use of postal vacancy surveys in estimating vacancy rates; and (4)…
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
Estimating the prevalence of infertility in Canada
Bushnik, Tracey; Cook, Jocelynn L.; Yuzpe, A. Albert; Tough, Suzanne; Collins, John
2012-01-01
BACKGROUND Over the past 10 years, there has been a significant increase in the use of assisted reproductive technologies in Canada; however, little is known about the overall prevalence of infertility in the population. The purpose of the present study was to estimate the prevalence of current infertility in Canada according to three definitions of the risk of conception. METHODS Data from the infertility component of the 2009–2010 Canadian Community Health Survey were analyzed for married and common-law couples with a female partner aged 18–44. The three definitions of the risk of conception were derived sequentially starting with birth control use in the previous 12 months, adding reported sexual intercourse in the previous 12 months, then pregnancy intent. Prevalence and odds ratios of current infertility were estimated by selected characteristics. RESULTS Estimates of the prevalence of current infertility ranged from 11.5% (95% CI 10.2, 12.9) to 15.7% (95% CI 14.2, 17.4). Each estimate represented an increase in current infertility prevalence in Canada when compared with previous national estimates. Couples with lower parity (0 or 1 child) had significantly higher odds of experiencing current infertility when the female partner was aged 35–44 years versus 18–34 years. Lower odds of experiencing current infertility were observed for multiparous couples regardless of age group of the female partner, when compared with nulliparous couples. CONCLUSIONS The present study suggests that the prevalence of current infertility has increased since the last time it was measured in Canada, and is associated with the age of the female partner and parity. PMID:22258658
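Prevalence figures like those above pair a point estimate with a 95% confidence interval. As a minimal illustration of the form of such an interval (not the survey-weighted bootstrap variance estimation a survey such as the CCHS actually requires), a simple Wald interval from raw counts can be sketched; `prevalence_ci` and its counts are hypothetical:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point estimate and Wald 95% CI for a prevalence from simple counts.

    Illustrative only: a complex survey needs design weights and a proper
    variance estimator (e.g., bootstrap), which this sketch omits.
    """
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, lo, hi = prevalence_ci(cases=157, n=1000)
print(f"prevalence {p:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```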
Methods of albumin estimation in clinical biochemistry: Past, present, and future.
Kumar, Deepak; Banerjee, Dibyajyoti
2017-06-01
Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimation of human serum albumin (HSA). Currently, dye-binding or immunochemical methods are widely practiced. Each of these methods has its limitations, and research endeavors to overcome them are ongoing. However, the current methodological trends guiding the field have not been reviewed, making a review of several aspects of albumin estimation timely. The present review focuses on modern research trends from a conceptual point of view and gives an overview of recent developments to offer the reader a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical properties of alternative national forest inventory area estimators
Francis Roesch; John Coulston; Andrew D. Hill
2012-01-01
The statistical properties of potential estimators of forest area for the USDA Forest Service's Forest Inventory and Analysis (FIA) program are presented and discussed. The current FIA area estimator is compared and contrasted with a weighted mean estimator and an estimator based on the Polya posterior, in the presence of nonresponse. Estimator optimality is...
Analysis and Assessment of Peak Lightning Current Probabilities at the NASA Kennedy Space Center
NASA Technical Reports Server (NTRS)
Johnson, D. L.; Vaughan, W. W.
1999-01-01
This technical memorandum presents a summary by the Electromagnetics and Aerospace Environments Branch at the Marshall Space Flight Center of lightning characteristics and lightning criteria for the protection of aerospace vehicles. Probability estimates are included for certain lightning strikes (peak currents of 200, 100, and 50 kA) applicable to the National Aeronautics and Space Administration Space Shuttle at the Kennedy Space Center, Florida, during rollout, on-pad, and boost/launch phases. Results of an extensive literature search to compile information on this subject are presented in order to answer key questions posed by the Space Shuttle Program Office at the Johnson Space Center concerning peak lightning current probabilities if a vehicle is hit by a lightning cloud-to-ground stroke. Vehicle-triggered lightning probability estimates for the aforementioned peak currents are still under development. Section 4.5, however, does provide some insight on estimating these same peaks.
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a previously known simple analytical model, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
CONTRIBUTIONS OF CURRENT YEAR PHOTOSYNTHATE TO FINE ROOTS ESTIMATED USING A 13C-DEPLETED CO2 SOURCE
The quantification of root turnover is necessary for a complete understanding of plant carbon (C) budgets, especially in terms of impacts of global climate change. To improve estimates of root turnover, we present a method to distinguish current- from prior-year allocation of ca...
Computer Support Systems for Estimating Chemical Toxicity: Present Capabilities and Future Trends
A wide variety of computer-based artificial intelligence (AI) and decision support systems exist currently to aid in the assessment of toxicity for environmental chemicals. T...
Valuing Non-CO2 GHG Emission Changes in Benefit-Cost ...
The climate impacts of greenhouse gas (GHG) emissions impose social costs on society. To date, EPA has not had an approach to estimate the economic benefits of reducing emissions of non-CO2 GHGs (or the costs of increasing them) that is consistent with the methodology underlying the U.S. Government’s current estimates of the social cost of carbon (SCC). A recently published paper presents estimates of the social cost of methane that are consistent with the SCC estimates. The Agency is seeking review of the potential application of these new benefit estimates to benefit-cost analysis in relation to current practice in this area. The goal of this project is to improve upon the current treatment of non-CO2 GHG emission impacts in benefit-cost analysis.
Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee
Stanley J. Zarnoch; Alexander Clark; Ray A. Souter
2003-01-01
Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....
Estimated use of water in the United States in 1995
Solley, Wayne B.; Pierce, Robert R.; Perlman, Howard A.
1998-01-01
The purpose of this report is to present consistent and current water-use estimates by state and water-resources region for the United States, Puerto Rico, the U.S. Virgin Islands, and the District of Columbia. Estimates of water withdrawn from surface- and ground-water sources, estimates of consumptive use, and estimates of instream use and wastewater releases during 1995 are presented in this report. This report discusses eight categories of offstream water use--public supply, domestic, commercial, irrigation, livestock, industrial, mining, and thermoelectric power--and one category of instream use: hydroelectric power.
Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L
2017-10-01
Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known from the literature that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
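The four-block structure described above can be sketched numerically. In this hypothetical sketch, `Z1` and `Z2` are genotype matrices (individuals by markers) centered with each population's own current allele frequencies, and `s1`, `s2` are the usual within-population scaling factors; the across-population block is scaled by the square root of their product, the property the abstract ties to unbiased genetic correlations:

```python
import numpy as np

def two_pop_grm(Z1, Z2, s1, s2):
    """Assemble a two-population genomic relationship matrix.

    Z1, Z2: centered genotype matrices; s1, s2: within-population scaling
    factors (conventionally 2*sum(p*(1-p))). Hypothetical sketch, not the
    paper's exact construction.
    """
    G11 = Z1 @ Z1.T / s1                    # within population 1
    G22 = Z2 @ Z2.T / s2                    # within population 2
    G12 = Z1 @ Z2.T / np.sqrt(s1 * s2)      # across-population block
    return np.block([[G11, G12], [G12.T, G22]])
```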
Estimating Slope and Level Change in N = 1 Designs
ERIC Educational Resources Information Center
Solanas, Antonio; Manolov, Rumen; Onghena, Patrick
2010-01-01
The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series before assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and…
A self-sensing active magnetic bearing based on a direct current measurement approach.
Niemann, Andries C; van Schoor, George; du Rand, Carel P
2013-09-11
Active magnetic bearings (AMBs) have become a key technology in various industrial applications. Self-sensing AMBs provide an integrated sensorless solution for position estimation, consolidating the sensing and actuating functions into a single electromagnetic transducer. The approach aims to reduce possible hardware failure points, production costs, and system complexity. Despite these advantages, self-sensing methods must address various technical challenges to maximize the performance thereof. This paper presents the direct current measurement (DCM) approach for self-sensing AMBs, denoting the direct measurement of the current ripple component. In AMB systems, switching power amplifiers (PAs) modulate the rotor position information onto the current waveform. Demodulation self-sensing techniques then use bandpass and lowpass filters to estimate the rotor position from the voltage and current signals. However, the additional phase-shift introduced by these filters results in lower stability margins. The DCM approach utilizes a novel PA switching method that directly measures the current ripple to obtain duty-cycle invariant position estimates. Demodulation filters are largely excluded to minimize additional phase-shift in the position estimates. Basic functionality and performance of the proposed self-sensing approach are demonstrated via a transient simulation model as well as a high current (10 A) experimental system. A digital implementation of amplitude modulation self-sensing serves as a comparative estimator.
Lifetime Earnings Estimates for Men and Women in the United States: 1979.
ERIC Educational Resources Information Center
Burkhead, Dan L.
1983-01-01
This report presents estimates of expected lifetime earnings based on data collected in the March Current Population Survey by age, sex, and educational attainment for 1978, 1979, and 1980. The text describes the data tables and charts, methodology, and limitations of the data. The eight figures and five detailed tables present lifetime earning…
Current drive by helicon waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Manash Kumar; Bora, Dhiraj; ITER Organization, Cadarache Centre-building 519, 131008 St. Paul-Lez-Durance
2009-01-01
Helicity in the dynamo field components of the helicon wave is examined during this novel study of wave-induced helicity current drive. Strong poloidal asymmetry in the wave magnetic field components is observed during helicon discharges formed in a toroidal vacuum chamber of small aspect ratio. A high frequency regime is chosen to increase the phase velocity of the helicon waves, which in turn minimizes resonant wave-particle interactions and enhances the contribution of the nonresonant current drive mechanisms. Owing to the strong poloidal asymmetry in the wave magnetic field structures, plasma current is driven mostly by the dynamo electric field, which arises due to wave helicity injection by the helicon waves. A small yet finite contribution from the suppressed wave-particle resonance cannot be ruled out in the operational regime examined. A brief discussion of the parametric dependence of the plasma current, along with numerical estimations of the nonresonant components, is presented. A close agreement between the numerical estimation and the measured plasma current magnitude is obtained in the present investigation.
Marginal regression approach for additive hazards models with clustered current status data.
Su, Pei-Fang; Chi, Yunchan
2014-01-15
Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics and demographic and sociology studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive the test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated. Copyright © 2013 John Wiley & Sons, Ltd.
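For reference, the additive hazards model mentioned above is conventionally written (in its standard Lin–Ying form; the paper's clustered current status extension builds estimating functions on this structure) as a baseline hazard plus a linear covariate effect:

```latex
% Additive hazards model: covariates shift the hazard additively,
% rather than multiplicatively as in the Cox model.
\lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z
```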
REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING CANCER RISKS FROM EXPOSURE TO IONIZING RADIATION
In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainti...
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
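The Coulomb-counting baseline that the proposed scheme re-initializes can be sketched as a simple current integral; the linear EMF-to-SoC map below is a placeholder for illustration only, not the paper's electromotive-force predictor:

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Coulomb counting: integrate current to update state-of-charge.

    Sign convention (assumed here): positive current = discharge.
    """
    soc = soc0
    for i_a in currents_a:
        soc -= (i_a * dt_s) / (capacity_ah * 3600.0)  # As / As -> fraction
    return soc

def soc_from_emf(voltage_v):
    """Placeholder EMF->SoC map; real devices use a calibrated curve."""
    return min(1.0, max(0.0, (voltage_v - 3.0) / 1.2))

# Re-initialization idea from the abstract: when a rest period permits an
# EMF-based SoC estimate, reset the integrator to remove accumulated drift.
soc = coulomb_count(soc0=soc_from_emf(3.9), currents_a=[0.5] * 60,
                    dt_s=1.0, capacity_ah=2.0)
```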
Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Field, Jim G.
2013-03-27
Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.
Data-Rate Estimation for Autonomous Receiver Operation
NASA Technical Reports Server (NTRS)
Tkacenko, A.; Simon, M. K.
2005-01-01
In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.
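The admissible-rate structure described above (integer base raised to integer powers times a basic rate) suggests a simple search: evaluate an SNR estimator under each candidate symbol-rate hypothesis and keep the best. In this sketch the SSME itself is stubbed out as a hypothetical `ssme_snr` callable; names and the base/exponent bounds are assumptions, not the article's parameters:

```python
def candidate_rates(base_rate, b=2, k_max=5):
    """Admissible data rates of the form base_rate * b**k."""
    return [base_rate * b**k for k in range(k_max + 1)]

def estimate_rate(samples, fs, base_rate, ssme_snr):
    """Pick the candidate rate whose hypothesis maximizes the SNR estimate.

    ssme_snr(samples, fs, rate) is a stand-in for a split symbol moments
    estimator evaluated under the given symbol-rate hypothesis.
    """
    best_rate, best_snr = None, float("-inf")
    for rate in candidate_rates(base_rate):
        snr = ssme_snr(samples, fs, rate)
        if snr > best_snr:
            best_rate, best_snr = rate, snr
    return best_rate
```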
Usage of combined airman certification by active airmen : an active airman population estimate.
DOT National Transportation Integrated Search
1968-03-01
The analysis presents an approach to the estimation of a current airman population by broad usage categories of air transport purposes, commercial purposes, private or student purposes, and unknown purposes. Observed usage relationships among sampled...
Gregory M. Filip
1989-01-01
In 1979, an equation was developed to estimate the percentage of current and future timber volume loss due to stem decay caused by Heterobasidion annosum and other fungi in advance regeneration stands of grand and white fir in eastern Oregon and Washington. Methods for using and testing the equation are presented. Extensive testing in 1988 showed the...
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun
NASA Astrophysics Data System (ADS)
Fursyak, Yu. A.; Abramenko, V. I.
2017-12-01
Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for the AR of NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current, j_⊥^2 = (0.002–0.004) A^2/m^4, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.
Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue
2017-01-01
The classification of ankle movements from non-invasive brain recordings can be applied to a brain-computer interface (BCI) to control exoskeletons, prostheses, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities from an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task), obtaining a mean accuracy of 65.64% for time series of current sources, estimated from the EEG and the fMRI signals using a variational Bayesian method, and a mean accuracy of 22.19% for the classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained in the same vertex for different motor tasks. Since the inverse filter estimation for current sources can be done offline with the present method, the present method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns to design BCIs for the control of an affected limb in patients with stroke, or BCIs from motor imagery in patients with spinal cord injury.
The unrecognized burden of typhoid fever.
Obaro, Stephen K; Iroh Tam, Pui-Ying; Mintz, Eric Daniel
2017-03-01
Typhoid fever (TF), caused by Salmonella enterica serovar Typhi, is the most common cause of enteric fever, responsible for an estimated 129,000 deaths and more than 11 million cases annually. Although several reviews have provided global and regional TF disease burden estimates, major gaps in our understanding of TF epidemiology remain. Areas covered: We provide an overview of the gaps in current estimates of TF disease burden and offer suggestions for addressing them, so that affected communities can receive the full potential of disease prevention offered by vaccination and water, sanitation, and hygiene interventions. Expert commentary: Current disease burden estimates for TF do not capture cases from certain host populations, nor those with atypical presentations of TF, which may lead to substantial underestimation of TF cases and deaths. These knowledge gaps pose major obstacles to the informed use of current and new generation typhoid vaccines.
Breaking the Illiteracy Bonds.
ERIC Educational Resources Information Center
Taylor, Susan Champlin
1988-01-01
Describes current research being done on literacy and discusses reasons behind the varied estimates of adult literacy rates. Presents several case studies of adults who are currently or have been in reading programs, particularly older adults. Describes the work of major literacy education providers. (CH)
Low Cost Sensors-Current Capabilities and Gaps
1. Present the findings from a recent technology review of gas and particulate phase sensors 2. Focus on the lower-cost sensors 3. Discuss current capabilities, estimated range of measurement, selectivity, deployment platforms, response time, and expected range of acceptabl...
ERIC Educational Resources Information Center
Thissen, David; Wainer, Howard
Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. The NSC is organ...
Quantitative software models for the estimation of cost, size, and defects
NASA Technical Reports Server (NTRS)
Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.
2002-01-01
The presentation will provide a brief overview of the SQI measurement program as well as describe each of these models and how they are currently being used in supporting JPL project, task and software managers to estimate and plan future software systems and subsystems.
Numerical approach for ECT by using boundary element method with Laplace transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enokizono, M.; Todaka, T.; Shibao, K.
1997-03-01
This paper presents an inverse analysis by using BEM with Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of a comparatively small crack.
Physics-of-Failure Approach to Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.
2017-01-01
As electric vehicles become increasingly common in daily operation, a critical challenge lies in accurately predicting the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. To tackle the prediction problem, it is essential to be aware of the current state and health of the system, especially since condition-based predictions must be performed. Predicting the future state of the system also requires knowledge of the current and future operations of the vehicle. In this presentation, our approach to developing a system-level health-monitoring safety indicator for different electronic components is presented, which runs estimation and prediction algorithms to determine state-of-charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
Current and estimated future atmospheric nitrogen loads to the Chesapeake Bay Watershed
Nitrogen deposition for CMAQ scenarios in 2011, 2017, 2023, 2028, and a 2048-2050 RCP 4.5 climate scenario will be presented for the watershed and tidal waters. Comparisons will be made between the 2017 Airshed Model and the previous 2010 Airshed Model estimates. In addition, atmosph...
The report presents estimates of the performance and cost of both powdered activated carbon (PAC) and multipollutant control technologies that may be useful in controlling mercury emissions. Based on currently available data, cost estimates for PAC injection range from 0.03-3.096 ...
NASA Astrophysics Data System (ADS)
Forsyth, C.; Rae, I. J.; Mann, I. R.; Pakhotin, I. P.
2017-03-01
Field-aligned currents (FACs) are a fundamental component of the coupled solar wind-magnetosphere-ionosphere system. By assuming that FACs can be approximated by stationary infinite current sheets that do not change on the spacecraft crossing time, single-spacecraft magnetic field measurements can be used to estimate the currents flowing in space. By combining data from multiple spacecraft on similar orbits, these stationarity assumptions can be tested. In this technical report, we present a new technique that combines cross correlation and linear fitting of multiple spacecraft measurements to determine the reliability of the FAC estimates. We show that this technique can identify those intervals in which the currents estimated from single-spacecraft techniques are both well correlated and have similar amplitudes, thus meeting the spatial and temporal stationarity requirements. Using data from the European Space Agency's Swarm mission from 2014 to 2015, we show that larger-scale currents (>450 km) are well correlated and have a one-to-one fit up to 50% of the time, whereas small-scale (<50 km) currents show similar amplitudes only 1% of the time despite there being a good correlation 18% of the time. It is thus imperative to examine both the correlation and amplitude of the calculated FACs in order to assess the validity of the underlying assumptions and hence, ultimately, the reliability of such single-spacecraft FAC estimates.
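The correlation-plus-amplitude test described above can be sketched as follows; the thresholds are illustrative choices, not the values used in the study:

```python
import numpy as np

def fac_stationarity_check(fac_a, fac_b, corr_min=0.8, slope_tol=0.2):
    """Assess whether two single-spacecraft FAC estimates over the same
    interval are consistent: well correlated AND similar in amplitude.
    Thresholds are illustrative, not the study's values."""
    fac_a, fac_b = np.asarray(fac_a, float), np.asarray(fac_b, float)
    corr = np.corrcoef(fac_a, fac_b)[0, 1]
    # Least-squares slope of fac_b vs fac_a; ~1 means similar amplitudes.
    slope = np.polyfit(fac_a, fac_b, 1)[0]
    return bool(corr >= corr_min and abs(slope - 1.0) <= slope_tol), corr, slope

t = np.linspace(0, 1, 200)
sheet = np.sin(2 * np.pi * 3 * t)            # idealized large-scale current sheet
ok, corr, slope = fac_stationarity_check(sheet, 0.95 * sheet + 0.01)
```

Two series differing only by a small gain and offset pass the check; uncorrelated or amplitude-mismatched series would fail it.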
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
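The rest-period fitting step can be illustrated with a first-order RC relaxation. The log-linear fit below is a generic sketch of the idea, not the authors' exact procedure:

```python
import numpy as np

def fit_rc_relaxation(t, v, v_inf):
    """Estimate first-order RC parameters from a rest-period voltage
    relaxation v(t) = v_inf - a*exp(-t/tau). Linearizes via the log of
    the residual voltage, assuming v_inf is known from the end of rest."""
    y = np.log(v_inf - v)                  # log residual: ln(a) - t/tau
    slope, intercept = np.polyfit(t, y, 1)
    tau = -1.0 / slope                     # RC time constant (s)
    a = np.exp(intercept)                  # initial overvoltage (V)
    return a, tau

t = np.linspace(0.0, 50.0, 100)
v = 3.7 - 0.2 * np.exp(-t / 12.0)          # synthetic rest-period data
a, tau = fit_rc_relaxation(t, v, 3.7)      # recovers a=0.2 V, tau=12 s
```

On noisy data a nonlinear fit (or the paper's improved initial-voltage expressions) would be preferable; the linearization is shown only for clarity.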
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...
2016-12-22
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
NASA Astrophysics Data System (ADS)
Heckman, S.
2015-12-01
Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. In the present study, 18 flashes triggered at ICLRT in 2014 were analyzed; they comprise 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum for the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and a lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected more return strokes than the old processor.
Estimates of Present and Future Flood Risk in the Conterminous United States
NASA Astrophysics Data System (ADS)
Wing, O.; Bates, P. D.; Smith, A.; Sampson, C. C.; Johnson, K.; Fargione, J.; Morefield, P.
2017-12-01
Past attempts to estimate flood risk across the USA either have incomplete coverage, coarse resolution, or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US (CONUS) with realistic flood physics to produce estimates of flood hazard which match to within 90% accuracy the skill of local models built with detailed data. Socio-economic data of commensurate resolution are combined with these flood depths to estimate current and future flood risk. Future population and land-use projections from the US Environmental Protection Agency (USEPA) are employed to indicate how flood risk might change through the 21st century, while present-day estimates utilize the Federal Emergency Management Agency (FEMA) National Structure Inventory and a USEPA map of population distribution. Our data show that the total CONUS population currently exposed to serious flooding is 2.6 to 3.1 times higher than previous estimates, with nearly 41 million Americans living within the so-called 1 in 100-year (1% annual probability) floodplain, compared to only 13 million according to FEMA flood maps. Moreover, socio-economic change alone leads to significant future increases in flood exposure and risk, even before climate change impacts are accounted for. The share of the population living on the 1 in 100-year floodplain is projected to increase from 13.3% in the present day to 15.6-15.8% in 2050 and 16.4-16.8% in 2100. The area of developed land within this floodplain, currently at 150,000 km2, is likely to increase by 37-72% by 2100 based on the scenarios selected. $5.5 trillion worth of assets currently lie on the 1% floodplain; we project that by 2100 this figure will exceed $10 trillion. With this detailed spatial information on present-day flood risk, federal and state agencies can take appropriate action to mitigate losses.
Use of USEPA population and land-use projections means that particular attention can be paid to floodplains where development is projected. Steps to conserve such areas or to ensure adequate defenses are in place could avoid the exposure of trillions of dollars of assets, not to mention the human suffering caused by loss of property and life.
A Secure Trust Establishment Scheme for Wireless Sensor Networks
Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob
2014-01-01
Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
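The robust aggregation step can be illustrated with a textbook one-step M-estimator (the unmodified Huber version, shown only to convey the idea; the paper's modified scheme differs in detail):

```python
def huber_one_step(values, c=1.345):
    """One-step M-estimator sketch for robust aggregation of trust
    recommendations: start at the median, take a single reweighting step
    with Huber weights. c=1.345 is the usual Huber tuning constant."""
    vals = sorted(values)
    n = len(vals)
    med = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    # MAD-based robust scale estimate
    dev = sorted(abs(v - med) for v in vals)
    mad = dev[n // 2] if n % 2 else 0.5 * (dev[n // 2 - 1] + dev[n // 2])
    s = 1.4826 * mad or 1.0
    # Huber weights: full weight inside c*s, downweighted outside
    w = [min(1.0, c * s / abs(v - med)) if v != med else 1.0 for v in vals]
    return sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)

# Honest recommendations near 0.8 plus one malicious outlier:
agg = huber_one_step([0.78, 0.80, 0.82, 0.79, 0.05])
```

The malicious recommendation of 0.05 is heavily downweighted, so the aggregate stays near the honest consensus of about 0.79 instead of being dragged down as a plain mean would be.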
NASA Astrophysics Data System (ADS)
Kosch, M. J.; Nielsen, E.
Two bistatic VHF radar systems, STARE and SABRE, have been employed to estimate ionospheric electric fields in the geomagnetic latitude range 61.1 - 69.3° (geographic latitude range 63.8 - 72.6°) over northern Scandinavia. 173 days of good backscatter from all four radars have been analysed during the period 1982 to 1986, from which the average ionospheric divergence electric field versus latitude and time is calculated. The average magnetic field-aligned currents are computed using an AE-dependent empirical model of the ionospheric conductance. Statistical Birkeland current estimates are presented for high and low values of the Kp and AE indices as well as positive and negative orientations of the IMF B z component. The results compare very favourably to other ground-based and satellite measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P.C.
1982-10-01
Given the potential significance of northern ecosystems to the global carbon budget it is critical to estimate the current carbon balance of these ecosystems as precisely as possible, to improve estimates of the future carbon balance if world climates change, and to assess the range of certainty associated with these estimates. As a first step toward quantifying some of the potential changes, a workshop with tundra and taiga ecologists and soil scientists was held in San Diego in March 1980. The first part of this report summarizes the conclusions of this workshop with regard to the estimate of the current areal extent and carbon content of the circumpolar arctic and the taiga, current rates of carbon accumulation in the peat in the arctic and the taiga, and predicted future carbon accumulation rates based on the present understanding of controlling processes and on the understanding of past climates and vegetation. This report presents a finer resolution of areal extents, standing crops, and production rates than was possible previously because of recent syntheses of data from the International Biological Program and current studies in the northern ecosystems, some of which have not yet been published. This recent information changes most of the earlier estimates of carbon content and affects predictions of the effect of climate change. The second part of this report outlines research needed to fill major gaps in the understanding of the role of northern ecosystems in global climate change.
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
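The rational-function step of MBPE can be illustrated with a generic linear least-squares fit to sampled frequency-response data; note that the paper instead derives the coefficients from frequency derivatives of the EFIE, so this is only a sketch of the underlying model form:

```python
import numpy as np

def fit_rational(f, h, num_deg=2, den_deg=2):
    """Fit h(f) ~ N(f)/D(f) with D's constant term fixed to 1, by linear
    least squares on N(f_i) - h_i*(D(f_i) - 1) = h_i. Generic MBPE-style
    rational model; degrees are illustrative."""
    A = np.hstack([
        np.vander(f, num_deg + 1, increasing=True),                    # N coeffs
        -(h[:, None]) * np.vander(f, den_deg + 1, increasing=True)[:, 1:],  # D coeffs
    ])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    num = coef[:num_deg + 1]
    den = np.concatenate([[1.0], coef[num_deg + 1:]])
    return num, den

f = np.linspace(1.0, 2.0, 40)
h = (1 + 0.5 * f) / (1 + 0.3 * f + 0.1 * f**2)     # synthetic smooth response
num, den = fit_rational(f, h)
# Evaluate the fitted rational function mid-band (polyval wants high-to-low order):
h_fit = np.polyval(num[::-1], 1.5) / np.polyval(den[::-1], 1.5)
```

Because the synthetic response is itself rational of matching degree, the fit reproduces it essentially exactly; the practical payoff is evaluating the cheap rational model at many frequencies instead of re-solving the MoM system at each one.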
The operational processing of wind estimates from cloud motions: Past, present and future
NASA Technical Reports Server (NTRS)
Novak, C.; Young, M.
1977-01-01
Current NESS winds operations provide approximately 1800 high quality wind estimates per day to about twenty domestic and foreign users. This marked improvement in NESS winds operations was the result of computer techniques development which began in 1969 to streamline and improve operational procedures. In addition, the launch of the SMS-1 satellite in 1974, the first in the second generation of geostationary spacecraft, provided an improved source of visible and infrared scanner data for the extraction of wind estimates. Currently, operational winds processing at NESS is accomplished by the automated and manual analyses of infrared data from two geostationary spacecraft. This system uses data from SMS-2 and GOES-1 to produce wind estimates valid for 00Z, 12Z and 18Z synoptic times.
Estimating ecosystem carbon stocks at Redwood National and State Parks
van Mantgem, Phillip J.; Madej, Mary Ann; Seney, Joseph; Deshais, Janelle
2013-01-01
Accounting for ecosystem carbon is increasingly important for park managers. In this case study we present our efforts to estimate carbon stocks and the effects of management on carbon stocks for Redwood National and State Parks in northern California. Using currently available information, we estimate that on average these parks’ soils contain approximately 89 tons of carbon per acre (200 Mg C per ha), while vegetation contains about 130 tons C per acre (300 Mg C per ha). Restoration activities at the parks (logging-road removal, second-growth forest management) were shown to initially reduce ecosystem carbon, but may provide for enhanced ecosystem carbon storage over the long term. We highlight currently available tools that could be used to estimate ecosystem carbon at other units of the National Park System.
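The per-acre and per-hectare figures quoted above can be checked with a quick unit conversion, assuming US short tons (the parenthetical metric values appear to be rounded):

```python
# Unit-conversion check for the quoted carbon densities:
# 1 US short ton = 0.90718 Mg, 1 acre = 0.40469 ha.
TON_PER_ACRE_TO_MG_PER_HA = 0.90718 / 0.40469   # ~ 2.2417

soil_mg_ha = 89 * TON_PER_ACRE_TO_MG_PER_HA     # ~ 199.5, rounds to 200 Mg C/ha
veg_mg_ha = 130 * TON_PER_ACRE_TO_MG_PER_HA     # ~ 291, quoted as ~300 Mg C/ha
```

The soil figure matches the quoted 200 Mg C per ha almost exactly; the vegetation figure comes out near 291 Mg C per ha, consistent with the "about 300" rounding in the text.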
State estimation for spacecraft power systems
NASA Technical Reports Server (NTRS)
Williamson, Susan H.; Sheble, Gerald B.
1990-01-01
A state estimator appropriate for spacecraft power systems is presented. Phasor voltage and current measurements are used to determine the system state. A weighted least squares algorithm with a multireference transmission cable model is used. Bad data are identified and resolved. Once the bad data have been identified, they are removed from the measurement set and the system state can be estimated from the remaining data. An observability analysis is performed on the remaining measurements to determine if the system state can be found from the reduced measurement set. An example of the algorithm for a sample spacecraft power system is presented.
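The core weighted-least-squares step of such an estimator can be sketched as follows (toy dimensions; the phasor measurement model, multireference cable model, and observability analysis are omitted):

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least squares state estimate x minimizing
    (z - H x)^T W (z - H x) with W = diag(1/sigma^2). Large residuals
    flag suspect (bad) measurements, as in the bad-data step above."""
    W = np.diag(1.0 / np.asarray(sigma, float) ** 2)
    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    residual = z - H @ x
    return x, residual

# Toy 2-state system with 3 measurements of differing accuracy;
# the third measurement is slightly inconsistent (3.1 vs the true 3.0):
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.1])
x, r = wls_state_estimate(H, z, sigma=[0.01, 0.01, 0.1])
```

Because the first two measurements are far more accurate, the estimate stays near (1, 2) and most of the inconsistency lands in the third residual, illustrating how weighting isolates poor data.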
Revisiting the social cost of carbon.
Nordhaus, William D
2017-02-14
The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
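The central-case trajectory implied by these numbers is simple compound growth; the base value and 3% rate come from the abstract, and the rest is arithmetic:

```python
def scc(year, base=31.0, base_year=2015, growth=0.03):
    """SCC trajectory implied by the abstract's central case:
    $31/tCO2 in 2015 (2010 US$) growing at 3% per year."""
    return base * (1.0 + growth) ** (year - base_year)

scc_2050 = scc(2050)   # roughly $87/tCO2 in 2010 US$
```

So the central case implies the real SCC nearly triples between 2015 and 2050.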
Revisiting the social cost of carbon
NASA Astrophysics Data System (ADS)
Nordhaus, William D.
2017-02-01
The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
Decoupling Intensity Radiated by the Emitter in Distance Estimation from Camera to IR Emitter
Cano-García, Angel E.; Galilea, José Luis Lázaro; Fernández, Pedro; Infante, Arturo Luis; Pompa-Chacón, Yamilet; Vázquez, Carlos Andrés Luna
2013-01-01
Various models using a radiometric approach have been proposed to solve the problem of estimating the distance between a camera and an infrared emitter diode (IRED). They depend directly on the radiant intensity of the emitter, set by the IRED bias current. As is known, this current exhibits a drift with temperature, which is transferred to the distance estimation method. This paper proposes an alternative approach to remove temperature drift in the distance estimation method by eliminating the dependence on radiant intensity. The main aim was to use the relative accumulated energy together with other defined models, such as the zeroth-frequency component of the FFT of the IRED image and the standard deviation of pixel gray level intensities in the region of interest containing the IRED image. By using the abovementioned models, an expression free of IRED radiant intensity was obtained. Furthermore, the final model permitted simultaneous estimation of the distance between the IRED and the camera and the IRED orientation angle. The alternative presented in this paper gave a 3% maximum relative error over a range of distances up to 3 m. PMID:23727954
Estimating Hedonic Price Indices for Ground Vehicles
2015-06-01
Institute for Defense Analyses. Estimating Hedonic Price Indices for Ground Vehicles (Presentation). David M. Tate; Stanley ...
pathChirp: Efficient Available Bandwidth Estimation for Network Paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottrell, Les
2003-04-30
This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of ''self-induced congestion,'' pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp does not require synchronized or highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
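The exponential flight pattern within a chirp can be sketched as a geometric sweep of probing rates; the spread factor of 1.2 below is an illustrative choice, not necessarily the tool's default:

```python
def chirp_rates(low_rate_bps, high_rate_bps, gamma=1.2):
    """Instantaneous probing rates of one pathChirp-style chirp: packet
    spacing shrinks geometrically by factor gamma, so the probing rate
    sweeps exponentially from low to high within a single chirp."""
    rates = [float(low_rate_bps)]
    while rates[-1] * gamma <= high_rate_bps:
        rates.append(rates[-1] * gamma)
    return rates

rates = chirp_rates(1e6, 10e6)   # probe rates sweeping 1 to ~8.9 Mb/s
```

A single chirp thus probes a whole decade of rates with one packet train; the rate at which queuing delay starts to build marks the self-induced-congestion point and hence the available bandwidth.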
River runoff estimates based on remotely sensed surface velocities
NASA Astrophysics Data System (ADS)
Grünler, Steffen; Stammer, Detlef; Romeiser, Roland
2010-05-01
One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, will permit ATI measurements in an experimental mode. Based on numerical simulations, we present findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated. A sampling strategy for river runoff estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at a selected test site. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model are used as the reference data set and as input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. To address the problem of aliasing, we removed tidal signals from the sampling data. Discharge estimates on the basis of measured surface current fields and river widths from TerraSAR-X are successfully simulated. The differences in the resulting net discharge estimates are between 30 and 55% for a required continuous observation period of one year. We discuss the applicability of the measuring strategies to a number of major rivers. Further, we show results of runoff estimates based on surface current fields retrieved from real TerraSAR-X ATI data (AS mode) for the Elbe river study area.
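A minimal discharge estimate from remotely sensed surface currents might look like the following sketch; the surface-to-depth-average factor alpha is a common textbook assumption, not a value from this study, and the numbers are purely illustrative:

```python
import numpy as np

def discharge_estimate(width_m, surface_speed_ms, depth_m, alpha=0.85):
    """River discharge sketch from remotely sensed surface currents:
    Q ~ alpha * mean(surface velocity) * cross-section area, where alpha
    converts surface speed to a depth-averaged speed (assumed value)."""
    return alpha * np.mean(surface_speed_ms) * width_m * depth_m

# Hypothetical 500 m wide, ~4 m deep reach with ~1 m/s surface currents:
q = discharge_estimate(500.0, np.array([0.9, 1.0, 1.1]), 4.0)   # m^3/s
```

Real retrievals must additionally supply the depth (e.g. from a channel model or altimetric water levels) and remove tidal signals, which is exactly where the 30-55% differences discussed above arise.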
NASA Technical Reports Server (NTRS)
Canfield, Stephen
1999-01-01
This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
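A single predict/update cycle of the Kalman filter at the heart of such a GPS/INS blend can be sketched in one dimension (illustrative noise levels and a toy constant-velocity model, not the X-33 configuration):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the discrete Kalman filter:
    propagate the state with the dynamics model, then correct it
    with the (GPS-like) position measurement."""
    # Predict with the dynamics model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
H = np.array([[1.0, 0.0]])                  # position-only measurement
Q, R = 1e-4 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):                         # truth: position = 2 m/s * t
    z = np.array([2.0 * dt * (k + 1)])
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

Even though only position is measured, the filter infers the velocity state from the measurement history, which is the essence of blending GPS fixes with an inertial dynamics model.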
Sixth Annual Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Lefferts, E. (Editor)
1981-01-01
Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satelliteborne instruments due to attitude instabilities. Image processing and correctional techniques are reported for geodetic measurements and mapping.
State Estimates of Disability in America. Disability Statistics Report 3.
ERIC Educational Resources Information Center
LaPlante, Mitchell P.
This study presents and discusses existing data on disability by state, from the 1980 and 1990 censuses, the Current Population Survey (CPS), and the National Health Interview Survey (NHIS). The study used direct methods for states with large sample sizes and synthetic estimates for states with low sample sizes. The study's highlighted findings…
NASA Astrophysics Data System (ADS)
Tarao, Hiroo; Miyamoto, Hironobu; Korpinen, Leena; Hayashi, Noriyuki; Isaka, Katsuo
2016-06-01
Most results regarding induced current in the human body related to electric field dosimetry have been calculated under uniform field conditions. We have found in previous work that a contact current is a more suitable way to evaluate induced electric fields, even in the case of exposure to non-uniform fields. If the relationship between induced currents and external non-uniform fields can be understood, induced electric fields in nervous system tissues may be able to be estimated from measurements of ambient non-uniform fields. In the present paper, we numerically calculated the induced electric fields and currents in a human model by considering non-uniform fields based on distortion by a cubic conductor under an unperturbed electric field of 1 kV m-1 at 60 Hz. We investigated the relationship between a non-uniform external electric field with no human present and the induced current through the neck, and the relationship between the current through the neck and the induced electric fields in nervous system tissues such as the brain, heart, and spinal cord. The results showed that the current through the neck can be formulated by means of an external electric field at the central position of the human head, and the distance between the conductor and the human model. As expected, there is a strong correlation between the current through the neck and the induced electric fields in the nervous system tissues. The combination of these relationships indicates that induced electric fields in these tissues can be estimated solely by measurements of the external field at a point and the distance from the conductor.
Violence exposure among children with disabilities.
Sullivan, Patricia M
2009-06-01
The focus of this paper is children with disabilities exposed to a broad range of violence types including child maltreatment, domestic violence, community violence, and war and terrorism. Because disability research must be interpreted on the basis of the definitional paradigm employed, definitions of disability status and current prevalence estimates as a function of a given paradigm are initially considered. These disability paradigms include those used in federal, education, juvenile justice, and health care arenas. Current prevalence estimates of childhood disability in the U.S. are presented within the frameworks of these varying definitions of disability status in childhood. Summaries of research from 2000 to 2008 on the four types of violence victimization addressed among children with disabilities are presented and directions for future research suggested.
Propellant Management and Conditioning within the X-34 Main Propulsion System
NASA Technical Reports Server (NTRS)
Brown, T. M.; McDonald, J. P.; Hedayat, A.; Knight, K. C.; Champion, R. H., Jr.
1998-01-01
The X-34 hypersonic flight vehicle is currently under development by Orbital Sciences Corporation (Orbital). The Main Propulsion System (MPS) has been designed around the liquid-propellant Fastrac rocket engine currently under development at NASA Marshall Space Flight Center. This paper presents analyses of the MPS subsystems used to manage the liquid propellants. These subsystems include the propellant tanks, the tank vent/relief subsystem, and the dump/fill/drain subsystem. Analyses include LOX tank chill and fill time estimates, LOX boil-off estimates, propellant conditioning simulations, and transient propellant dump simulations.
Estimation of electric fields and current from ground-based magnetometer data
NASA Technical Reports Server (NTRS)
Kamide, Y.; Richmond, A. D.
1984-01-01
Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from groundbased magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.
Estimating Advective Near-surface Currents from Ocean Color Satellite Images
2015-01-01
of surface current information. The present study uses the sequential ocean color products provided by the Geostationary Ocean Color Imager (GOCI) and...on the Suomi National Polar-Orbiting Partnership (S-NPP) satellite. The GOCI is the world's first geostationary orbit satellite sensor over the...used to extract the near-surface currents by the MCC algorithm. We not only demonstrate the retrieval of currents from the geostationary satellite ocean
River Runoff Estimates on the Basis of Satellite-Derived Surface Currents and Water Levels
NASA Astrophysics Data System (ADS)
Gruenler, S.; Romeiser, R.; Stammer, D.
2007-12-01
One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, permits current measurements by ATI in an experimental mode of operation. Based on numerical simulations, we present first findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated and a dedicated data synthesis system for river discharge estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at selected test sites. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model of the German Federal Waterways Engineering and Research Institute (BAW) are used as reference data set and input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. For example, runoff estimates on the basis of measured surface current fields and river widths from TerraSAR-X and water levels from radar altimetry are simulated. Despite the simplicity of some of the applied methods, the results provide quite comprehensive pictures of the Elbe river runoff dynamics. Although the satellite-based river runoff estimates exhibit a lower accuracy in comparison to traditional gauge measurements, the proposed measuring strategies are quite promising for the monitoring of river discharge dynamics in regions where only sparse in-situ measurements are available. We discuss the applicability to a number of major rivers around the world.
Gas Composition Sensing Using Carbon Nanotube Arrays
NASA Technical Reports Server (NTRS)
Li, Jing; Meyyappan, Meyya
2012-01-01
This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.
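The identification step described above — comparing each estimated pulse-discharge breakdown threshold voltage against a table of known values for candidate gases — can be sketched as follows. The threshold voltages and tolerance here are hypothetical placeholders for illustration, not measured constants from the sensor.

```python
# Hypothetical breakdown threshold voltages per candidate gas (placeholders).
KNOWN_THRESHOLDS_V = {
    "He": 160.0,
    "Ar": 240.0,
    "N2": 340.0,
}

def identify_gas(estimated_threshold_v, tolerance_v=20.0):
    """Return the candidate gas whose known breakdown threshold is closest
    to the estimated one, or None if no candidate is within tolerance."""
    gas, known = min(KNOWN_THRESHOLDS_V.items(),
                     key=lambda kv: abs(kv[1] - estimated_threshold_v))
    return gas if abs(known - estimated_threshold_v) <= tolerance_v else None
```

With these placeholder values, an estimated threshold of 245 V would be matched to "Ar"; repeating the pulse sequence at higher voltages and calling the same matcher on the next estimated threshold corresponds to detecting a second gas component.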
ERIC Educational Resources Information Center
HATFIELD, ELIZABETH M.
Current estimates and some trend data are presented on the following subjects: population growth (1940-1960), prevalence of legal blindness, new cases of legal blindness, age distribution of legally blind persons, causes of legal blindness, changing patterns in causes of legal blindness, cases of glaucoma, school children needing eye care,…
Annual losses from disease in Pacific Northwest forests.
T.W. Childs; K.R. Shea
1967-01-01
This report presents current estimates of annual disease impact on forest productivity of Oregon and Washington. It is concerned exclusively with losses of timber volumes and of potential timber growth in today's forests. Annual loss from disease in this region is estimated at 3,133 million board feet or 403 million cubic feet. This is about 13 percent...
Revisiting the social cost of carbon
Nordhaus, William D.
2017-01-01
The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources. PMID:28143934
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close-to-quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
NASA Astrophysics Data System (ADS)
Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung
2005-12-01
This paper deals with rotor position and speed estimation for a permanent-magnet synchronous motor (PMSM). By measuring the phase voltages and currents of the PMSM drive, two observers based on diagonally recurrent neural networks (DRNNs) were developed: a neural current observer and a neural velocity observer. Because a DRNN has self-feedback of the hidden neurons, its outputs contain the whole past information of the system even when its inputs are only the present states and inputs of the system; the structure of a DRNN can therefore be simpler than that of feedforward or fully recurrent neural networks. Training the DRNN by backpropagation, however, suffers from slow convergence. To reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and that RPE-based training requires a shorter computation time than backpropagation-based training.
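A minimal sketch of the diagonal self-feedback that distinguishes a DRNN from feedforward and fully recurrent networks is given below. Layer sizes, weight scales, and the input/output pairing (two measured currents in, one estimated quantity out) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class DRNN:
    """Diagonally recurrent network: each hidden neuron feeds back only to
    itself (a diagonal recurrent weight vector rather than a full recurrent
    matrix), so the hidden state carries past information with far fewer
    weights than a fully recurrent layer."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.w_self = rng.normal(scale=0.1, size=n_hidden)   # diagonal feedback
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.h = np.zeros(n_hidden)                          # persistent state

    def step(self, x):
        # Each neuron sees only its own previous activation, not its peers'.
        self.h = np.tanh(self.W_in @ x + self.w_self * self.h)
        return self.W_out @ self.h

net = DRNN(n_in=2, n_hidden=8, n_out=1)    # e.g. two phase currents in
y = net.step(np.array([0.5, -0.2]))        # one estimated quantity out
```

Feeding measured samples through `step` one at a time is what lets the observer track a time-varying signal; the paper's RPE training of these weights is not reproduced here.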
[Work quota setting and man-hour productivity estimation in pathologists].
Svistunov, V V; Makarov, S V; Makarova, A E
The paper considers the development and current state of the regulation of work quota setting and remuneration in pathologists. Reasoning from the current staff standards for morbid anatomy departments (units), the authors present a method to calculate the load of pathologists. The essence of the proposed method is demonstrated using a specific example.
Forest Resources of the United States, 1997
W. Brad Smith; John S. Vissage; David R. Darr; Raymond M. Sheffield
2001-01-01
Forest resource statistics from the 1987 Resources Planning Act (RPA) Assessment were updated to 1997 to provide current information on the Nation's forests. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or State. Current resource data are analyzed and...
Forest Resources of the United States, 2007
W. Brad Smith, tech. coord.; Patrick D. Miles, data coord.; Charles H. Perry, map coord.; Scott A. Pugh, Data CD coord.
2009-01-01
Forest resource statistics from the 2000 Resources Planning Act (RPA) Assessment were updated to provide current information on the Nation's forests. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or State. Current resource data and trends are analyzed...
A model of annular linear induction pumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Momozaki, Yoichi
2016-10-27
The present work explains how the magnetic field and the induced current are obtained when the distributed coils are powered by a three-phase power supply. From the magnetic field and the induced current, the thrust and the induction losses in the pump can be calculated to estimate the pump performance.
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online, real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel cage induction motors under light-load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency using an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target through Simulink Real-Time. Evaluation of the detection threshold and the detectability of faults under different conditions of load and fault severity is carried out using the empirical cumulative distribution function.
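The conditioning-then-estimation idea can be illustrated on synthetic data: the dominant fundamental is removed by a least-squares sinusoid fit (standing in here for the extended-Kalman signal conditioner), after which the small sideband amplitude is read off with a single-bin DFT (standing in for the Rayleigh-quotient estimator). All frequencies and amplitudes below are made up for illustration.

```python
import numpy as np

fs, T = 2000.0, 4.0                          # sampling rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
f0, f_sb = 50.0, 46.0                        # fundamental and a lower BRB sideband
i_stator = 10.0 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * f_sb * t)

# Remove the dominant fundamental with a least-squares sinusoid fit.
A = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, i_stator, rcond=None)
residual = i_stator - A @ coef

# Single-bin DFT at the sideband frequency: amplitude = 2|X(f)| / N.
X = np.exp(-2j * np.pi * f_sb * t) @ residual
amp_sb = 2.0 * abs(X) / len(t)               # recovers the ~0.05 A sideband
```

The sideband is 200 times weaker than the fundamental, yet its amplitude is recovered cleanly once the fundamental is subtracted — the same motivation the paper gives for conditioning the signal before spectral estimation.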
The design of nonlinear observers for wind turbine dynamic state and parameter estimation
NASA Astrophysics Data System (ADS)
Ritter, B.; Schild, A.; Feldt, M.; Konigorski, U.
2016-09-01
This contribution addresses the dynamic state and parameter estimation problem which arises with more advanced wind turbine controllers. These control devices need precise information about the system's current state to outperform conventional industrial controllers effectively. First, the necessity of a profound scientific treatment on nonlinear observers for wind turbine application is highlighted. Secondly, the full estimation problem is introduced and the variety of nonlinear filters is discussed. Finally, a tailored observer architecture is proposed and estimation results of an illustrative application example from a complex simulation set-up are presented.
Marella, Richard L.; Dixon, Joann F.
2015-09-18
The irrigated acreage estimated for Jackson County in 2014 (31,608 acres) is about 47 percent higher than the 2012 estimated acreage published by the USDA (21,508 acres). The estimates of irrigated acreage field verified during 2014 for Calhoun and Gadsden Counties are also higher than those published by the USDA for 2012 (86 percent and 71 percent, respectively). In Calhoun County the USDA reported 1,647 irrigated acres while the current study estimated 3,060 acres, and in Gadsden County the USDA reported 2,650 acres while the current study estimated 4,547 acres. For Houston County the USDA-reported value of 9,138 acres in 2012 was 13 percent below the 10,333 acres field verified in the current study. Differences between the USDA 2012 values and the 2014 field-verified estimates may occur because (1) irrigated acreage for some specific crops increased or decreased substantially during the 2-year interval due to commodity prices or economic changes, (2) irrigated acreage calculated for the current study may be overestimated because any field with an irrigation system present was assumed to be irrigated and counted as such, when in fact some farmers may not have used their irrigation systems during this growing period even if they had a crop in the field, or (3) the amount of irrigated acreage published by the USDA for selected crops may be underestimated in some cases.
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of past input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted from the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and that the predicted RDT can address range anxiety issues.
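The final step — turning an SOC estimate and a predicted future current profile into an RDT — can be sketched as a simple integration, leaving aside the paper's RLS/UKF and wavelet machinery. All numbers are illustrative.

```python
def remaining_dischargeable_time(soc, capacity_ah, predicted_current_a, dt_s):
    """Walk a predicted discharge-current profile (amps, one value per dt_s
    seconds) until the remaining charge implied by the SOC estimate is
    exhausted; returns the remaining dischargeable time in seconds."""
    charge_as = soc * capacity_ah * 3600.0   # remaining charge, ampere-seconds
    elapsed_s = 0.0
    for i in predicted_current_a:
        drawn = i * dt_s
        if drawn >= charge_as:
            return elapsed_s + charge_as / i  # cell empties mid-step
        charge_as -= drawn
        elapsed_s += dt_s
    return elapsed_s  # profile ended before the cell was empty

# Illustrative: a 2 Ah cell at 50% SOC under a constant 2 A predicted draw.
rdt_s = remaining_dischargeable_time(0.5, 2.0, [2.0] * 4000, 1.0)  # 1800 s
```

In the paper's framework, the constant profile above would be replaced by currents predicted from the wavelet analysis of past input currents, and the SOC by the UKF estimate at the prediction point.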
Migration plans and hours of work in Malaysia.
Gillin, E D; Sumner, D A
1985-01-01
"This article describes characteristics of prospective migrants in the Malaysian Family Life Survey and investigates how planning to move affects hours of work. [The authors] use ideas about intertemporal substitution...to discuss the response to temporary and permanent wage expectations on the part of potential migrants. [An] econometric section presents reduced-form estimates for wage rates and planned migration equations and two-stage least squares estimates for hours of work. Men currently planning a move were found to work fewer hours. Those originally planning only a temporary stay at their current location work more hours." excerpt
Saucedo-Espinosa, Mario A.; Lapizco-Encinas, Blanca H.
2016-01-01
Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making accurate estimation of the EO mobility difficult. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. The use of this parallel approach is particularly useful in the estimation of the EO mobility in systems where low-conductivity media are required, such as insulator-based dielectrophoresis devices. PMID:27375813
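The way an EO mobility estimate falls out of a current-monitoring run can be sketched as follows: the time for the current to reach its new plateau is the time the auxiliary solution needs to traverse the channel at velocity mu*E with E = V/L, giving mu = L^2/(V*t). The values in the usage line are illustrative, not from the paper.

```python
def eo_mobility(channel_length_m, applied_voltage_v, displacement_time_s):
    """Electroosmotic mobility from a current-monitoring displacement time:
    the solution front crosses the channel in time t at velocity mu * E,
    with E = V / L, so mu = L / (t * E) = L**2 / (V * t)."""
    return channel_length_m ** 2 / (applied_voltage_v * displacement_time_s)

# Illustrative numbers: 1 cm channel, 1 kV applied, plateau reached in 2.5 s.
mu = eo_mobility(0.01, 1000.0, 2.5)   # -> 4e-8 m^2 V^-1 s^-1
```

The noisier the current trace, the harder it is to locate the plateau crossing time t — which is exactly where the paper's parallel-channel scheme, by raising the signal-to-noise ratio, improves the estimate.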
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The inaccuracies reported, about 10 percent and 6 percent on daily and monthly time scales, respectively, are useful to model oceanic and terrestrial primary productivity. At those time scales variability due to clouds in the ratio of PAR and insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.
System for estimating fatigue damage
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng
In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser, including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual riser components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.
ERIC Educational Resources Information Center
Kelava, Augustin; Nagengast, Benjamin
2012-01-01
Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent…
Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang
2016-01-01
Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information; if the current sensor fails, however, the SOE estimation may suffer large errors. Therefore, this paper makes the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence of the fault can be eliminated and compensated, so that the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately, and that the SOE can also be estimated accurately with an estimation error influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists. PMID:27548183
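The compensation idea — subtracting the observer's fault estimate from the measured current before it enters the SOE bookkeeping — can be sketched as below. The PIO itself is not reproduced; the fault estimate is taken as given, and all numbers are illustrative.

```python
def compensated_soe(soe_prev, i_measured_a, f_hat_a, capacity_wh, voltage_v, dt_s):
    """One energy-counting step using the fault-corrected current.

    f_hat_a is the observer's estimate of the current-sensor fault (bias);
    removing it keeps the SOE update accurate even with a faulty sensor."""
    i_true_a = i_measured_a - f_hat_a                  # corrected current
    energy_wh = voltage_v * i_true_a * dt_s / 3600.0   # energy drawn this step
    return soe_prev - energy_wh / capacity_wh

# Illustrative step: 11 A reading with a 1 A estimated bias, 100 Wh pack
# at 10 V, over a 360 s interval: SOE drops from 0.50 to 0.40.
soe_next = compensated_soe(0.5, 11.0, 1.0, 100.0, 10.0, 360.0)
```

Without the correction, the same step would use the faulty 11 A reading and over-discount the SOE — the error the paper's combined fault-detection/estimation scheme is designed to avoid.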
Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang
2016-08-19
Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information; if the current sensor fails, however, the SOE estimation may suffer large errors. Therefore, this paper makes the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence of the fault can be eliminated and compensated, so that the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately, and that the SOE can also be estimated accurately with an estimation error influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.
An overall estimation of losses caused by diseases in the Brazilian fish farms.
Tavares-Dias, Marcos; Martins, Maurício Laterça
2017-12-01
Parasitic and infectious diseases are common in finfish, but their economic impact on production is difficult to estimate accurately in a country of large dimensions like Brazil. The aim of this study was to estimate the costs of economic losses of finfish due to mortality from diseases in Brazil. A model for estimating the costs related to parasitic and bacterial diseases in farmed fish, and an estimate of these economic impacts, are presented. We used official data on production and mortality of finfish for a rough estimation of economic losses. The losses presented here relate to direct and indirect economic costs for freshwater farmed fish, estimated at US$ 84 million per year. Finally, it was possible to establish for the first time an estimate of overall losses in finfish production in Brazil using available production data. This estimate should help researchers and policy makers to approximate the economic costs of diseases for the fish farming industry, as well as in the development of public policies on disease control measures and priority research lines.
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary-state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented, employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple-points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peak. It is argued that the peaking of the maximal current is due to correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers. (b) Monte Carlo estimates of the triple-points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for entry and exit current, respectively, as functions of interaction strength show maxima for α*, whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple-point for monomers show a similar trend to the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show the opposite trend to the Monte Carlo data.
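For context, a plain TASEP of non-interacting monomers with open boundaries can be simulated in a few lines; with entry and exit rates at 1/2 the system sits at the maximal-current point, where mean-field theory gives J = 1/4. Lattice size and sweep counts below are chosen for speed, not accuracy, and this sketch does not include the k-mer interactions studied in the paper.

```python
import random

def tasep_current(L=100, alpha=0.5, beta=0.5, sweeps=10000, seed=1):
    """Random-sequential TASEP of monomers on an open chain of L sites.

    One sweep attempts each of the L+1 'bonds' once on average:
    bond 0 = injection, bonds 1..L-1 = internal hops, bond L = ejection.
    Returns the exit current measured after discarding a transient."""
    random.seed(seed)
    lattice = [0] * L
    exits = 0
    warmup = sweeps // 2
    for sweep in range(sweeps):
        for _ in range(L + 1):
            bond = random.randrange(L + 1)
            if bond == 0:                              # injection at the left
                if lattice[0] == 0 and random.random() < alpha:
                    lattice[0] = 1
            elif bond == L:                            # ejection at the right
                if lattice[-1] == 1 and random.random() < beta:
                    lattice[-1] = 0
                    if sweep >= warmup:
                        exits += 1
            elif lattice[bond - 1] == 1 and lattice[bond] == 0:
                lattice[bond - 1], lattice[bond] = 0, 1  # hop right
    return exits / (sweeps - warmup)

j_max = tasep_current()   # should come out near the mean-field value 1/4
```

Extending the hop, injection, and ejection rules to k-site particles with nearest-neighbour interaction energies is what the paper's Monte Carlo comparison against the Takahashi mean-field estimates amounts to.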
Biodiversity: past, present, and future
NASA Technical Reports Server (NTRS)
Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)
1997-01-01
Data from the fossil record are used to illustrate biodiversity in the past and to estimate modern biodiversity and loss. These data are used to compare current rates of extinction with past extinction events. Paleontologists are encouraged to use these data to understand the course and consequences of current losses and to share this knowledge with researchers interested in conservation and ecology.
Measurement of impulse current using polarimetric fiber optic sensor
NASA Astrophysics Data System (ADS)
Ginter, Mariusz
2017-08-01
In this paper, a polarimetric current sensing solution for measuring currents of high amplitude and short duration is presented. This type of sensor does not introduce additional resistance or inductance into the circuit, which is desirable in this type of measurement. The magneto-optic element is a fiber-optic coil made of spun optical fiber. A fiber in which the core is twisted around its axis is characterized by a small sensitivity to interfering quantities, i.e., mechanical vibrations and pressure changes acting on the polarimeter. The presented polarimetric current sensor is completely fiber-optic. Experimental results of the proposed sensor design operating at 1550 nm, and methods of eliminating influence quantities on the fiber-optic current sensor, are presented. The sensor was used to measure impulse current. The generated current pulses are characterized by a duration of 23 μs and amplitudes ranging from 1 to 3.5 kA. The currents in the discharge circuit are shown. The measurement uncertainty of the amplitude of the electric current in the range of measured impulses was determined and estimated to be no more than 2%.
Magnetospheric Multiscale (MMS) Mission Attitude Ground System Design
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Superfin, Emil; Raymond, Juan C.
2011-01-01
This paper presents an overview of the attitude ground system (AGS) currently under development for the Magnetospheric Multiscale (MMS) mission. The primary responsibilities for the MMS AGS are definitive attitude determination, validation of the onboard attitude filter, and computation of certain parameters needed to improve maneuver performance. For these purposes, the ground support utilities include attitude and rate estimation for validation of the onboard estimates, sensor calibration, inertia tensor calibration, accelerometer bias estimation, center of mass estimation, and production of a definitive attitude history for use by the science teams. Much of the AGS functionality already exists in utilities used at NASA's Goddard Space Flight Center with support heritage from many other missions, but new utilities are being created specifically for the MMS mission, such as for the inertia tensor, accelerometer bias, and center of mass estimation. Algorithms and test results for all the major AGS subsystems are presented here.
Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan
2014-08-01
Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole-body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by the following equation: [Formula: see text]. Results indicate consistently lower estimated facial skin temperatures and consequently higher WCETs than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in the estimation of probabilities for the occurrence of risks of frostbite. Predicted weather combinations for probabilities of "Practically no risk of frostbite for most people," for less than 5% risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures below -35 °C, the presently calculated weather combination of 40 km h-1/-35 °C, at which the transition to a risk of incurring frostbite in less than 2 min occurs, is less conservative than that published: 60 km h-1/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and risk probabilities for frostbite over those currently practiced.
NASA Astrophysics Data System (ADS)
Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan
2014-08-01
Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole-body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by an equation (not reproduced in this record). Results indicate consistently lower estimated facial skin temperatures and consequently higher WCETs than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in the estimation of probabilities for the occurrence of risks of frostbite. Predicted weather combinations for probabilities of "Practically no risk of frostbite for most people," for less than 5% risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures below -35 °C, the presently calculated weather combination of 40 km h-1/-35 °C, at which the transition to a risk of incurring frostbite in less than 2 min occurs, is less conservative than that published: 60 km h-1/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and risk probabilities for frostbite over those currently practiced.
Galactic cosmic ray transport methods and radiation quality issues
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.
1992-01-01
An overview of galactic cosmic ray (GCR) interaction and transport methods, as implemented in the Langley Research Center GCR transport code, is presented. Representative results for solar minimum, exo-magnetospheric GCR dose equivalents in water are presented on a component-by-component basis for various thicknesses of aluminum shielding. The impact of proposed changes to the currently used quality factors on exposure estimates and shielding requirements is quantified. Using the cellular track model of Katz, estimates of relative biological effectiveness (RBE) for the mixed GCR radiation fields are also made.
Effect of present technology on airship capabilities
NASA Technical Reports Server (NTRS)
Madden, R. T.
1975-01-01
The effect of updating past airship designs with current materials and propulsion systems is presented, to determine new airship performance and productivity capabilities. New materials and power plants permit reductions in empty weight and increases in the useful load capability of past airship designs. The increased useful load capability results in increased productivity for a given range, i.e., increased payload at the same operating speed, increased operating speed for the same payload weight, or a combination of both. Estimated investment costs and operating costs are presented to indicate the significant cost parameters in estimating transportation costs of payloads in cents per ton-mile. Investment costs are presented for production lots of 1, 10, and 100 units. Operating costs are presented for a range of flight speeds and ranges.
On the recovery of electric currents in the liquid core of the Earth
NASA Astrophysics Data System (ADS)
Kuslits, Lukács; Prácser, Ernő; Lemperger, István
2017-04-01
Inverse geodynamo modelling has become a standard method for obtaining a more accurate image of the processes within the outer core. This poster presents excerpts from the preliminary results of another approach, which concerns the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. The flow of charge can be approximated by systems of various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolizing the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of the density of waterfowl nesting sites in marshes, and is currently being used in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has emerged only in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
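The basic estimator this framework builds on can be sketched in a few lines. This is the textbook version with a half-normal detection function, not the paper's log-linear estimator, and the distances below are made up: density is D = n / (2 L w), where w is the effective strip half-width implied by the fitted detection function.

```python
import math

def density_estimate(perp_distances_m, transect_length_m):
    """Line-transect density estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)).  The effective strip
    half-width is w = sigma * sqrt(pi/2), and D = n / (2 * L * w).
    (Textbook estimator, not the paper's log-linear model.)"""
    n = len(perp_distances_m)
    sigma2 = sum(x * x for x in perp_distances_m) / n   # MLE of sigma^2
    w = math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)    # effective half-width
    return n / (2.0 * transect_length_m * w)            # animals per m^2

# Hypothetical perpendicular sighting distances along a 1 km transect:
d = density_estimate([2.0, 5.0, 8.0, 1.0, 4.0, 6.0], 1000.0)
```

The log-linear extension replaces the parametric detection function with a model fitted to grouped distance data, but the final density calculation has this same structure.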
NASA Astrophysics Data System (ADS)
Ohara, Masaki; Noguchi, Toshihiko
This paper describes a new method for rotor-position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). The method features an MRAS in the current control loop to estimate rotor speed and position using only current sensors. Like almost all conventional methods, it incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced-voltage constant. It is therefore important to investigate how deviations in these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from their nominal values, and derives the relationship between the estimated position and the parameter deviations in steady state. Finally, experimental results are presented to show the performance and effectiveness of the proposed method.
Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Munoz, Cesar A.
2007-01-01
This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.
Estimating psychiatric manpower requirements based on patients' needs.
Faulkner, L R; Goldman, C R
1997-05-01
To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
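The sensitivity to assumptions described above is, at bottom, simple arithmetic: available hours of psychiatric service per patient per year fall directly out of three assumed quantities. A minimal sketch, with all inputs hypothetical rather than the article's figures:

```python
# Sketch of the service-hours arithmetic; every input is hypothetical.
fte_psychiatrists = 40_000          # assumed FTE supply
clinical_hours_per_fte = 1_400      # assumed direct-care hours per year
patients_in_treatment = 10_000_000  # assumed treated population

hours_per_patient = (fte_psychiatrists * clinical_hours_per_fte
                     / patients_in_treatment)
print(hours_per_patient)  # hours of service per patient per year
```

Halving the assumed treated population doubles the result, which is why the article stresses that small differences in assumptions mean large differences in estimates.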
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there is also a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study indicates that the proposed approach works well in practical situations. An illustrative example is also provided.
Was Venus the first habitable world of our solar system?
NASA Astrophysics Data System (ADS)
Way, M. J.; Del Genio, Anthony D.; Kiang, Nancy Y.; Sohl, Linda E.; Grinspoon, David H.; Aleinov, Igor; Kelley, Maxwell; Clune, Thomas
2016-08-01
Present-day Venus is an inhospitable place with surface temperatures approaching 750 K and an atmosphere 90 times as thick as Earth's. Billions of years ago the picture may have been very different. We have created a suite of 3-D climate simulations using topographic data from the Magellan mission, solar spectral irradiance estimates for 2.9 and 0.715 Gya, present-day Venus orbital parameters, an ocean volume consistent with current theory, and an atmospheric composition estimated for early Venus. Using these parameters we find that such a world could have had moderate temperatures if Venus had a prograde rotation period slower than ~16 Earth days, despite an incident solar flux 46-70% higher than Earth receives. At its current rotation period, Venus's climate could have remained habitable until at least 0.715 Gya. These results demonstrate the role rotation and topography play in understanding the climatic history of Venus-like exoplanets discovered in the present epoch.
Was Venus the First Habitable World of our Solar System?
Way, M J; Del Genio, Anthony D; Kiang, Nancy Y; Sohl, Linda E; Grinspoon, David H; Aleinov, Igor; Kelley, Maxwell; Clune, Thomas
2016-08-28
Present-day Venus is an inhospitable place with surface temperatures approaching 750 K and an atmosphere 90 times as thick as Earth's. Billions of years ago the picture may have been very different. We have created a suite of 3-D climate simulations using topographic data from the Magellan mission, solar spectral irradiance estimates for 2.9 and 0.715 Gya, present-day Venus orbital parameters, an ocean volume consistent with current theory, and an atmospheric composition estimated for early Venus. Using these parameters we find that such a world could have had moderate temperatures if Venus had a rotation period slower than ~16 Earth days, despite an incident solar flux 46-70% higher than Earth receives. At its current rotation period, Venus's climate could have remained habitable until at least 715 million years ago. These results demonstrate the role rotation and topography play in understanding the climatic history of Venus-like exoplanets discovered in the present epoch.
Was Venus the First Habitable World of our Solar System?
Way, M. J.; Del Genio, Anthony D.; Kiang, Nancy Y.; Sohl, Linda E.; Grinspoon, David H.; Aleinov, Igor; Kelley, Maxwell; Clune, Thomas
2017-01-01
Present-day Venus is an inhospitable place with surface temperatures approaching 750 K and an atmosphere 90 times as thick as Earth's. Billions of years ago the picture may have been very different. We have created a suite of 3-D climate simulations using topographic data from the Magellan mission, solar spectral irradiance estimates for 2.9 and 0.715 Gya, present-day Venus orbital parameters, an ocean volume consistent with current theory, and an atmospheric composition estimated for early Venus. Using these parameters we find that such a world could have had moderate temperatures if Venus had a rotation period slower than ~16 Earth days, despite an incident solar flux 46−70% higher than Earth receives. At its current rotation period, Venus’s climate could have remained habitable until at least 715 million years ago. These results demonstrate the role rotation and topography play in understanding the climatic history of Venus-like exoplanets discovered in the present epoch. PMID:28408771
ERIC Educational Resources Information Center
DeNavas-Walt, Carmen; Proctor, Bernadette D.; Smith, Jessica C.
2013-01-01
This report presents data on income, poverty, and health insurance coverage in the United States based on information collected in the 2013 and earlier Current Population Survey Annual Social and Economic Supplements (CPS ASEC) conducted by the U.S. Census Bureau. For most groups, the 2012 income, poverty, and health insurance estimates were not…
NASA Technical Reports Server (NTRS)
Kuhn, Richard E.; Bellavia, David C.; Corsiglia, Victor R.; Wardwell, Douglas A.
1991-01-01
Currently available methods for estimating the net suckdown induced on jet V/STOL aircraft hovering in ground effect are based on a correlation of available force data and are, therefore, limited to configurations similar to those in the data base. Experience with some of these configurations has shown that both the fountain lift and additional suckdown are overestimated but these effects cancel each other for configurations within the data base. For other configurations, these effects may not cancel and the net suckdown could be grossly overestimated or underestimated. Also, present methods do not include the prediction of the pitching moments associated with the suckdown induced in ground effect. An attempt to develop a more logically based method for estimating the fountain lift and suckdown based on the jet-induced pressures is initiated. The analysis is based primarily on the data from a related family of three two-jet configurations (all using the same jet spacing) and limited data from two other two-jet configurations. The current status of the method, which includes expressions for estimating the maximum pressure induced in the fountain regions, and the sizes of the fountain and suckdown regions is presented. Correlating factors are developed to be used with these areas and pressures to estimate the fountain lift, the suckdown, and the related pitching moment increments.
Carpooling : Status and Potential
DOT National Transportation Integrated Search
1975-06-01
The report contains the findings of studies conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented...
NASA Technical Reports Server (NTRS)
Sielken, R. L., Jr. (Principal Investigator)
1981-01-01
Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and the amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure; (2) to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.
NASA Astrophysics Data System (ADS)
Larkin, Serguey Y.; Anischenko, Serguei E.; Kamyshin, Vladimir A.
1996-12-01
The frequency and power measurement technique using the ac Josephson effect is founded on the deviation of the voltage-current (V-I) curve of an irradiated Josephson junction from its autonomous V-I curve [1]. For harmonic incident radiation, the technique may be characterized as follows: to measure the frequency of the harmonic microwave signal irradiating the Josephson junction and to estimate its intensity through functional processing of the V-I curves, one should identify the "special feature existence" zone on the V-I curves. This zone results from the junction's response to the incident radiation. One must then determine the coordinate of the central point of this zone on the curve and estimate the deviation of the irradiated junction's V-I curve from its autonomous V-I curve. Practical implementation of this technique offers a number of algorithms that enable frequency measurement and intensity estimation with a particular accuracy for the incident radiation. This paper presents two rational algorithms, determines their respective merits and disadvantages, and selects the more suitable one.
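The physical relation behind the technique is the ac Josephson effect: radiation at frequency f produces constant-voltage (Shapiro) features at V_n = n·h·f/(2e), so locating such a feature at voltage V yields the frequency as f = 2eV/(nh). A hedged numerical sketch (the paper's processing algorithms are more involved than this one-line inversion):

```python
# Frequency from the voltage of an ac-Josephson "special feature":
# V_n = n * h * f / (2e)  =>  f = 2 * e * V / (n * h).
# Constants are the exact 2019 SI values.
E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s

def frequency_from_step(v_step, n=1):
    """Incident-radiation frequency (Hz) from the n-th order feature."""
    return 2.0 * E * v_step / (n * H)

# A first-order feature observed near 2.068 microvolts implies ~1 GHz:
f = frequency_from_step(2.0678e-6)
```

The conversion factor 2e/h is about 483.6 GHz/mV, which is what makes Josephson junctions attractive as voltage-to-frequency transducers.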
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach for finding the message bits hidden in steganographic images, though not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random fields to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive with current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
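The overall pipeline is: estimate the cover, compare its least significant bits against the stego image's, and accumulate disagreements across many images so that embedding positions stand out. A toy sketch using a 3x3 local mean as a crude cover estimator (a deliberate stand-in for the paper's Markov-random-field estimator; the function and data are illustrative only):

```python
import numpy as np

def locate_payload(stego_images):
    """Toy payload-location sketch: estimate each cover with a 3x3 local
    mean (a crude stand-in for an MRF cover estimator), flag pixels whose
    LSB disagrees with the rounded estimate, and accumulate the flags
    across images; frequently modified pixels accumulate votes."""
    votes = np.zeros(stego_images[0].shape, dtype=float)
    for img in stego_images:
        f = img.astype(float)
        est = sum(np.roll(np.roll(f, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        disagrees = (img.astype(int) & 1) != (np.round(est).astype(int) & 1)
        votes += disagrees
    return votes / len(stego_images)  # per-pixel fraction of images flagged

rng = np.random.default_rng(0)
stegos = [rng.integers(0, 256, size=(32, 32)) for _ in range(20)]
fraction = locate_payload(stegos)
```

With a better cover estimator, the disagreement rate at unmodified pixels drops, which is exactly why the paper's MRF model improves location accuracy.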
NASA Astrophysics Data System (ADS)
Dottori, Marcelo; Castro, Belmiro Mendes
2018-06-01
Data analysis of continental shelf currents and coastal sea level, together with the application of a semi-analytical model, are used to estimate the importance of remote wind forcing on the subinertial variability of the current in the central and northern areas of the South Brazil Bight (SBB). Results from both the data analysis and the semi-analytical model are robust in showing subinertial variability that propagates along-shelf leaving the coast to the left, in accordance with theoretical studies of Continental Shelf Waves (CSW). The subinertial variability observed in both along-shelf currents and sea level oscillations presents different propagation speeds for the narrow northern part of the SBB (~6-7 m/s) and the wide central SBB region (~11 m/s), those estimates being in agreement with the modeled CSW propagation speed. On the inner and middle shelf, observed along-shelf subinertial currents show higher correlation coefficients with winds located southward and earlier in time than with the local wind at the current meter mooring position and at the time of measurement. The inclusion of the remote (southwestward) wind forcing improves the prediction of the subinertial currents compared to forcing by the local wind alone, since the along-shelf modeled currents present correlation coefficients with observed along-shelf currents up to 20% higher on the inner and middle shelf when the remote wind is included. For most of the outer shelf, on the other hand, this is not observed, since the correlation between the currents and the synoptic winds is usually not statistically significant.
NASA Astrophysics Data System (ADS)
Dottori, Marcelo; Castro, Belmiro Mendes
2018-05-01
Data analysis of continental shelf currents and coastal sea level, together with the application of a semi-analytical model, are used to estimate the importance of remote wind forcing on the subinertial variability of the current in the central and northern areas of the South Brazil Bight (SBB). Results from both the data analysis and the semi-analytical model are robust in showing subinertial variability that propagates along-shelf leaving the coast to the left, in accordance with theoretical studies of Continental Shelf Waves (CSW). The subinertial variability observed in both along-shelf currents and sea level oscillations presents different propagation speeds for the narrow northern part of the SBB (~6-7 m/s) and the wide central SBB region (~11 m/s), those estimates being in agreement with the modeled CSW propagation speed. On the inner and middle shelf, observed along-shelf subinertial currents show higher correlation coefficients with winds located southward and earlier in time than with the local wind at the current meter mooring position and at the time of measurement. The inclusion of the remote (southwestward) wind forcing improves the prediction of the subinertial currents compared to forcing by the local wind alone, since the along-shelf modeled currents present correlation coefficients with observed along-shelf currents up to 20% higher on the inner and middle shelf when the remote wind is included. For most of the outer shelf, on the other hand, this is not observed, since the correlation between the currents and the synoptic winds is usually not statistically significant.
Energy-Related Carbon Dioxide Emissions in U.S. Manufacturing
2006-01-01
Based on the Manufacturing Energy Consumption Survey (MECS) conducted by the U.S. Department of Energy, Energy Information Administration (EIA), this paper presents historical energy-related carbon dioxide emission estimates for energy-intensive sub-sectors and 23 industries. Estimates are based on surveys of more than 15,000 manufacturing plants in 1991, 1994, 1998, and 2002. EIA is currently developing its collection of manufacturing data for 2006.
NASA Technical Reports Server (NTRS)
Morris, S. J., Jr.
1979-01-01
Performance estimates, weights, and scaling laws for an eight-blade, highly loaded propeller combined with an advanced turboshaft engine are presented. The data are useful for planned aircraft mission studies using the turboprop propulsion system. Comparisons are made between the performance of the 1990+ technology turboprop propulsion system and that of both a current-technology turbofan and a 1990+ technology turbofan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polgar, T.T.; Ulanowicz, R.E.; Pyne, D.A.
1975-09-01
This report presents in-depth analyses of current meter records obtained from the deployment of continuously recording current meters in the Potomac estuary in 1974. The analyses of transport characteristics are presented in relation to the distribution of striped bass ichthyoplankton in the tidal portion of the Potomac River. The characteristics of ichthyoplankton distributions are described in terms of longitudinal, lateral, and time patterns of abundances. Estimates are made of the production and survival of various ichthyoplankton stages.
Direct estimations of linear and nonlinear functionals of a quantum state.
Ekert, Artur K; Alves, Carolina Moura; Oi, Daniel K L; Horodecki, Michał; Horodecki, Paweł; Kwek, L C
2002-05-27
We present a simple quantum network, based on the controlled-SWAP gate, that can extract certain properties of quantum states without recourse to quantum tomography. It can be used as a basic building block for direct quantum estimations of both linear and nonlinear functionals of any density operator. The network has many potential applications ranging from purity tests and eigenvalue estimations to direct characterization of some properties of quantum channels. Experimental realizations of the proposed network are within the reach of quantum technology that is currently being developed.
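The core identity behind the controlled-SWAP network is that for two copies of a state ρ, the ancilla measures |0> with probability P0 = (1 + Tr[S(ρ⊗ρ)])/2, and Tr[S(ρ⊗ρ)] = Tr[ρ²], so purity is read off without tomography. A small numerical check of that identity (a simulation of the algebra, not of the physical circuit):

```python
import numpy as np

def swap_operator(d):
    """SWAP on two d-dimensional systems: S |i,j> = |j,i>."""
    s = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            s[i * d + j, j * d + i] = 1.0
    return s

def random_density_matrix(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T              # positive semidefinite
    return rho / np.trace(rho)        # unit trace

rng = np.random.default_rng(1)
rho = random_density_matrix(2, rng)

# Ancilla |0> probability predicted for the controlled-SWAP network run
# on two copies of rho: P0 = (1 + Tr[S (rho x rho)]) / 2 ...
p0 = (1.0 + np.trace(swap_operator(2) @ np.kron(rho, rho)).real) / 2.0
purity = np.trace(rho @ rho).real     # ... which equals (1 + Tr[rho^2]) / 2
```

Estimating P0 from repeated ancilla measurements therefore gives Tr[ρ²] directly; the paper generalizes the same trick to other linear and nonlinear functionals.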
GEODYN programmers guide, volume 2, part 1
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Goad, C. C.; Dao, N. C.; Martin, T. V.; Boulware, N. L.; Chin, M. M.
1972-01-01
A guide to the GEODYN Program is presented. The program estimates orbit and geodetic parameters. It possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fit the entire set of estimated parameters. GEODYN consists of 113 different program segments, including the main program, subroutines, functions, and block data routines. All are in G or H level FORTRAN and are currently operational on GSFC's IBM 360/95 and IBM 360/91.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This presentation provides a high-level overview of the current U.S. shared solar landscape, the impact that a given shared solar program's structure has on requiring federal securities oversight, as well as an estimate of market potential for U.S. shared solar deployment.
Centler, Florian; Heße, Falk; Thullner, Martin
2013-09-01
At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways.
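A minimal sketch of the dual-isotope idea under a simplified linear-mixing assumption (the paper's reactive-transport treatment is far richer, and every enrichment-factor value below is hypothetical): the observed slope of hydrogen versus carbon isotope shifts is bracketed by the two pathway-specific slopes, so the fraction degraded through each pathway can be solved for directly.

```python
def pathway_fraction(lam_obs, eps_c_a, eps_h_a, eps_c_b, eps_h_b):
    """Fraction f of degradation via pathway A, assuming the observed
    dual-isotope slope is an epsilon-weighted mix of two pathways:
      lam_obs = (f*eps_h_a + (1-f)*eps_h_b) / (f*eps_c_a + (1-f)*eps_c_b)
    Enrichment factors (permil) would come from pathway-specific lab data."""
    num = lam_obs * eps_c_b - eps_h_b
    den = (eps_h_a - eps_h_b) - lam_obs * (eps_c_a - eps_c_b)
    return num / den

# Hypothetical enrichment factors: pathway A (e.g., aerobic) vs B (anaerobic).
# Pure-A slope would be 50, pure-B slope 15; an observed slope of 20
# therefore implies a mix:
f = pathway_fraction(lam_obs=20.0,
                     eps_c_a=-1.0, eps_h_a=-50.0,
                     eps_c_b=-4.0, eps_h_b=-60.0)
```

As the abstract notes, this kind of estimate is most trustworthy along the plume core, where the isotope signal is not diluted by transverse dispersion.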
Studies in High Current Density Ion Sources for Heavy Ion Fusion Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon-Golcher, Edwin
This dissertation presents diverse research on small (diameter ~ a few mm), high-current-density (J ~ several tens of mA/cm^2) heavy ion sources. The research has been developed in the context of a programmatic interest within the Heavy Ion Fusion (HIF) Program to explore alternative architectures in the beam injection systems that use the merging of small, bright beams. An ion gun was designed and built for these experiments. Results of average current density yield (
Gingerich, Daniel B; Mauter, Meagan S
2017-09-19
Water treatment processes present intersectoral and cross-media risk trade-offs that are not presently considered in Safe Drinking Water Act regulatory analyses. This paper develops a method for assessing the air emission implications of common municipal water treatment processes used to comply with recently promulgated and proposed regulatory standards, including concentration limits for lead and copper, disinfection byproducts, chromium(VI), strontium, and PFOA/PFOS. Life-cycle models of electricity and chemical consumption for individual drinking water unit processes are used to estimate embedded NOx, SO2, PM2.5, and CO2 emissions on a cubic meter basis. We estimate air emission damages from currently installed treatment processes at U.S. drinking water facilities to be on the order of $500 million USD annually. Fully complying with six promulgated and proposed rules would increase baseline air emission damages by approximately 50%, with three-quarters of these damages originating from chemical manufacturing. Despite the magnitude of these air emission damages, the net benefit of currently implemented rules remains positive. For some proposed rules, however, the promise of net benefits remains contingent on technology choice.
Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi
2017-05-01
Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was extended with a calculator for the size-specific dose estimate and with scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204 and noted similar results. Moreover, doses were calculated with the AEC technique and with a fixed tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software, which can be further modified to estimate other doses.
Optimal wavefront estimation of incoherent sources
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler
2014-08-01
Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.
Doses and risks from the ingestion of Dounreay fuel fragments.
Darley, P J; Charles, M W; Fell, T P; Harrison, J D
2003-01-01
The radiological implications of ingestion of nuclear fuel fragments present in the marine environment around Dounreay have been reassessed by using the Monte Carlo code MCNP to obtain improved estimates of the doses to target cells in the walls of the lower large intestine resulting from the passage of a fragment. The approach takes account of the reduction in dose due to attenuation within the intestinal wall and self-absorption of radiation in the fuel fragment itself. In addition, dose is calculated on the basis of a realistic estimate of the anatomical volume of the lumen, rather than being based on the average mass of the contents, as in the current ICRP model. Our best estimates of doses from the ingestion of the largest Dounreay particles are at least a factor of 30 lower than those predicted using the current ICRP model. The new ICRP model will address the issues raised here and provide improved estimates of dose.
The MSFC Solar Activity Future Estimation (MSAFE) Model
NASA Technical Reports Server (NTRS)
Suggs, Ron
2017-01-01
The Natural Environments Branch of the Engineering Directorate at Marshall Space Flight Center (MSFC) provides solar cycle forecasts for NASA space flight programs and the aerospace community. These forecasts provide future statistical estimates of sunspot number, solar radio 10.7 cm flux (F10.7), and the geomagnetic planetary index, Ap, for input to various space environment models. For example, many thermosphere density computer models used in spacecraft operations, orbital lifetime analysis, and the planning of future spacecraft missions require as inputs the F10.7 and Ap. The solar forecast is updated each month by executing MSAFE using historical and the latest month's observed solar indices to provide estimates for the balance of the current solar cycle. The forecasted solar indices represent the 13-month smoothed values consisting of a best estimate value stated as a 50 percentile value along with approximate +/- 2 sigma values stated as 95 and 5 percentile statistical values. This presentation will give an overview of the MSAFE model and the forecast for the current solar cycle.
A limited universe of membrane protein families and folds
Oberai, Amit; Ihm, Yungok; Kim, Sanguk; Bowie, James U.
2006-01-01
One of the goals of structural genomics is to obtain a structural representative of almost every fold in nature. A recent estimate suggests that 70%–80% of soluble protein domains identified in the first 1000 genome sequences should be covered by about 25,000 structures—a reasonably achievable goal. As no current estimates exist for the number of membrane protein families, however, it is not possible to know whether family coverage is a realistic goal for membrane proteins. Here we find that virtually all polytopic helical membrane protein families are present in the already known sequences, allowing us to estimate the total number of families. We find that only ∼700 polytopic membrane protein families account for 80% of structured residues and ∼1700 cover 90% of structured residues. While this is apparently a finite and reachable goal, we estimate that it will likely take more than three decades to obtain the structures needed for 90% residue coverage, if current trends continue. PMID:16815920
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data are available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
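As a rough illustration of the kind of one-at-a-time sensitivity study described above (this is not DAKOTA's actual interface; the 0-D polarization-curve model and all parameter names here are hypothetical), one can perturb each parameter in turn and record the normalized response of the cell voltage:

```python
import math

def cell_voltage(j, i0=1e-4, r_ohm=0.1, e_rev=1.2, b=0.05):
    """Hypothetical 0-D polarization model: reversible potential
    minus activation (Tafel) and ohmic losses at current density j."""
    return e_rev - b * math.log(j / i0) - r_ohm * j

def sensitivity(params, j=1.0, delta=0.01):
    """One-at-a-time normalized sensitivities (dV/V) / (dp/p)
    from a small relative perturbation delta of each parameter."""
    base = cell_voltage(j, **params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1 + delta)})
        dv = cell_voltage(j, **perturbed) - base
        sens[name] = (dv / base) / delta
    return sens
```

Ranking the absolute values of these normalized sensitivities across operating conditions is the simplest way to flag the critical parameters; toolkits such as DAKOTA automate this mapping of parameters to responses at scale.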
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patra, Moumita; Maiti, Santanu K., E-mail: santanu.maiti@isical.ac.in
In the present work we investigate the behavior of all three components of persistent spin current in a quasi-periodic Fibonacci ring subjected to Rashba and Dresselhaus spin–orbit interactions. Analogous to persistent charge current in a conducting ring, where electrons gain a Berry phase in the presence of magnetic flux, a spin Berry phase is associated with the motion of electrons in the presence of a spin–orbit field, which is responsible for the generation of spin current. The interplay between the two spin–orbit fields along with the quasi-periodic Fibonacci sequence on persistent spin current is described elaborately, and from our analysis we can estimate the strength of any one of the two spin–orbit couplings together with on-site energy, provided the other is known. - Highlights: • Determination of Rashba and Dresselhaus spin–orbit fields is discussed. • Characteristics of all three components of spin current are explored. • Possibility of estimating on-site energy is given. • Results can be generalized to any lattice models.
Vaccaro, John J.
1992-01-01
The sensitivity of groundwater recharge estimates to historic and projected climatic regimes was investigated for the semiarid Ellensburg basin, located on the Columbia Plateau, Washington. Recharge was estimated for predevelopment and current (1980s) land use conditions using a daily energy-soil-water balance model. A synthetic daily weather generator was used to simulate lengthy sequences with parameters estimated from subsets of the historical record that were unusually wet and unusually dry. Comparison of recharge estimates corresponding to relatively wet and dry periods showed that recharge for predevelopment land use varies considerably within the range of climatic conditions observed in the 87-year historical observation period. Recharge variations for present land use conditions were less sensitive to the same range of historical climatic conditions because of irrigation. The estimated recharge based on the 87-year historical climatology was compared with estimates based on the historical precipitation and temperature records adjusted to reflect CO2-doubling climates as projected by general circulation models (GCMs). Two GCM scenarios were considered: an average of conditions for three different GCMs with CO2 doubling, and a most severe “maximum” case. For the average GCM scenario, predevelopment recharge increased, and current recharge decreased. Also considered was the sensitivity of recharge to the variability of climate within the historical and adjusted historical records. Predevelopment and current recharge were less and more sensitive, respectively, to the climate variability for the average GCM scenario as compared to the variability within the historical record. For the maximum GCM scenario, recharge for both predevelopment and current land use decreased, and the sensitivity to the CO2-related climate change was larger than sensitivity to the variability in the historical and adjusted historical climate records.
Estimates of emergency operating capacity in US manufacturing and nonmanufacturing industries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belzer, D.B.; Serot, D.E.; Kellogg, M.A.
1991-03-01
Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner that allows evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study, conducted by Pacific Northwest Laboratory (PNL), developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key non-manufacturing sectors. This volume presents tabular and graphical results of the historical analysis and projections for each SIC industry. (JF)
NASA Astrophysics Data System (ADS)
Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag
2018-06-01
Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
NASA Astrophysics Data System (ADS)
Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag
2018-04-01
Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist, number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous data (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.
Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model
NASA Astrophysics Data System (ADS)
Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.
2010-12-01
A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.
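The ensemble-based state augmentation described above can be illustrated with a toy scalar example: a "bathymetry" parameter h is updated through its ensemble covariance with a predicted observation. The linear forward model and all numbers here are illustrative assumptions, not the surf zone model of the paper:

```python
import random
from statistics import mean, stdev

random.seed(0)

def forward(h):
    """Toy 'hydrodynamic model': predicted observation for bathymetry h."""
    return 2.0 * h + 1.0

# Prior ensemble of the uncertain parameter (bathymetry)
n = 2000
h_prior = [3.0 + random.gauss(0.0, 0.5) for _ in range(n)]
y_pred = [forward(h) for h in h_prior]   # augmented part of the state

obs, obs_err = 7.6, 0.1                  # one measurement and its std. dev.

# Kalman gain from the ensemble cross-covariance between the parameter
# and the predicted observation: state augmentation lets an observed
# quantity update an unobserved parameter through this covariance.
mh, my = mean(h_prior), mean(y_pred)
cov_hy = sum((h - mh) * (y - my) for h, y in zip(h_prior, y_pred)) / (n - 1)
var_y = sum((y - my) ** 2 for y in y_pred) / (n - 1) + obs_err ** 2
gain = cov_hy / var_y

# Update each member against a perturbed copy of the observation
h_post = [h + gain * (obs + random.gauss(0.0, obs_err) - y)
          for h, y in zip(h_prior, y_pred)]
```

The posterior ensemble mean shifts toward the value of h consistent with the observation, and its spread shrinks, which is exactly the "posterior estimate plus posterior uncertainty" behavior the abstract describes, here in one dimension.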
The More Things Change...A Status Report on Displaced Homemakers and Single Parents in the 1980s.
ERIC Educational Resources Information Center
Pearce, Diana
This publication presents profiles of displaced homemakers and single parents through analyses of estimates from the Current Population Survey conducted in March 1989. Section 1 on displaced homemakers focuses on three areas. The first part presents a demographic profile of displaced homemakers: their marital status (how they became displaced…
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
2017-11-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been fully investigated and thus differing PMP estimates are sometimes obtained without physics-based interpretations. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is modified and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The hybrid approach produced historical PMP estimates consistent with the traditional estimates. PMP in the PNW will increase by 50% ± 30% of the current design PMP by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-time records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
The safety of high-hazard water infrastructures in the U.S. Pacific Northwest in a changing climate
NASA Astrophysics Data System (ADS)
Chen, X.; Hossain, F.; Leung, L. R.
2017-12-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified against the traditional estimates. PMP in the PNW will increase by 50%±30% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus long-time records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung
2017-12-22
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimation for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
Species coextinctions and the biodiversity crisis.
Koh, Lian Pin; Dunn, Robert R; Sodhi, Navjot S; Colwell, Robert K; Proctor, Heather C; Smith, Vincent S
2004-09-10
To assess the coextinction of species (the loss of a species upon the loss of another), we present a probabilistic model, scaled with empirical data. The model examines the relationship between coextinction levels (proportion of species extinct) of affiliates and their hosts across a wide range of coevolved interspecific systems: pollinating Ficus wasps and Ficus, parasites and their hosts, butterflies and their larval host plants, and ant butterflies and their host ants. Applying a nomographic method based on mean host specificity (number of host species per affiliate species), we estimate that 6300 affiliate species are "coendangered" with host species currently listed as endangered. Current extinction estimates need to be recalibrated by taking species coextinctions into account.
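A highly simplified version of the host-specificity reasoning above (an assumption for illustration, not the authors' full probabilistic model): if hosts are lost at random and an affiliate persists as long as any of its s hosts persists, the expected affiliate coextinction level follows from averaging p^s over the host-specificity distribution:

```python
def coextinction_level(p_host_extinct, specificity_counts):
    """Expected proportion of affiliate species lost when a fraction
    p of host species is lost at random, assuming an affiliate goes
    extinct only when all of its s hosts are gone.
    specificity_counts: {s: number of affiliates with s hosts}."""
    total = sum(specificity_counts.values())
    lost = sum(n * p_host_extinct ** s
               for s, n in specificity_counts.items())
    return lost / total
```

Under this toy model, strict specialists (s = 1) track host losses one-for-one, while generalists lag behind, which is why mean host specificity drives the nomographic estimates the abstract describes.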
Hoffmann, John P.; Blasch, Kyle W.; Pool, Don R.; Bailey, Matthew A.; Callegary, James B.; Stonestrom, David A.; Constantz, Jim; Ferré, Ty P.A.; Leake, Stanley A.
2007-01-01
A large fraction of ground water stored in the alluvial aquifers in the Southwest is recharged by water that percolates through ephemeral stream-channel deposits. The amount of water currently recharging many of these aquifers is insufficient to meet current and future demands. Improving the understanding of streambed infiltration and the subsequent redistribution of water within the unsaturated zone is fundamental to quantifying and forming an accurate description of streambed recharge. In addition, improved estimates of recharge from ephemeral-stream channels will reduce uncertainties in water-budget components used in current ground-water models.This chapter presents a summary of findings related to a focused recharge investigation along Rillito Creek in Tucson, Arizona. A variety of approaches used to estimate infiltration, percolation, and recharge fluxes are presented that provide a wide range of temporal- and spatial-scale measurements of recharge beneath Rillito Creek. The approaches discussed include analyses of (1) cores and cuttings for hydraulic and textural properties, (2) environmental tracers from the water extracted from the cores and cuttings, (3) seepage measurements made during sustained streamflow, (4) heat as a tracer and numerical simulations of the movement of heat through the streambed sediments, (5) water-content variations, (6) water-level responses to streamflow in piezometers within the stream channel, and (7) gravity changes in response to recharge events. Hydraulic properties of the materials underlying Rillito Creek were used to estimate long-term potential recharge rates. Seepage measurements and analyses of temperature and water content were used to estimate infiltration rates, and environmental tracers were used to estimate percolation rates through the thick unsaturated zone. The presence or lack of tritium in the water was used to determine whether or not water in the unsaturated zone infiltrated within the past 40 years. 
Analyses of water-level and temporal-gravity data were used to estimate recharge volumes. Data presented in this chapter were collected from 1999 through 2002. Precipitation and streamflow during this period were less than the long-term average; however, two periods of significant streamflow resulted in recharge—one in the summer of 1999 and the other in the fall/winter of 2000. Flux estimates of infiltration and recharge vary from less than 0.1 to 1.0 cubic meter per second per kilometer of streamflow. Recharge-flux estimates are larger than infiltration estimates. Larger recharge fluxes than infiltration fluxes are explained by the scale of measurements. Methods used to estimate recharge rates incorporate the largest volumetric and temporal scales and are likely to include fluxes from other nearby sources, such as unmeasured tributaries, whereas the methods used to estimate infiltration incorporate the smallest scales, reflecting infiltration rates at individual measurement sites.
NASA Astrophysics Data System (ADS)
Gonzi, Siegfried; Palmer, Paul; O'Doherty, Simon; Young, Dickon; Stanley, Kieran; Stavert, Ann; Grant, Aoife; Helfter, Carole; Mullinger, Neil; Nemitz, Eiko; Allen, Grant; Pitt, Joseph; Le Breton, Michael; Bösch, Hartmut; Sembhi, Harjinder; Sonderfeld, Hannah; Parker, Robert; Bauguitte, Stephane
2016-04-01
Robust quantification of emissions of greenhouse gases (GHG) is central to the success of ongoing international efforts to slow current emissions and mitigate future climate change. The Greenhouse gAs Uk and Global Emissions (GAUGE) project aims to quantify the magnitude and uncertainty of country-scale emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) using concentration measurements from a network of tall towers and mobile platforms (aircraft and ferry) distributed across the UK. The GAUGE measurement programme includes: (a) GHG measurements on a regular ferry route down the North Sea aimed at sampling UK outflow; (b) campaign deployment of the UK BAe-146 research aircraft to provide vertical profile measurements of GHG over and around the UK; (c) a high-density GHG measurement network over East Anglia that is primarily focused on the agricultural sector; and (d) regular measurements of CO2 and CH4 isotopologues used for GHG source attribution. We also use satellite observations from the Japanese Greenhouse gases Observing SATellite (GOSAT) to provide continental-scale constraints on GHG flux estimates. We present CO2 flux estimates for the UK inferred from GAUGE measurements using a nested, high-resolution (25 km) version of the GEOS-Chem global atmospheric chemistry and transport model and an ensemble Kalman filter. We will present our current best estimate for CO2 fluxes and a preliminary assessment of the efficacy of individual GAUGE data sources to spatially resolve CO2 flux estimates over the UK. We will also discuss how flux estimates inferred from the different models used within GAUGE can help to assess the role of transport model error and to determine an ensemble CO2 flux estimate for the UK.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. The advantage of the RBE over the current empirical estimator is that the RBE is unbiased and its variance is usually smaller. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
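The Rao-Blackwell idea can be illustrated on a toy two-state Gibbs sampler (a hypothetical harmonic system, not the GSLD implementation itself): instead of counting visits to each λ state, the RBE averages the exact conditional probabilities p(λ|x), which removes the sampling noise of the λ draws:

```python
import math
import random

random.seed(1)

def energy(lam, x):
    """Hypothetical reduced energy coupling configuration x to end
    state lam in {0, 1}; the two states are symmetric, so the exact
    free energy difference is zero."""
    return (x - lam) ** 2

def cond_prob_lambda1(x):
    """p(lambda = 1 | x) from the Boltzmann weights of the two states."""
    w0, w1 = math.exp(-energy(0, x)), math.exp(-energy(1, x))
    return w1 / (w0 + w1)

# Gibbs sampler: alternate drawing x | lambda and lambda | x
n_steps = 20000
lam = 0
visits1 = 0.0   # empirical (visit-count) accumulator
rb_sum = 0.0    # Rao-Blackwell accumulator of conditionals
for _ in range(n_steps):
    x = random.gauss(lam, math.sqrt(0.5))   # x | lambda is Gaussian here
    p1 = cond_prob_lambda1(x)
    rb_sum += p1                            # average the conditional...
    lam = 1 if random.random() < p1 else 0
    visits1 += lam                          # ...rather than the draw

p1_emp = visits1 / n_steps
p1_rb = rb_sum / n_steps
dF_emp = -math.log(p1_emp / (1 - p1_emp))   # free energy from visit counts
dF_rb = -math.log(p1_rb / (1 - p1_rb))      # free energy from the RBE
```

Both estimators converge to the exact answer (zero here, by symmetry), but the RBE replaces each 0/1 draw of λ with its conditional expectation, which is the standard Rao-Blackwellization argument for its lower variance.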
Current distribution in tissues with conducted electrical weapons operated in drive-stun mode.
Panescu, Dorin; Kroll, Mark W; Brave, Michael
2016-08-01
The TASER® conducted electrical weapon (CEW) is best known for delivering electrical pulses that can temporarily incapacitate subjects by overriding normal motor control. The alternative drive-stun mode is less understood, and the goal of this paper is to analyze the distribution of currents in tissues when the CEW is operated in this mode. Finite element modeling (FEM) was used to approximate current density in tissues with boundary electrical sources placed 40 mm apart. This separation was equivalent to the distance between drive-stun mode TASER X26™, X26P, X2 CEW electrodes located on the device itself and between those located on the expended CEW cartridge. The FEMs estimated the amount of current flowing through various body tissues located underneath the electrodes. The FEM simulated the attenuating effects of both a thin and of a normal layer of fat. The resulting current density distributions were used to compute the residual amount of current flowing through deeper layers of tissue. Numerical modeling estimated that the skin, fat and skeletal muscle layers passed at least 86% or 91% of total CEW current, assuming a thin or normal fat layer thickness, respectively. The current density and electric field strength exceeded thresholds associated with an increased probability of ventricular fibrillation (VFTJ) or of cardiac capture (CCTE) only in the skin and the subdermal fat layers. The fat layer provided significant attenuation of drive-stun CEW currents. Beyond the skeletal muscle layer, only fractional amounts of the total CEW current were estimated to flow. The regions presenting risk for VF induction or for cardiac capture were well away from the typical heart depth.
NASA Technical Reports Server (NTRS)
Morris, S. J., Jr.
1980-01-01
Performance estimations, weights, and scaling laws for the six-blade and ten-blade highly loaded propellers combined with an advanced turboshaft engine are presented. These data are useful for aircraft mission studies using the turboprop system. Comparisons are made between the performance of post-1980 technology turboprop propulsion systems and the performance of both a current-technology turbofan and a post-1990 technology turbofan.
Black carbon emissions in Russia: A critical review
NASA Astrophysics Data System (ADS)
Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; Denysenko, Artur; Smith, Steven J.; Staniszewski, Aaron; Hao, Wei Min; Liu, Liang; Bond, Tami C.
2017-08-01
This study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401-1453 Gg, while OC emissions are 9224 Gg, with uncertainty ranging between 5596 and 14,736 Gg. Wildfires dominated and contributed about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to significantly contribute to the lower emissions and forcing values found in this study.
Black carbon emissions in Russia: A critical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa
Russia has a particularly important role regarding black carbon (BC) emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. This study presents a comprehensive review of BC estimates from a range of studies. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russian associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 689 Gg in 2014, with an uncertainty range of 407-1,416 Gg, while OC emissions are 9,228 Gg, with uncertainty between 5,595 and 14,728 Gg. Wildfires dominated, contributing about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated by OC emissions. We also present an adjusted estimate of Arctic forcing from Russian OC and BC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.
Exercise barriers and preferences among women and men with multiple sclerosis.
Asano, Miho; Duquette, Pierre; Andersen, Ross; Lapierre, Yves; Mayo, Nancy E
2013-03-01
The primary objective of this study was to estimate the extent to which women and men with MS present different exercise barriers. The secondary objective was to estimate the extent to which women and men with MS differ in perceived health, depressive symptoms, and current exercise routines or preferences. This was a cross-sectional survey. 417 people with MS completed a survey of exercise barriers, current exercise routines, perceived health, and depressive symptoms. The top three exercise barriers, regardless of gender, were being too tired, impairment, and lack of time. Regardless of gender, three times/week and 60 min/session was the most common current exercise structure among physically active participants. The top three currently preferred exercises among men were walking, strengthening/weights, and flexibility/stretch exercise. Women reported the same three exercises, although flexibility/stretch exercise was slightly more popular than the others. Perceived health status and depressive symptoms were similar between women and men, except that more men were diagnosed with progressive MS (20% higher) than women, leading to a higher rate of men reporting problems with mobility. Women and men with MS differed very little on exercise barriers, current exercise routines, perceived health, and depressive symptoms. Even though MS is generally considered a woman's disease, this study did not find a strong need to develop gender-specific exercise or physical activity interventions for this population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.; Gonzalez, Esteban
Adjoint-based first-order perturbation theory is applied again to boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema's estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.
Americans misperceive racial economic equality
Kraus, Michael W.; Rucker, Julian M.; Richeson, Jennifer A.
2017-01-01
The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black–White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites’ estimates of Black–White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality—a misperception that is likely to have any number of important policy implications. PMID:28923915
Americans misperceive racial economic equality.
Kraus, Michael W; Rucker, Julian M; Richeson, Jennifer A
2017-09-26
The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black-White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites' estimates of Black-White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality, a misperception that is likely to have any number of important policy implications.
Ruttenber, A J; McCrea, J S; Wade, T D; Schonbeck, M F; LaMontagne, A D; Van Dyke, M V; Martyny, J W
2001-02-01
We outline methods for integrating epidemiologic and industrial hygiene data systems for the purpose of exposure estimation, exposure surveillance, worker notification, and occupational medicine practice. We present examples of these methods from our work at the Rocky Flats Plant--a former nuclear weapons facility that fabricated plutonium triggers for nuclear weapons and is now being decontaminated and decommissioned. The weapons production processes exposed workers to plutonium, gamma photons, neutrons, beryllium, asbestos, and several hazardous chemical agents, including chlorinated hydrocarbons and heavy metals. We developed a job exposure matrix (JEM) for estimating exposures to 10 chemical agents in 20 buildings for 120 different job categories over a production history spanning 34 years. With the JEM, we estimated lifetime chemical exposures for about 12,000 of the 16,000 former production workers. We show how the JEM database is used to estimate cumulative exposures over different time periods for epidemiological studies and to provide notification and determine eligibility for a medical screening program developed for former workers. We designed an industrial hygiene data system for maintaining exposure data for current cleanup workers. We describe how this system can be used for exposure surveillance and linked with the JEM and databases on radiation doses to develop lifetime exposure histories and to determine appropriate medical monitoring tests for current cleanup workers. We also present time-line-based graphical methods for reviewing and correcting exposure estimates and reporting them to individual workers.
Richardson, Claire; Rutherford, Shannon; Agranovski, Igor
2018-06-01
Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available for the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions.
Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.
Comment: Characterization of Two Historic Smallpox Specimens from a Czech Museum.
Porter, Ashleigh F; Duggan, Ana T; Poinar, Hendrik N; Holmes, Edward C
2017-09-28
The complete genome sequences of two strains of variola virus (VARV) sampled from human smallpox specimens present in the Czech National Museum, Prague, were recently determined, with one of the sequences estimated to date to the mid-19th century. Using molecular clock methods, the authors of this study go on to infer that the currently available strains of VARV share an older common ancestor, at around 1350 AD, than some recent estimates based on other archival human samples. Herein, we show that the two Czech strains exhibit anomalous branch lengths given their proposed age and, by assuming a constant rate of evolutionary change across the rest of the VARV phylogeny, estimate that their true age in fact lies between 1918 and 1937. We therefore suggest that the age of the common ancestor of currently available VARV genomes most likely dates to the late 16th or early 17th century, not ~1350 AD.
MNE software for processing MEG and EEG data
Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
Comment: Characterization of Two Historic Smallpox Specimens from a Czech Museum
Porter, Ashleigh F.; Duggan, Ana T.
2017-01-01
The complete genome sequences of two strains of variola virus (VARV) sampled from human smallpox specimens present in the Czech National Museum, Prague, were recently determined, with one of the sequences estimated to date to the mid-19th century. Using molecular clock methods, the authors of this study go on to infer that the currently available strains of VARV share an older common ancestor, at around 1350 AD, than some recent estimates based on other archival human samples. Herein, we show that the two Czech strains exhibit anomalous branch lengths given their proposed age and, by assuming a constant rate of evolutionary change across the rest of the VARV phylogeny, estimate that their true age in fact lies between 1918 and 1937. We therefore suggest that the age of the common ancestor of currently available VARV genomes most likely dates to the late 16th or early 17th century, not ~1350 AD. PMID:28956829
Deducing the Milky Way's Massive Cluster Population
NASA Astrophysics Data System (ADS)
Hanson, M. M.; Popescu, B.; Larsen, S. S.; Ivanov, V. D.
2010-11-01
Recent near-infrared surveys of the galactic plane have been used to identify new massive cluster candidates. Follow-up study indicates that about half are not true, gravitationally bound clusters. These false positives are created by high-density fields of unassociated stars, often due to a sight-line of reduced extinction. What is not so easy to estimate is the number of false negatives: clusters that exist but are not currently being detected by our surveys. In order to derive critical characteristics of the Milky Way's massive cluster population, such as the cluster mass function and cluster lifetimes, one must be able to estimate the characteristics of these false negatives. Our group has taken on the daunting task of attempting such an estimate by first creating the stellar cluster imaging simulation program MASSCLEAN. I will present our preliminary models and methods for deriving the biases of current searches.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Numerical simulations of stripping effects in high-intensity hydrogen ion linacs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carneiro, J.-P.; Mustapha, B.
2008-12-01
Numerical simulations of H⁻ stripping losses from blackbody radiation, electromagnetic fields, and residual gas have been implemented into the beam dynamics code TRACK. Estimates of the stripping losses along two high-intensity H⁻ linacs are presented: the Spallation Neutron Source linac currently being operated at Oak Ridge National Laboratory and an 8 GeV superconducting linac currently being designed at Fermi National Accelerator Laboratory.
Rosauer, Dan F; Catullo, Renee A; VanDerWal, Jeremy; Moussalli, Adnan; Moritz, Craig
2015-01-01
Areas of suitable habitat for species and communities have arisen, shifted, and disappeared with Pleistocene climate cycles, and through this shifting landscape, current biodiversity has found paths to the present. Evolutionary refugia, areas of relative habitat stability in this shifting landscape, support persistence of lineages through time, and are thus crucial to the accumulation and maintenance of biodiversity. Areas of endemism are indicative of refugial areas where diversity has persisted, and endemism of intraspecific lineages in particular is strongly associated with late-Pleistocene habitat stability. However, it remains a challenge to consistently estimate the geographic ranges of intraspecific lineages and thus infer phylogeographic endemism, because spatial sampling for genetic analyses is typically sparse relative to species records. We present a novel technique to model the geographic distribution of intraspecific lineages, which is informed by the ecological niche of a species and known locations of its constituent lineages. Our approach allows for the effects of isolation by unsuitable habitat, and captures uncertainty in the extent of lineage ranges. Applying this method to the arc of rainforest areas spanning 3500 km in eastern Australia, we estimated lineage endemism for 53 species of rainforest dependent herpetofauna with available phylogeographic data. We related endemism to the stability of rainforest habitat over the past 120,000 years and identified distinct concentrations of lineage endemism that can be considered putative refugia. These areas of lineage endemism are strongly related to historical stability of rainforest habitat, after controlling for the effects of current environment. In fact, a dynamic stability model that allows movement to track suitable habitat over time was the most important factor in explaining current patterns of endemism. 
The techniques presented here provide an objective, practical method for estimating geographic ranges below the species level, and including them in spatial analyses of biodiversity.
Rosauer, Dan F.; Catullo, Renee A.; VanDerWal, Jeremy; Moussalli, Adnan; Moritz, Craig
2015-01-01
Areas of suitable habitat for species and communities have arisen, shifted, and disappeared with Pleistocene climate cycles, and through this shifting landscape, current biodiversity has found paths to the present. Evolutionary refugia, areas of relative habitat stability in this shifting landscape, support persistence of lineages through time, and are thus crucial to the accumulation and maintenance of biodiversity. Areas of endemism are indicative of refugial areas where diversity has persisted, and endemism of intraspecific lineages in particular is strongly associated with late-Pleistocene habitat stability. However, it remains a challenge to consistently estimate the geographic ranges of intraspecific lineages and thus infer phylogeographic endemism, because spatial sampling for genetic analyses is typically sparse relative to species records. We present a novel technique to model the geographic distribution of intraspecific lineages, which is informed by the ecological niche of a species and known locations of its constituent lineages. Our approach allows for the effects of isolation by unsuitable habitat, and captures uncertainty in the extent of lineage ranges. Applying this method to the arc of rainforest areas spanning 3500 km in eastern Australia, we estimated lineage endemism for 53 species of rainforest dependent herpetofauna with available phylogeographic data. We related endemism to the stability of rainforest habitat over the past 120,000 years and identified distinct concentrations of lineage endemism that can be considered putative refugia. These areas of lineage endemism are strongly related to historical stability of rainforest habitat, after controlling for the effects of current environment. In fact, a dynamic stability model that allows movement to track suitable habitat over time was the most important factor in explaining current patterns of endemism. 
The techniques presented here provide an objective, practical method for estimating geographic ranges below the species level, and including them in spatial analyses of biodiversity. PMID:26020936
THE NEXT GENERATION OF VMT REDUCTION PROGRAMS
This research is structured to provide a clear delineation of factors that influence trip chaining, identify levels of flexibility in commuter travel, present a market segmentation of commuters in terms of their flexibility levels, and estimate the reach of current programs. ...
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters.
These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
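The single-batch maximum-likelihood idea can be illustrated with a toy scalar model. This is a hedged sketch: the variances, batch size, and the assumption of spatially uncorrelated errors are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy innovation model: d = y_obs - H x_forecast, with independent
# background and observation errors, so each d_i ~ N(0, s2_b + s2_o).
s2_o = 1.0           # observation-error variance (assumed known)
s2_b_true = 4.0      # background/model-error variance (to be estimated)
n = 10_000           # one batch of simultaneous observations
d = rng.normal(0.0, np.sqrt(s2_b_true + s2_o), size=n)

# For i.i.d. zero-mean Gaussian innovations, the likelihood in the single
# unknown variance parameter is maximized by matching the sample second
# moment; the result is clipped at zero to keep the variance admissible.
s2_b_hat = max(float(np.mean(d**2)) - s2_o, 0.0)
```

Each new batch yields a fresh estimate, so the parameter can be continuously readjusted as the true error statistics drift, which is the adaptive behavior the abstract describes.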
Current and future levels of mercury atmospheric pollution on a global scale
NASA Astrophysics Data System (ADS)
Pacyna, Jozef M.; Travnikov, Oleg; De Simone, Francesco; Hedgecock, Ian M.; Sundseth, Kyrre; Pacyna, Elisabeth G.; Steenhuisen, Frits; Pirrone, Nicola; Munthe, John; Kindbom, Karin
2016-10-01
An assessment of current and future emissions, air concentrations, and atmospheric deposition of mercury worldwide is presented on the basis of results obtained during the EU GMOS (Global Mercury Observation System) project. Emission estimates for mercury were prepared with the main goal of applying them in models to assess current (2013) and future (2035) air concentrations and atmospheric deposition of this contaminant. The combustion of fossil fuels (mainly coal) for energy and heat production in power plants and in industrial and residential boilers, as well as artisanal and small-scale gold mining, is among the major anthropogenic sources of Hg emissions to the atmosphere at present. These sources account for about 37% and 25%, respectively, of the total anthropogenic Hg emissions globally, estimated to be about 2000 t. Emissions in Asian countries, particularly in China and India, dominate the total emissions of Hg. Current mercury emissions from natural processes (primary mercury emissions and re-emissions), including mercury depletion events, were estimated to be 5207 t per year, which represents nearly 70% of the global mercury emission budget. Oceans are the most important sources (36%), followed by biomass burning (9%). A comparison of the 2035 anthropogenic emissions estimated for three different scenarios with current anthropogenic emissions indicates a reduction of these emissions in 2035 of up to 85% for the best-case scenario. Two global chemical transport models (GLEMOS and ECHMERIT) have been used for the evaluation of future mercury pollution levels under future emission scenarios. Projections of future changes in mercury deposition on a global scale simulated by these models for the three anthropogenic emissions scenarios of 2035 indicate a decrease in deposition of up to 50% in the Northern Hemisphere and up to 35% in the Southern Hemisphere for the best-case scenario. 
The EU GMOS project has proved to be a very important research instrument for supporting the scientific justification for the Minamata Convention and monitoring of the implementation of targets of this convention, as well as the EU Mercury Strategy. This project provided the state of the art with regard to the development of the latest emission inventories for mercury, future emission scenarios, dispersion modelling of atmospheric mercury on a global and regional scale, and source-receptor techniques for mercury emission apportionment on a global scale.
Historical habitat connectivity affects current genetic structure in a grassland species.
Münzbergová, Z; Cousins, S A O; Herben, T; Plačková, I; Mildén, M; Ehrlén, J
2013-01-01
Many recent studies have explored the effects of present and past landscape structure on species distribution and diversity. However, we know little about the effects of past landscape structure on the distribution of genetic diversity within and between populations of a single species. Here we describe the relationship between present and past landscape structure (landscape connectivity and habitat size estimated from historical maps) and current genetic structure in a perennial herb, Succisa pratensis. We used allozymes as co-dominant markers to estimate genetic diversity and deviation from Hardy-Weinberg equilibrium in 31 populations distributed within a 5 km² agricultural landscape. The results showed that current genetic diversity of populations was related to habitat suitability, habitat age, habitat size, and habitat connectivity in the past. The effects of habitat age and past connectivity on genetic diversity were in most cases also significant after taking the current landscape structure into account. Moreover, current genetic similarity between populations was affected by past connectivity after accounting for current landscape structure. In both cases, the oldest time layer (1850) was the most informative. Most populations showed heterozygote excess, indicating disequilibrium due to recent gene flow or selection against homozygotes. These results suggest that habitat age and past connectivity are important determinants of the distribution of genetic diversity between populations at a scale of a few kilometres. Landscape history may significantly contribute to our understanding of the distribution of current genetic structure within species, and the genetic structure may be used to better understand landscape history, even at a small scale.
Radiation environment and shielding for early manned Mars missions
NASA Technical Reports Server (NTRS)
Hall, Stephen B.; Mccann, Michael E.
1986-01-01
The problem of shielding a crew during early manned Mars missions is discussed. Requirements for shielding are presented in the context of current astronaut exposure limits, natural ionizing radiation sources, and shielding inherent in a particular Mars vehicle configuration. An estimated range for shielding weight is presented based on the worst solar flare dose, mission duration, and inherent vehicle shielding.
NASA Astrophysics Data System (ADS)
Suzuki, Yuki; Fung, George S. K.; Shen, Zeyang; Otake, Yoshito; Lee, Okkyun; Ciuffo, Luisa; Ashikaga, Hiroshi; Sato, Yoshinobu; Taguchi, Katsuyuki
2017-03-01
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that can degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. An experiment using a synthesized digital phantom showed promising results for motion analysis.
An estimation of Canadian population exposure to cosmic rays from air travel.
Chen, Jing; Newton, Dustin
2013-03-01
Based on air travel statistics in 1984, it was estimated that less than 4 % of the population dose from cosmic ray exposure would result from air travel. In the present study, cosmic ray doses were calculated for more than 3,000 flights departing from more than 200 Canadian airports using actual flight profiles. Based on currently available air travel statistics, the annual per capita effective dose from air transportation is estimated to be 32 μSv for Canadians, about 10 % of the average cosmic ray dose received at ground level (310 μSv per year).
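The quoted proportion can be checked with trivial arithmetic, using only the two dose figures stated in the abstract:

```python
# Per-capita annual dose from air travel vs. the average ground-level
# cosmic-ray dose (both values as reported in the abstract, in microsieverts).
air_travel_uSv = 32.0
ground_level_uSv = 310.0
share = air_travel_uSv / ground_level_uSv  # about 0.10, i.e. roughly 10%
```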
Quantifying short-lived events in multistate ionic current measurements.
Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute
2014-02-25
We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
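The payoff of resolving shorter events can be sketched with a truncated-exponential toy model. The time constant and detection cutoff below are invented for illustration and are not the paper's values; the only property used is that residence times are exponentially distributed, as the abstract states.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 50e-6                          # assumed true mean residence time, 50 us
dwell = rng.exponential(tau, size=100_000)

# An analysis dead-time discards events shorter than t_min.
t_min = 100e-6
detected = dwell[dwell >= t_min]

# By the memoryless property, (detected - t_min) is again exponential(tau),
# so the maximum-likelihood estimate of tau from truncated data is the
# mean excess over the cutoff.
tau_hat = float(detected.mean()) - t_min

# Fraction of all events surviving the cutoff, before and after lowering
# the resolvable dwell time threefold (as the improved algorithm does).
frac_old = np.exp(-t_min / tau)
frac_new = np.exp(-(t_min / 3) / tau)
gain = frac_new / frac_old           # multiplicative increase in usable events
```

The actual event-count gain depends on the ratio of the cutoff to the molecule's residence time, which is why the ~20-fold figure reported above is specific to the polymers measured.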
An In-Rush Current Suppression Technique for the Solid-State Transfer Switch System
NASA Astrophysics Data System (ADS)
Cheng, Po-Tai; Chen, Yu-Hsing
More and more utility companies provide dual power feeders as a premium service for high power quality and reliability. To take advantage of this, the solid-state transfer switch (STS) is adopted to protect sensitive loads against voltage sags. However, the fast transfer process may cause in-rush current in the load-side transformer due to the DC offset left in its magnetic flux when the load transfer is completed. The in-rush current can reach 2-6 p.u. and may trigger the over-current protections on the power feeder. This paper develops a flux estimation scheme and a thyristor gating scheme based on the impulse commutation bridge STS (ICBSTS) to minimize the DC offset in the magnetic flux. By sensing the line voltages of both feeders, the flux estimator can predict the peak transient flux linkage at the moment of load transfer and select a suitable transfer instant to minimize the in-rush current. Laboratory test results are presented to validate the performance of the proposed system.
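The transfer-instant selection can be understood from the fact that transformer flux linkage is the time integral of the applied voltage, so the point on the voltage wave at which the load is transferred sets the DC flux offset. A minimal sketch, assuming an ideal transformer with zero residual flux (the paper's ICBSTS gating scheme is considerably more elaborate):

```python
import numpy as np

# Sketch: transformer flux linkage is the integral of the applied voltage.
# Energizing at a voltage zero-crossing produces a full DC flux offset
# (worst case, ~2 p.u. peak flux); energizing at the voltage peak produces
# none. Illustrative only; residual flux and commutation are ignored.
f = 60.0
w = 2 * np.pi * f
t = np.linspace(0, 4 / f, 4000)             # four cycles

def peak_flux(theta_close):
    """Peak flux linkage (per-unit of steady-state peak) when energizing
    v(t) = cos(w t + theta) at t = 0, with zero residual flux."""
    # lambda(t) = (1/w) * [sin(w t + theta) - sin(theta)]
    lam = (np.sin(w * t + theta_close) - np.sin(theta_close)) / w
    return np.max(np.abs(lam)) * w          # normalize: steady-state peak = 1

print(f"close at zero-crossing: {peak_flux(-np.pi / 2):.2f} p.u.")
print(f"close at voltage peak:  {peak_flux(0.0):.2f} p.u.")
```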
Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1995-01-01
Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence, this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100 line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
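The noise statistics described here reduce to simple array operations over the dark current cube. A sketch with synthetic data standing in for a real 'postcal' file:

```python
import numpy as np

# Sketch of the noise characterization described above: treat the dark current
# cube (lines x samples x channels) as repeated draws of a 224-channel noise
# vector, then estimate per-channel noise levels and the channel covariance.
# Synthetic Gaussian data stands in for a real AVIRIS 'postcal' file.
rng = np.random.default_rng(1)
lines, samples, channels = 100, 614, 224
dark = rng.normal(500.0, 2.0, size=(lines, samples, channels))  # DN, synthetic

spectra = dark.reshape(-1, channels)          # the 61,400 dark current spectra
noise_sigma = spectra.std(axis=0, ddof=1)     # channel-dependent noise levels
noise_cov = np.cov(spectra, rowvar=False)     # 224 x 224 noise covariance
print(noise_sigma.shape, noise_cov.shape)     # (224,) (224, 224)
```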
Extended active disturbance rejection controller
NASA Technical Reports Server (NTRS)
Tian, Gang (Inventor); Gao, Zhiqiang (Inventor)
2012-01-01
Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.
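The extended state observer at the heart of ADRC can be illustrated for a first-order plant, where the lumped unknown dynamics and disturbance are estimated as an extra ("extended") state. The plant, gains and disturbance below are illustrative assumptions, not the patented design:

```python
import numpy as np

# Minimal linear extended state observer (ESO) sketch for a first-order plant
#   y' = f(t) + b*u,
# where the total disturbance f is estimated as the extended state z2.
dt, b = 0.001, 1.0
w_o = 50.0                        # observer bandwidth (assumed)
beta1, beta2 = 2 * w_o, w_o**2    # standard bandwidth parameterization

y = 0.0                           # plant output
z1, z2 = 0.0, 0.0                 # observer estimates of y and of f
errors = []
for k in range(5000):
    t = k * dt
    f_true = np.sin(2 * np.pi * t)       # unknown disturbance (simulated)
    u = 0.0                              # open loop, to isolate the observer
    y += dt * (f_true + b * u)           # plant step (forward Euler)
    e = y - z1                           # observer correction term
    z1 += dt * (z2 + b * u + beta1 * e)
    z2 += dt * (beta2 * e)
    errors.append(abs(f_true - z2))

err = float(np.mean(errors[2500:]))      # after the observer has converged
print(f"mean disturbance-estimate error: {err:.3f}")
```

A control law would then cancel z2 from the input; the claimed designs add a predictor on top of the observer, which this sketch omits.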
Extended Active Disturbance Rejection Controller
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Tian, Gang (Inventor)
2016-01-01
Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.
Extended Active Disturbance Rejection Controller
NASA Technical Reports Server (NTRS)
Tian, Gang (Inventor); Gao, Zhiqiang (Inventor)
2014-01-01
Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.
Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.
Kopka, Ryszard
2017-12-22
In this paper, new results on using only voltage measurements at the supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring both voltage and current at the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between the estimated and experimental results fully confirms the suitability of the proposed approach, and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.
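A fractional-order capacitor obeys i(t) = C_alpha * D^alpha v(t), where D^alpha is a fractional derivative of order alpha. A sketch of the Grünwald-Letnikov discretization commonly used to evaluate such models numerically (the paper's parameter-estimation procedure itself is not reproduced here; the signal and order are illustrative):

```python
import numpy as np

# Grunwald-Letnikov approximation of the fractional derivative D^alpha v(t)
# appearing in fractional-order supercapacitor models, i = C_alpha * D^alpha v.
def gl_fractional_derivative(v, alpha, dt):
    """Grunwald-Letnikov approximation of D^alpha v on a uniform time grid."""
    n = len(v)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                  # GL binomial weights, computed recursively
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([np.dot(w[: k + 1], v[k::-1]) for k in range(n)])
    return d / dt**alpha

dt = 0.01
t = np.arange(0.0, 2.0, dt)
v = t                                      # ramp voltage v(t) = t
d_half = gl_fractional_derivative(v, 0.5, dt)
# Analytically, D^0.5 of t is 2*sqrt(t/pi); compare at t = 1
print(d_half[100], 2.0 / np.sqrt(np.pi))
```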
Faizullah, Faiz
2016-01-01
The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations, Hölder's inequality, Bihari's inequality, Gronwall's inequality and the Burkholder-Davis-Gundy inequalities are used to develop the theory. In addition, path-wise asymptotic estimates and the continuity of the pth moment for solutions to SFDEs in the G-framework with a non-linear growth condition are shown.
The Everglades Depth Estimation Network (EDEN) for Support of Ecological and Biological Assessments
Telis, Pamela A.
2006-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (1999-present), online water-depth information for the entire freshwater portion of the Greater Everglades. Presented on a 400-square-meter grid spacing, EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan.
Andersson, K G; Mikkelsen, T; Astrup, P; Thykier-Nielsen, S; Jacobsen, L H; Hoe, S C; Nielsen, S P
2009-12-01
The ARGOS decision support system is currently being extended to enable estimation of the consequences of terror attacks involving chemical, biological, nuclear and radiological substances. This paper presents elements of the framework that will be applied in ARGOS to calculate the dose contributions from contaminants dispersed in the atmosphere after a 'dirty bomb' explosion. Conceptual methodologies are presented which describe the various dose components on the basis of knowledge of time-integrated contaminant air concentrations. Also the aerosolisation and atmospheric dispersion in a city of different types of conceivable contaminants from a 'dirty bomb' are discussed.
Options for Reducing the Deficit: 2017 to 2026
2016-12-01
September 30 and are designated by the calendar year in which they end. The numbers in the text and tables are in nominal (current year) dollars...change in policy is plausible, CBO’s estimates are designed to fall in the middle of the distribution of possible outcomes. The estimates presented...macroeconomic effects, of 0.25 percent of GDP in any year over the next 10 years, or having been designated as such by the Chair of either Budget
Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors
Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar
2015-01-01
This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described with only one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a previously built map composed of omnidirectional images that have been captured from known positions. The purpose in this case consists of estimating which position of the map is nearest to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the locations where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
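The first mode reduces to nearest-neighbour matching in descriptor space. A sketch with a toy global descriptor (normalized column sums) standing in for the paper's Radon-transform descriptor:

```python
import numpy as np

# Sketch of the first localization mode described above: each map image is
# reduced to a single global-appearance descriptor, and the robot's current
# image is matched to the nearest map descriptor. The descriptor below is a
# toy stand-in (normalized column sums), not the Radon transform of the paper.
def global_descriptor(img):
    d = img.sum(axis=0).astype(float)
    return d / np.linalg.norm(d)

rng = np.random.default_rng(2)
map_images = [rng.random((64, 64)) for _ in range(10)]    # images from known poses
map_desc = np.stack([global_descriptor(im) for im in map_images])

current = map_images[7] + rng.normal(0, 0.05, (64, 64))   # noisy revisit of pose 7
dists = np.linalg.norm(map_desc - global_descriptor(current), axis=1)
print("nearest map pose:", int(np.argmin(dists)))
```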
Corey, Catherine G; Chang, Joanne T; Rostron, Brian L
2018-03-05
Currently, an estimated 7.9 million US adults use electronic nicotine delivery systems (ENDS). Although published reports have identified fires and explosions related to use of ENDS since 2009, these reports do not provide national estimates of burn injuries associated with ENDS batteries in the US. We analyzed nationally representative data provided in the National Electronic Injury Surveillance System (NEISS) to estimate the number of US emergency department (ED) visits for burn injuries associated with ENDS batteries. We reviewed the case narrative field to gain additional insights into the circumstances of the burn injury. In 2016, 26 ENDS battery-related burn cases were captured by NEISS, which translates to a national estimate of 1007 (95% CI: 357-1657) injuries presenting in US EDs. Most of the burns were thermal burns (80.4%) and occurred to the upper leg/lower trunk (77.3%). Examination of the case narrative field indicated that at least 20 of the burn injuries occurred while ENDS batteries were in the user's pocket. Our study provides valuable information for understanding the current burden of ENDS battery-related burn injuries treated in US EDs. The nature and circumstances of the injuries suggest these incidents were unintentional and would potentially be prevented through battery design requirements, battery testing standards and public education related to ENDS battery safety.
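The reported interval can be sanity-checked with standard normal-approximation arithmetic, using only the numbers quoted above:

```python
# Sketch of the interval arithmetic behind survey-weighted national estimates
# like the one above: a point estimate with a symmetric 95% CI implies a
# standard error of (upper - lower) / (2 * 1.96). Numbers from the abstract.
estimate, lower, upper = 1007, 357, 1657
se = (upper - lower) / (2 * 1.96)
print(f"implied standard error: {se:.0f}")
print(f"reconstructed CI: ({estimate - 1.96 * se:.0f}, {estimate + 1.96 * se:.0f})")
```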
Analysis of the potential benefits of larger trucks for U.S. businesses operating private fleets.
DOT National Transportation Integrated Search
2009-05-01
This study examines the current operational and economic performance of a sample of companies that operate private fleets and establishes a present-day baseline of transport productivity and efficiency. It also estimates how transportation performanc...
NASA Astrophysics Data System (ADS)
Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar
2016-12-01
This paper presents the effectiveness of a 5/5 fuzzy rule implementation in a Fuzzy Logic Controller, used in conjunction with an indirect control technique, to enhance power quality in a single-phase system. An indirect current controller in conjunction with the Fuzzy Logic Controller is applied to the proposed shunt active power filter to estimate the peak reference current and capacitor voltage. Current-controller-based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For verification of the shunt active power filter in real time, the proposed control algorithm has been implemented on a laboratory setup in the dSPACE platform.
McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F
2017-08-01
The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating patient organ doses, Monte Carlo simulations were performed by creating voxelized models of each patient, identifying key organs and incorporating tube current values into the simulations to estimate dose to the lungs and breasts (females only) for chest scans and the liver, kidney, and spleen for abdomen/pelvis scans. Organ doses from simulations using the actual tube current values were compared to those using each of the estimated tube current values (actual-topo and sim-topo). When compared to the actual tube current values, the average error for tube current values estimated from the actual topogram (actual-topo) and simulated topogram (sim-topo) was 3.9% and 5.8% respectively. For Monte Carlo simulations of chest CT exams using the actual tube current values and estimated tube current values (based on the actual-topo and sim-topo methods), the average differences for lung and breast doses ranged from 3.4% to 6.6%. For abdomen/pelvis exams, the average differences for liver, kidney, and spleen doses ranged from 4.2% to 5.3%. Strong agreement between organ doses estimated using actual and estimated tube current values provides validation of both methods for estimating tube current values based on data provided in the topogram or simulated from image data. © 2017 American Association of Physicists in Medicine.
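The patient-attenuation metric of AAPM Report 220 referenced above is the water-equivalent diameter, which can be computed directly from CT image data. A sketch on a synthetic slice (the pixel spacing and phantom are assumed values, not data from the study):

```python
import numpy as np

# Sketch of the AAPM Report 220 attenuation metric: water-equivalent area
# A_w = sum(HU/1000 + 1) * pixel_area over a slice, and water-equivalent
# diameter D_w = 2*sqrt(A_w/pi). A synthetic circular water "patient" is used.
pixel_mm = 0.7                                      # pixel spacing (assumed)
slice_hu = np.full((512, 512), -1000.0)             # air background (-1000 HU)
yy, xx = np.mgrid[:512, :512]
body = (yy - 256) ** 2 + (xx - 256) ** 2 < 200**2   # disk of radius 200 px
slice_hu[body] = 0.0                                # water: 0 HU

a_w = np.sum(slice_hu / 1000.0 + 1.0) * pixel_mm**2     # water-equivalent area
d_w = 2.0 * np.sqrt(a_w / np.pi)
print(f"water-equivalent diameter: {d_w / 10:.1f} cm")  # ~28 cm: 400 px * 0.7 mm
```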
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Changes in moisture tracks tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus, high-quality data on both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
A novel measure of effect size for mediation analysis.
Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken
2018-06-01
Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A Spatial Method to Calculate Small-Scale Fisheries Extent
NASA Astrophysics Data System (ADS)
Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.
2016-02-01
Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and number of fishing vessels in the oceans. Methods that currently exist are built around electronic tracking and logbook systems and generally focus on industrial fisheries. Spatial extent therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimates are made across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate that this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.
Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng
2015-01-01
To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high-resolution current regulation. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits add significant extra hardware cost, delay and noise to the system, and they can also harm system reliability. Therefore, current sensorless control techniques can bring cost-effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as sensored current control mode. To implement the EKF, the load value is necessary. However, the load may vary from time to time, which can lead to errors in the estimated current and the filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with a conventional voltage-controlled system, the transient response is greatly improved, since it only takes two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
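The sensorless idea can be sketched with a Kalman-type observer on an averaged boost-converter model, estimating the inductor current from output-voltage samples alone. Component values, duty cycle and noise levels below are assumptions; the paper's full EKF additionally models parasitics and load variation (the LVEE module):

```python
import numpy as np

# Sketch: Kalman observer on an averaged boost converter model,
#   L di/dt = Vin - (1-d)*v,   C dv/dt = (1-d)*i - v/R,
# estimating the inductor current i while measuring only the output voltage v.
L, C, R, Vin = 100e-6, 470e-6, 10.0, 12.0   # assumed component values
d = 0.5                                      # duty cycle (fixed for the sketch)
dt = 1e-5

A = np.array([[1.0, -dt * (1 - d) / L],
              [dt * (1 - d) / C, 1.0 - dt / (R * C)]])   # discretized dynamics
B = np.array([dt * Vin / L, 0.0])
H = np.array([[0.0, 1.0]])                   # only the output voltage is measured

Q, Rn = np.diag([1e-6, 1e-6]), np.array([[1e-4]])
x_true = np.array([0.0, 0.0])                # [inductor current, output voltage]
x_est, P = np.array([0.0, 0.0]), np.eye(2)
rng = np.random.default_rng(3)
for _ in range(20000):
    x_true = A @ x_true + B                              # plant
    z = x_true[1] + rng.normal(0, 0.01)                  # noisy voltage sample
    x_est = A @ x_est + B                                # predict
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)        # update
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true inductor current {x_true[0]:.2f} A, estimate {x_est[0]:.2f} A")
```

At steady state the averaged model settles at v = Vin/(1-d) = 24 V and i = v/((1-d)R) = 4.8 A, which the observer recovers without a current sensor.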
Revisiting Boundary Perturbation Theory for Inhomogeneous Transport Problems
Favorite, Jeffrey A.; Gonzalez, Esteban
2017-03-10
Adjoint-based first-order perturbation theory is revisited for boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema's estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.
Global anthropogenic methane emissions 2005-2030: technical mitigation potentials and costs
NASA Astrophysics Data System (ADS)
Höglund-Isaksson, L.
2012-10-01
This paper presents estimates of current and future global anthropogenic methane emissions, their technical mitigation potential and associated costs for the period 2005 to 2030. The analysis uses the GAINS model framework to estimate emissions, mitigation potentials and costs for all major sources of anthropogenic methane for 83 countries/regions, which are aggregated to produce global estimates. Global emissions are estimated at 323 Mt methane in 2005, with an expected increase to 414 Mt methane in 2030. The technical mitigation potential is estimated at 195 Mt methane in 2030, of which about 80 percent is found attainable at a marginal cost of less than 20 Euro per tonne CO2-eq when using a social planner cost perspective. With a private investor cost perspective, the corresponding fraction is only 30 percent. Major sources of uncertainty in the emission estimates are identified and discussed.
Attention as an effect not a cause
Krauzlis, Richard J.; Bollimunta, Anil; Arcizet, Fabrice; Wang, Lupeng
2014-01-01
Attention is commonly thought to be important for managing the limited resources available in sensory areas of neocortex. Here we present an alternative view that attention arises as a byproduct of circuits centered on the basal ganglia involved in value-based decision-making. The central idea is that decision-making depends on properly estimating the current state of the animal and its environment, and that the weighted inputs to the currently prevailing estimate give rise to the filter-like properties of attention. After outlining this new framework, we describe findings from physiology, anatomy, computational and clinical work that support this point of view. We conclude that the brain mechanisms responsible for attention employ a conserved circuit motif that predates the emergence of the neocortex. PMID:24953964
Estimating Power System Dynamic States Using Extended Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw
2014-10-31
The state estimation tools currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated “dynamic state estimation” includes true system dynamics reflected in differential equations, unlike previously proposed “dynamic state estimation,” which only considers time-variant snapshots based on steady-state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
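The differential-equation dynamics referred to here are typically generator swing equations. A sketch of the discretized swing equation that such a dynamic state estimator would use as its process model (machine parameters are illustrative assumptions):

```python
import numpy as np

# Sketch of the classical generator swing equation,
#   delta' = w,   w' = (Pm - Pe(delta) - D*w) / M,   Pe = E*V/X * sin(delta),
# discretized by forward Euler so it can serve as the process model of an
# EKF-based dynamic state estimator. Parameters are illustrative.
M, D, Pm, E, V, X = 6.0, 0.8, 0.9, 1.05, 1.0, 0.5
dt = 0.01

def step(delta, w):
    pe = E * V / X * np.sin(delta)       # electrical power output
    return delta + dt * w, w + dt * (Pm - pe - D * w) / M

delta, w = 0.1, 0.0                      # rotor angle (rad), speed deviation
for _ in range(5000):                    # 50 s of simulated dynamics
    delta, w = step(delta, w)

print(f"equilibrium angle: {delta:.3f} rad")   # sin(delta) -> Pm*X/(E*V)
```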
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
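The weighting scheme can be sketched on simulated data: estimate the non-missingness probability with a kernel smoother, then weight each fully observed censoring indicator in a Nelson-Aalen-type sum. This is an illustrative construction under assumed simulation settings, not the paper's exact estimator:

```python
import numpy as np

# Sketch of an inverse probability-of-non-missingness weighted Nelson-Aalen
# estimator: pi(x) = P(indicator observed | X = x) is estimated with a
# Nadaraya-Watson kernel smoother, and each observed censoring indicator is
# weighted by 1/pi_hat(X_i). All data are simulated.
rng = np.random.default_rng(4)
n = 5000
T = rng.exponential(1.0, n)              # event times, true cumulative hazard = t
Cc = rng.exponential(2.0, n)             # censoring times
X = np.minimum(T, Cc)
delta = (T <= Cc).astype(float)          # censoring indicator
pi_true = 1.0 / (1.0 + np.exp(-(1.0 - X)))     # MAR missingness depends on X
xi = (rng.random(n) < pi_true).astype(float)   # 1 = indicator observed

def pi_hat(x, h=0.3):                    # kernel estimate of P(xi = 1 | X = x)
    k = np.exp(-0.5 * ((X - x) / h) ** 2)
    return np.dot(k, xi) / k.sum()

order = np.argsort(X)
Xs, ds, xis = X[order], delta[order], xi[order]
at_risk = n - np.arange(n)               # Y(X_i) for the sorted sample
jumps = xis * ds / np.array([pi_hat(x) for x in Xs]) / at_risk
Lambda = np.cumsum(jumps)                # weighted cumulative hazard estimate
i1 = np.searchsorted(Xs, 1.0)
print(f"estimated cumulative hazard at t=1: {Lambda[i1]:.2f} (true: 1.00)")
```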
Subramanian, Sundarraman
2006-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
Intelligent voltage control strategy for three-phase UPS inverters with output LC filter
NASA Astrophysics Data System (ADS)
Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.
2015-08-01
This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller is composed of a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. In addition, a linear load current observer that estimates the load currents is used to eliminate the need for load current sensors. The proposed SFNN controller and the observer are robust to filter inductance variations, and their stability analyses are described in detail. The experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy achieves smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads, compared to the conventional sliding mode control method.
Spatial and Temporal Influences on Carbon Storage in Hydric Soils of the Conterminous United States
NASA Astrophysics Data System (ADS)
Sundquist, E. T.; Ackerman, K.; Bliss, N.; Griffin, R.; Waltman, S.; Windham-Myers, L.
2016-12-01
Defined features of hydric soils persist over extensive areas of the conterminous United States (CUS) long after their hydric formation conditions have been altered by historical changes in land and water management. These legacy hydric features may represent previous wetland environments in which soil carbon storage was significantly higher before the influence of human activities. We hypothesize that historical alterations of hydric soil carbon storage can be approximated using carefully selected estimates of carbon storage in currently identified hydric soils. Using the Soil Survey Geographic (SSURGO) database, we evaluate carbon storage in identified hydric soil components that are subject to discrete ranges of current or recent conditions of flooding, ponding, and other indicators of hydric and non-hydric soil associations. We check our evaluations and, where necessary, adjust them using independently published soil data. We compare estimates of soil carbon storage under various hydric and non-hydric conditions within proximal landscapes and similar biophysical settings and ecosystems. By combining these setting- and ecosystem-constrained comparisons with the spatial distribution and attributes of wetlands in the National Wetlands Inventory, we impute carbon storage estimates for soils that occur in current wetlands and for hydric soils that are not associated with current wetlands. Using historical data on land use and water control structures, we map the spatial and temporal distribution of past changes in land and water management that have affected hydric soils. We combine these maps with our imputed carbon storage estimates to calculate ranges of values for historical and present-day carbon storage in hydric soils throughout the CUS. These estimates may provide useful constraints for projections of potential carbon storage in hydric soils under future conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Gomez, P.; Harding, S. F.; Richmond, M. C.
2017-01-01
Standards provide recommendations for best practices when installing current meters to measure fluid flow in closed conduits. A central guideline requires the velocity distribution to be regular and the flow steady. Because of the nature of the short converging intakes typical of low-head hydroturbines, these assumptions may be invalid if current meters are intended to be used to estimate discharge. Usual concerns are (1) the effects of the number of devices, (2) the sampling location and (3) the high turbulence caused by blockage from submersible traveling screens usually deployed for safe downstream fish passage. These three effects were examined in the present study by using 3D simulated flow fields in both steady-state and transient modes. In the process of describing an application at an existing hydroturbine intake at Ice Harbor Dam, the present work outlined the methods involved, which combined computational fluid dynamics, laboratory measurements in physical models of the hydroturbine, and current meter performance evaluations in experimental settings. The main conclusions in this specific application were that a steady-state flow field sufficed to determine the adequate number of meters and their location, and that both the transverse velocity and turbulence intensity had a small impact on estimate errors. However, while it may not be possible to extrapolate these findings to other field conditions and measuring devices, the study laid out a path to conduct similar assessments in other applications.
Kjeldsen, Henrik D.; Kaiser, Marcus; Whittington, Miles A.
2015-01-01
Background: Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. New method: Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. Results: The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. Comparison with existing methods: The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Conclusions: Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. 
More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. PMID:26026581
Black carbon emissions in Russia: A critical review
Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; ...
2017-05-18
Here, this study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401 Gg-1453 Gg, while OC emissions are 9224 Gg with uncertainty ranging between 5596 Gg and 14,736 Gg. Wildfires dominated and contributed about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to significantly contribute to the lower emissions and forcing values found in this study.
Learning and Teaching Measurement (2003 Yearbook)
ERIC Educational Resources Information Center
Clements, Douglas H., Ed.
2003-01-01
Measurement can develop in the earliest years from children's experience, and it readily lends itself to real-world application. Focusing on research and practice, NCTM's 2003 Yearbook presents current thinking about the learning and teaching of measurement, including students' understanding, the mathematics of measurement, estimation and…
R package to estimate intracluster correlation coefficient with confidence interval for binary data.
Chakraborty, Hrishikesh; Hossain, Akhtar
2018-03-01
The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and their confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data and identified situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments-based estimation, direct probabilistic methods, correlation-based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data using an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the outputs. The R package ICCbin presents a very flexible and easy-to-use way to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
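Of the estimator families listed above, the classic one-way analysis-of-variance estimator is the simplest to illustrate. The sketch below is a hypothetical standalone Python implementation for intuition only; it is not code from the ICCbin package, which is written in R.

```python
def icc_anova(clusters):
    """One-way ANOVA estimator of the ICC for clustered binary data.

    clusters: list of lists, one inner list of 0/1 responses per cluster.
    Returns (MSB - MSW) / (MSB + (n0 - 1) * MSW), with n0 the
    adjusted mean cluster size for unequal clusters.
    """
    k = len(clusters)                          # number of clusters
    N = sum(len(c) for c in clusters)          # total number of responses
    grand_mean = sum(sum(c) for c in clusters) / N

    # Between- and within-cluster sums of squares
    ssb = sum(len(c) * (sum(c) / len(c) - grand_mean) ** 2 for c in clusters)
    ssw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
    msb = ssb / (k - 1)                        # between-cluster mean square
    msw = ssw / (N - k)                        # within-cluster mean square

    # Adjusted mean cluster size n0 (accounts for unequal cluster sizes)
    n0 = (N - sum(len(c) ** 2 for c in clusters) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Perfectly homogeneous clusters give an ICC of 1.0
high = icc_anova([[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]])
```

As the abstract notes, this ANOVA variant is only one of 16 estimators the package offers; resampling and probabilistic methods behave differently, particularly for small numbers of clusters.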
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, L; Lambert, C; Nyiri, B
Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
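The dose estimate at issue scales linearly with the tube current-time product (mAs), so a per-unit calibration factor converts scan settings into an estimated dose. A minimal sketch, with all numbers illustrative rather than the clinic's measured values:

```python
def estimated_ctdi(mA, pulse_s, n_frames, ctdi_per_mAs):
    """Estimate CBCT dose (mGy) from scan settings.

    mA * pulse_s gives the mAs per frame; multiplied by the number of
    frames and by a measured calibration factor (mGy per mAs) for the unit.
    A miscalibrated tube current shifts this estimate proportionally.
    """
    return mA * pulse_s * n_frames * ctdi_per_mAs

# Hypothetical settings: 20 mA, 40 ms pulses, 660 frames, 0.05 mGy/mAs
dose = estimated_ctdi(20.0, 0.040, 660, 0.05)  # mGy
```

On this model, a 25% tube-current discrepancy between units (as initially measured) translates directly into a 25% error in any dose estimate computed from nominal settings, which is the motivation for calibrating every unit.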
NASA Astrophysics Data System (ADS)
Bosse, Anthony; Testor, Pierre; Mortier, Laurent; Beguery, Laurent; Bernardet, Karim; Taillandier, Vincent; d'Ortenzio, Fabrizio; Prieur, Louis; Coppola, Laurent; Bourrin, François
2013-04-01
In the last 5 years, an unprecedented effort in the sampling of the Northern Current (NC) has been carried out using gliders, which collected more than 50,000 profiles down to a maximum depth of 1000 m along a few repeated sections perpendicular to the French coast. Based on this dataset, this study presents a first quantitative picture of the NC over the 0-1000 m depth range. We show its mean temperature and salinity structure, characterized by the different water masses of the basin (Atlantic Water, Winter Intermediate Water, Levantine Intermediate Water and Western Mediterranean Deep Water), for each season and at different locations. Geostrophic currents are derived by integrating the thermal-wind balance, using the mean glider estimate of the current during each dive as a reference. Estimates of the heat, salt, and volume transport are then computed in order to draw a heat and salt budget of the NC. The results show a strong seasonal variability due to the intense surface buoyancy loss in winter, which drives offshore vertical mixing that deepens the mixed layer to several hundred meters over the whole basin and, in one particular area, down to the sea floor (the deep convection area). The horizontal density gradient intensifies in winter, leading to geostrophic currents that are more intense and more confined to the continental slope, and thus to enhanced mesoscale activity (meandering, formation of eddies through baroclinic instability, etc.). The mean transport estimate of the NC is found to be about 2-3 Sv greater than previous, likely spurious, estimates. The heat budget of the NC also provides an estimate of the mean across-shore heat/salt flux directly impacting the Gulf of Lion region, where deep ocean convection, a key process in the thermohaline circulation of the Mediterranean Sea, can occur in winter.
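The referencing step described in this abstract can be sketched numerically: integrate the thermal-wind shear upward from depth, then shift the relative velocity profile so that its depth average matches the glider dive-average current. All values below (density gradient, Coriolis parameter, dive-average current) are illustrative assumptions, not the study's data.

```python
# Sketch: reference a geostrophic shear profile to a glider dive-average
# current (illustrative numbers only, SI units throughout).
g, rho0, f = 9.81, 1029.0, 1.0e-4        # gravity, reference density, Coriolis

depths = list(range(0, 1001, 10))         # 0..1000 m, index 0 = surface
drho_dx = [2.0e-6] * len(depths)          # assumed cross-shore density gradient (kg m-4)

# Thermal wind: dv/dz = -(g / (rho0 * f)) * drho/dx
shear = [-(g / (rho0 * f)) * s for s in drho_dx]

# Integrate upward from 1000 m (relative velocity, zero at the bottom)
v_rel = [0.0] * len(depths)
for i in range(len(depths) - 2, -1, -1):
    dz = depths[i + 1] - depths[i]
    v_rel[i] = v_rel[i + 1] + 0.5 * (shear[i] + shear[i + 1]) * dz

# Shift so the 0-1000 m average equals the glider dive-average current
v_dac = -0.25                              # assumed dive-average current (m/s)
offset = v_dac - sum(v_rel) / len(v_rel)
v_abs = [v + offset for v in v_rel]
```

The shift replaces the classical "level of no motion" assumption: the glider supplies an absolute depth-mean reference, so the whole profile is anchored to a measurement rather than to an arbitrary zero.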
NASA Technical Reports Server (NTRS)
Emmons, T. E.
1976-01-01
The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.
Study of plasma environments for the integrated Space Station electromagnetic analysis system
NASA Technical Reports Server (NTRS)
Singh, Nagendra
1992-01-01
The final report includes an analysis of various plasma effects on the electromagnetic environment of the Space Station Freedom. Effects of arcing are presented. Concerns of control of arcing by a plasma contactor are highlighted. Generation of waves by contaminant ions are studied and amplitude levels of the waves are estimated. Generation of electromagnetic waves by currents in the structure of the space station, driven by motional EMF, is analyzed and the radiation level is estimated.
An efficiency-decay model for Lumen maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobashev, Georgiy; Baldasaro, Nicholas G.; Mills, Karmann C.
Proposed is a multicomponent model for the estimation of light-emitting diode (LED) lumen maintenance using test data that were acquired in accordance with the test standards of the Illumination Engineering Society of North America, i.e., LM-80-08. Lumen maintenance data acquired with this test do not always follow exponential decay, particularly data collected in the first 1000 h or under low-stress (e.g., low temperature) conditions. This deviation from true exponential behavior makes it difficult to use the full data set in models for the estimation of the lumen-maintenance decay coefficient. As a result, critical information that is relevant to the early life or low-stress operation of LED light sources may be missed. We present an efficiency-decay model approach, where all lumen maintenance data can be used to provide an alternative estimate of the decay rate constant. The approach considers a combined model wherein one part describes an initial “break-in” period and another part describes the decay in lumen maintenance. During the break-in period, several mechanisms within the LED can act to produce a small (typically <10%) increase in luminous flux. The effect of the break-in period and its longevity is more likely to be present at low ambient temperatures and currents, where the discrepancy between a standard TM-21 approach and our proposed model is the largest. For high temperatures and currents, the difference between the estimates becomes nonsubstantial. Finally, our approach makes use of all the collected data and avoids producing unrealistic estimates of the decay coefficient.
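A minimal sketch of such a two-component model (hypothetical functional form and parameter values, not the authors' fitted model): luminous flux is the product of a saturating break-in term and an exponential decay, which reproduces the early rise followed by long-term decay that a single exponential cannot capture.

```python
import math

def lumen_maintenance(t, beta=0.08, tau_b=300.0, alpha=2.0e-5):
    """Illustrative break-in + decay model of normalized luminous flux.

    (1 - beta*exp(-t/tau_b)) rises toward 1 during the break-in period
    (a small initial increase in flux); exp(-alpha*t) is the long-term
    lumen-maintenance decay. t is in hours.
    """
    return (1.0 - beta * math.exp(-t / tau_b)) * math.exp(-alpha * t)

# Evaluate over 10,000 h: flux first rises above its t=0 value, then decays
flux = [lumen_maintenance(t) for t in range(0, 10001, 500)]
peak = max(flux)
```

Fitting a single exponential to data with this shape over-weights the early rise, which is why the abstract argues for a model that describes the break-in explicitly instead of discarding the first 1000 h.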
The international food unit: a new measurement aid that can improve portion size estimation.
Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M
2017-09-12
Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by the inconsistent measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4x4x4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods. 
The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
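The accuracy metric reported above, the median absolute percentage error of estimated versus true volumes, can be sketched as follows (the example values are made up for illustration, not the study's data):

```python
def median_pct_error(estimates, true_volumes):
    """Median absolute percentage error between estimated and true volumes."""
    errors = sorted(abs(e - t) / t * 100.0 for e, t in zip(estimates, true_volumes))
    n = len(errors)
    mid = n // 2
    # Median: middle value for odd n, mean of the two middle values for even n
    return errors[mid] if n % 2 else 0.5 * (errors[mid - 1] + errors[mid])

# Hypothetical estimates (cm³) for three foods against their true volumes
true_v = [64.0, 128.0, 200.0]
ifu_est = [70.0, 120.0, 230.0]
med = median_pct_error(ifu_est, true_v)
```

The median (rather than the mean) is the natural summary here because percentage errors in portion estimation are strongly skewed, which is also why the study compares groups with the non-parametric Kruskal-Wallis test.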
Valuing Insect Pollination Services with Cost of Replacement
Allsopp, Mike H.; de Lange, Willem J.; Veldtman, Ruan
2008-01-01
Value estimates of ecosystem goods and services are useful to justify the allocation of resources towards conservation, but inconclusive estimates risk unsustainable resource allocations. Here we present replacement costs as a more accurate value estimate of insect pollination as an ecosystem service, although this method could also be applied to other services. The importance of insect pollination to agriculture is unequivocal. However, whether this service is largely provided by wild pollinators (genuine ecosystem service) or managed pollinators (commercial service), and which of these requires immediate action amidst reports of pollinator decline, remains contested. If crop pollination is used to argue for biodiversity conservation, clear distinction should be made between values of managed- and wild pollination services. Current methods either under-estimate or over-estimate the pollination service value, and make use of criticised general insect and managed pollinator dependence factors. We apply the theoretical concept of ascribing a value to a service by calculating the cost to replace it, as a novel way of valuing wild and managed pollination services. Adjusted insect and managed pollinator dependence factors were used to estimate the cost of replacing insect- and managed pollination services for the Western Cape deciduous fruit industry of South Africa. Using pollen dusting and hand pollination as suitable replacements, we value pollination services significantly higher than current market prices for commercial pollination, although lower than traditional proportional estimates. The complexity associated with inclusive value estimation of pollination services required several defendable assumptions, but made estimates more inclusive than previous attempts. Consequently this study provides the basis for continued improvement in context specific pollination service value estimates. PMID:18781196
Skylab S-193 radar altimeter experiment analyses and results
NASA Technical Reports Server (NTRS)
Brown, G. S. (Editor)
1977-01-01
The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing angle estimates using average waveform data. A correlation of tracking loop bandwidth with magnitude of pointing error is established. The impact of ocean currents and precipitation on the received power is shown to be a measurable effect. For large sea state conditions, measurements of the backscatter coefficient σ0 indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15 deg) values of σ0 are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating direction of pointing error. Pulse compression techniques for and results of estimating significant waveheight from waveform data are presented and are also shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.
Pańkowska, Ewa
2010-01-01
In this issue of Journal of Diabetes Science and Technology, Shapira and colleagues present new concepts of carbohydrate load estimation in intensive insulin therapy. By using a mathematical model, they attempt to establish how accurately carbohydrate food content should be maintained in order to keep postprandial blood glucose levels in the recommended range. Their mathematical formula, the “bolus guide” (BG), is verified by simulating prandial insulin dosing and responding to proper blood glucose levels. Different variables such as the insulin sensitivity factor, insulin-to-carbohydrate ratio, and target blood glucose were incorporated into this formula to establish the calculated insulin dose. The new approach presented here estimates the carbohydrate content by rearranging the carbohydrate load instead of the simple point estimation that current bolus calculators (BCs) use. Computerized estimations show that the BG directives, as compared to a BC, result in more glucose levels above 200 mg/dl and thus fewer hypoglycemia readings. PMID:20663454
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Polzin, Kurt A.; Bonds, Kevin W.; Emsellem, Gregory D.
2011-01-01
Results are presented demonstrating the effect of inductive coil geometry and current sheet trajectory on the exhaust velocity of propellant in conical theta pinch pulsed inductive plasma accelerators. The electromagnetic coupling between the inductive coil of the accelerator and a plasma current sheet is simulated, substituting a conical copper frustum for the plasma. The variation of system inductance as a function of plasma position is obtained by displacing the simulated current sheet from the coil while measuring the total inductance of the coil. Four coils of differing geometries were employed, and the total inductance of each coil was measured as a function of the axial displacement of two separate copper frusta, both having the same cone angle and length as the coil but with one compressed to a smaller size relative to the coil. The measured relationship between total coil inductance and current sheet position closes a dynamical circuit model that is used to calculate the resulting current sheet velocity for various coil and current sheet configurations. The results of this model, which neglects the pinching contribution to thrust, radial propellant confinement, and plume divergence, indicate that in a conical theta pinch geometry current sheet pinching is detrimental to thruster performance, reducing the kinetic energy of the exhausting propellant by up to 50% (at the upper bound for the parameter range of the study). The decrease in exhaust velocity was larger for coils and simulated current sheets of smaller half cone angles. An upper bound for the pinching contribution to thrust is estimated for typical operating parameters. Measurements of coil inductance for three different current sheet pinching conditions are used to estimate the magnetic pressure as a function of current sheet radial compression. 
The gas-dynamic contribution to axial acceleration is also estimated and shown to not compensate for the decrease in axial electromagnetic acceleration that accompanies the radial compression of the plasma in conical theta pinches.
Essays on Economics of Education
ERIC Educational Resources Information Center
Kim, Bo Min
2013-01-01
This dissertation analyzes the effects of educational programs for students who need special care in secondary and postsecondary school. These educational programs present serious endogeneity problems to a researcher estimating their causal effects, because most students try to avoid these programs. I extend the current literature in the…
Psycho-Social Aspects of Educating Epileptic Children: Roles for School Psychologists.
ERIC Educational Resources Information Center
Frank, Brenda B.
1985-01-01
Epileptic children may have physical and emotional needs which can interfere with learning and socialization. Current prevalence estimates, definitions, and classifications of epilepsy are surveyed. Factors affecting the epileptic child's school performance and specific learning problems are addressed. Specific roles are presented for school…
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
Population Trends for Washington State. 1995.
ERIC Educational Resources Information Center
Washington State Office of Financial Management, Olympia.
This document provides tables and figures of current demographic data for the state, counties, cities, and towns of Washington. The report is divided into two main sections: (1) "State, County, City Populations"; and (2) "Selected Estimates and Information". Section 1 presents such data as: population change and net migration…
Tetrabromobisphenol A (TBBPA) is currently the world's highest production volume brominated flame retardant. Humans are frequently exposed to TBBPA by the dermal route. In the present study, a parallelogram approach was used to make predictions of internal dose in exposed humans...
Planted forests and plantations
Ray Sheffield
2009-01-01
Forest resource statistics from the 2000 Resources Planning Act (RPA) Assessment were updated to provide current information on the Nation's forests. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or State. Current resource data and trends...
Estimating wildfire behavior and effects
Frank A. Albini
1976-01-01
This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.
DOT National Transportation Integrated Search
1993-01-01
This paper describes the current structure of transportation finance in the Commonwealth. The financial structure is made up of estimated revenues and recommended allocations. We present comparisons of the shares of state and federal transportation r...
Long-term financing needs for HIV control in sub-Saharan Africa in 2015–2050: a modelling study
Atun, Rifat; Chang, Angela Y; Ogbuoji, Osondu; Silva, Sachin; Resch, Stephen; Hontelez, Jan; Bärnighausen, Till
2016-01-01
Objectives To estimate the present value of current and future funding needed for HIV treatment and prevention in 9 sub-Saharan African (SSA) countries that account for 70% of HIV burden in Africa under different scenarios of intervention scale-up. To analyse the gaps between current expenditures and funding obligations, and discuss the policy implications of future financing needs. Design We used the Goals module from Spectrum, and applied the most up-to-date cost and coverage data to provide a range of estimates for future financing obligations. The four different scale-up scenarios vary by treatment initiation threshold and service coverage level. We compared the model projections to current domestic and international financial sources available in selected SSA countries. Results In the 9 SSA countries, the estimated resources required for HIV prevention and treatment in 2015–2050 range from US$98 billion, to maintain current coverage levels for treatment and prevention with eligibility for treatment initiation at a CD4 count of <500/mm3, to US$261 billion if treatment were extended to all HIV-positive individuals and prevention scaled up. With the addition of new funding obligations for HIV (which arise implicitly through commitment to achieve higher-than-current treatment coverage levels), overall financial obligations (the sum of debt levels and the present value of the stock of future HIV funding obligations) would rise substantially. Conclusions Investing upfront in the scale-up of HIV services to achieve high coverage levels will reduce HIV incidence and lower future prevention and treatment expenditures by realising the long-term preventive effects of ART in reducing HIV transmission. Future obligations are too substantial for most SSA countries to meet from domestic sources alone. New sources of funding, in addition to domestic sources, include innovative financing. Debt sustainability for a sustained HIV response is an urgent imperative for affected countries and donors.
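The "present value of current and future funding" in this abstract is a standard discounted sum of projected annual obligations. A minimal sketch; the flat funding stream and the 3% discount rate below are hypothetical illustrations, not the paper's figures:

```python
def present_value(annual_costs, discount_rate):
    """Discount a stream of future annual costs (years 1..N) to present value."""
    return sum(cost / (1.0 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

# Hypothetical illustration: a flat 10 billion USD/year funding obligation
# over 35 years (2016-2050), discounted at 3% per year.
flat_stream = [10.0] * 35          # billions of USD
pv = present_value(flat_stream, 0.03)
```

Under these assumed numbers the stream discounts to roughly $215 billion, the same order of magnitude as the paper's $98-261 billion range.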
PMID:26948960
Angular approach combined to mechanical model for tool breakage detection by eddy current sensors
NASA Astrophysics Data System (ADS)
Ritou, M.; Garnier, S.; Furet, B.; Hascoet, J. Y.
2014-02-01
The paper presents a new, complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of small damage so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced for reliability reasons. The tool condition is determined from estimates of the radial eccentricity of the teeth. An adequate criterion is proposed, combining a mechanical model of milling with an angular approach. Then, a new solution is proposed for estimating the cutting force using eddy current sensors implemented close to the spindle nose. Signals are analysed in the angular domain, notably by the synchronous averaging technique. Phase shifts induced by changes of machining direction are compensated. Results are compared with cutting forces measured with a dynamometer table. The proposed method is implemented in an industrial case of a pocket machining operation. One of the cutting edges was slightly damaged during machining, as shown by a direct measurement of the tool. A control chart is established with the estimates of cutter eccentricity obtained during machining from the eddy current sensor signals. The efficiency and reliability of the method are demonstrated by a successful detection of the damage.
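The synchronous averaging mentioned above averages the signal revolution by revolution in the angular domain, so that components locked to spindle rotation survive while asynchronous noise cancels. A minimal sketch on a synthetic tooth-passing signal; all parameters are hypothetical, not the paper's:

```python
import math
import random

def synchronous_average(signal, samples_per_rev):
    """Average a rotation-periodic signal over all complete revolutions,
    attenuating components not synchronous with spindle rotation."""
    n_rev = len(signal) // samples_per_rev
    avg = [0.0] * samples_per_rev
    for r in range(n_rev):
        for i in range(samples_per_rev):
            avg[i] += signal[r * samples_per_rev + i]
    return [a / n_rev for a in avg]

# Synthetic tooth-passing signal (4 teeth per revolution) buried in noise.
random.seed(0)
spr = 64       # angular samples per revolution
revs = 200
signal = [math.sin(2 * math.pi * 4 * i / spr) + random.gauss(0.0, 0.5)
          for i in range(spr * revs)]
clean = synchronous_average(signal, spr)
```

Averaging over 200 revolutions reduces the noise standard deviation by a factor of about 14 (sqrt(200)), leaving the tooth-synchronous component essentially intact.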
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
NASA Astrophysics Data System (ADS)
Gillies, D. M.; Knudsen, D. J.; Donovan, E.; Jackel, B. J.; Gillies, R.; Spanswick, E.
2017-12-01
We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) to derive a characteristic emission height for the optical emissions. In our 10 events we find that an altitude of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar (RISR-C), both of which are consistent with a characteristic emission height of 200 km. We also present the spatial error associated with georeferencing REdline Geospace Observatory (REGO) and THEMIS all-sky imagers (ASIs) and how it applies to altitude projections of the mapped image. Utilizing this error we validate the estimated altitude of redline aurora using two methods: triangulation between ASIs and field-aligned current profiles derived from magnetometers on-board the Swarm satellites.
"Flash" dance: how speed modulates perceived duration in dancers and non-dancers.
Sgouramani, Helena; Vatakis, Argiro
2014-03-01
Speed has been proposed as a modulating factor in duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise in spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience in time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers made duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was found for the production task. Dancers were significantly less variable in their time estimations than non-dancers. Speed and experience, therefore, affect participants' estimates of time. Results are discussed in relation to the theoretical framework of current models, focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.
A novel method for state of charge estimation of lithium-ion batteries using a nonlinear observer
NASA Astrophysics Data System (ADS)
Xia, Bizhong; Chen, Chaoren; Tian, Yong; Sun, Wei; Xu, Zhihui; Zheng, Weiwei
2014-12-01
The state of charge (SOC) is important for the safety and reliability of battery operation since it indicates the remaining capacity of a battery. However, as the internal state of each cell cannot be directly measured, the value of the SOC has to be estimated. In this paper, a novel method for SOC estimation in electric vehicles (EVs) using a nonlinear observer (NLO) is presented. One advantage of this method is that it does not need complicated matrix operations, so the computational cost is reduced. As a key step in the design of the nonlinear observer, the state-space equations based on the equivalent circuit model are derived. Lyapunov stability theory is employed to prove the convergence of the nonlinear observer. Four experiments are carried out to evaluate the performance of the presented method. The results show that the SOC estimation error converges to 3% within 130 s even when the initial SOC error is as large as 20%, and does not exceed 4.5% when the measurement suffers both 2.5% voltage noise and 5% current noise. Moreover, the presented method has advantages over the extended Kalman filter (EKF) and sliding mode observer (SMO) algorithms in terms of computational cost, estimation accuracy and convergence rate.
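The observer idea can be sketched in a few lines: propagate SOC by coulomb counting, then correct it with the mismatch between measured and predicted terminal voltage. This is a minimal Luenberger-style sketch with a hypothetical linear OCV curve and made-up parameters, not the paper's observer or battery model:

```python
def ocv(soc):
    # Hypothetical linear open-circuit-voltage curve; real OCV curves are nonlinear.
    return 3.0 + 1.2 * soc

CAPACITY = 3600.0   # coulombs (1 Ah), assumed cell capacity
R_INT = 0.05        # ohm, assumed internal resistance
GAIN = 0.5          # observer gain (dt * GAIN * 1.2 < 1 keeps the error decaying)
DT = 1.0            # s, sample period
CURRENT = 1.0       # A, constant discharge current

soc_true, soc_est = 0.9, 0.7            # observer starts with a 20% SOC error
for _ in range(600):
    v_meas = ocv(soc_true) - R_INT * CURRENT   # "measured" terminal voltage
    v_pred = ocv(soc_est) - R_INT * CURRENT    # observer's predicted voltage
    soc_true += DT * (-CURRENT / CAPACITY)     # true plant: coulomb counting
    soc_est += DT * (-CURRENT / CAPACITY + GAIN * (v_meas - v_pred))
```

With a linear OCV the estimation error shrinks by a fixed factor each step, so the initial 20% error vanishes well within the simulated 10 minutes.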
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models and specific scan protocols, and using patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model.
The generalizable model is a significant contribution of this work as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.
Transport Structure and Energetic of the North Atlantic Current in Subpolar Gyre from Observations
NASA Astrophysics Data System (ADS)
Houpert, Loïc; Inall, Mark; Dumont, Estelle; Gary, Stefan; Porter, Marie; Johns, William; Cunningham, Stuart
2017-04-01
We present the first 2 years of UK-OSNAP glider missions on the Rockall Plateau in the North Atlantic subpolar gyre. From July 2014 to August 2016, 20 glider sections were completed along 58°N, between 22°W and 15°W. Depth-averaged currents estimated from the gliders show very strong values (up to 45 cm s-1) associated with mesoscale variability, due particularly to eddies and subpolar mode water formation. The variability of the flow on the eastern slope of the Iceland Basin and on the Rockall Plateau is presented. Meridional absolute geostrophic transports are calculated from the glider data, and we discuss the vertical structure of the absolute meridional transport, especially the part associated with the North Atlantic Current.
Krivoshei, A; Uuetoa, H; Min, M; Annus, P; Uuetoa, T; Lamp, J
2015-08-01
The paper presents an analysis of the generic transfer function (TF) between Electrical Bioimpedance (EBI) measured non-invasively on the wrist and Central Aortic Pressure (CAP) measured invasively at the aortic root. The influence of Heart Rate (HR) variations on the generic TF and on reconstructed CAP waveforms is investigated. The HR variation analysis is performed on data from a single patient to exclude inter-patient influences at the current research stage. A new approach for estimating the generic TF from a data ensemble is presented as well. Moreover, the influence of the selection of the cardiac period beginning point is analyzed and an empirically optimal solution for its selection is proposed.
Counterbalance of cutting force for advanced milling operations
NASA Astrophysics Data System (ADS)
Tsai, Nan-Chyuan; Shih, Li-Wen; Lee, Rong-Mao
2010-05-01
The goal of this work is to concurrently counterbalance the dynamic cutting force and regulate the spindle position deviation under various milling conditions by integrating an active magnetic bearing (AMB) technique, a fuzzy logic algorithm and an adaptive self-tuning feedback loop. Since the dynamics of a milling system are largely determined by a few operating conditions, such as spindle speed, cut depth and feedrate, the dynamic model for the cutting process is more appropriately constructed from experiments than by a theoretical approach. The experimental data, for both idle and cutting conditions, are utilized to establish a database of milling dynamics so that the system parameters can be estimated on-line by the proposed fuzzy logic algorithm once a cutting mission is engaged. Based on the estimated milling system model and the preset operating conditions, i.e., spindle speed, cut depth and feedrate, the current cutting force can be numerically estimated. Once the current cutting force can be estimated in real time, the corresponding compensation force can be exerted by the equipped AMB to counterbalance the cutting force, in addition to the spindle position regulation by feedback of the spindle position. On the other hand, since the magnetic force is nonlinear with respect to the applied electric current and air gap, the characteristics of the employed AMB are also investigated by experiments, and a nonlinear mathematical model, in terms of the air gap between the spindle and the electromagnetic pole and the coil current, is developed. In the end, experimental simulations of realistic milling are presented to verify the efficacy of the fuzzy controller for spindle position regulation and the capability of dynamic cutting force counterbalance.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
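For Gaussian measurement noise, the cost function described above reduces (up to a constant) to a sum of squared residuals, and minimizing it yields the maximum likelihood estimate. A toy one-parameter example, assuming a simple linear measurement model rather than the aircraft equations of motion:

```python
import random

random.seed(1)
A_TRUE = -0.7                          # parameter to recover (a toy "derivative")
xs = [0.1 * i for i in range(100)]     # known input time history
zs = [A_TRUE * x + random.gauss(0.0, 0.05) for x in xs]  # noisy measurements

def cost(a):
    """Negative log-likelihood up to a constant: sum of squared residuals."""
    return sum((z - a * x) ** 2 for x, z in zip(xs, zs))

# For this linear one-parameter model the minimizer is available in closed form;
# in the aircraft problem the same cost is minimized iteratively.
a_hat = sum(x * z for x, z in zip(xs, zs)) / sum(x * x for x in xs)
```

Plotting `cost(a)` over a range of `a` values reproduces the bowl-shaped cost surfaces the paper uses to illustrate minimization.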
Rock bed thermal storage: Concepts and costs
NASA Astrophysics Data System (ADS)
Allen, Kenneth; von Backström, Theodor; Joubert, Eugene; Gauché, Paul
2016-05-01
Thermal storage enables concentrating solar power (CSP) plants to provide baseload or dispatchable power. Currently CSP plants use two-tank molten salt thermal storage, with estimated capital costs of about 22-30 /kWhth. In the interests of reducing CSP costs, alternative storage concepts have been proposed. In particular, packed rock beds with air as the heat transfer fluid offer the potential of lower cost storage because of the low cost and abundance of rock. Two rock bed storage concepts which have been formulated for use at temperatures up to at least 600 °C are presented and a brief analysis and cost estimate is given. The cost estimate shows that both concepts are capable of capital costs less than 15 /kWhth at scales larger than 1000 MWhth. Depending on the design and the costs of scaling containment, capital costs as low as 5-8 /kWhth may be possible. These costs are between a half and a third of current molten salt costs.
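The cost advantage of rock follows from sensible-heat arithmetic: the energy stored per kilogram is cp times the temperature swing. A back-of-envelope sketch with assumed property and price values (not the paper's figures):

```python
CP_ROCK = 0.85      # kJ/(kg K), typical hard rock (assumed value)
DELTA_T = 500.0     # K, storage temperature swing (e.g. roughly 100 to 600 deg C)
ROCK_PRICE = 10.0   # USD per tonne, hypothetical delivered cost

kwh_per_kg = CP_ROCK * DELTA_T / 3600.0                  # kJ/kg -> kWh_th/kg
rock_cost_per_kwh = (ROCK_PRICE / 1000.0) / kwh_per_kg   # USD per kWh_th
```

Under these assumptions a kilogram of rock stores about 0.12 kWh_th and the rock itself contributes well under $1/kWh_th, which is why containment, insulation and air handling must dominate the quoted capital costs.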
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
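Of the four algorithms, the weighted centroid method is the simplest to sketch: each AP's known height is weighted by its RSS converted to linear power. Here a crude trimming step stands in for the paper's more principled robustification, and all names and numbers are hypothetical:

```python
def weighted_centroid_height(ap_heights, rss_dbm, trim=1):
    """Estimate receiver height as an RSS-weighted centroid of AP heights.
    The `trim` weakest readings are discarded as a crude robustification."""
    pairs = sorted(zip(rss_dbm, ap_heights))[trim:]   # drop weakest readings
    weights = [10 ** (r / 10.0) for r, _ in pairs]    # dBm -> linear power
    return sum(w * z for w, (_, z) in zip(weights, pairs)) / sum(weights)

# Hypothetical building: AP heights (m) and measured RSS (dBm); the -95 dBm
# reading is a distant outlier that would otherwise bias the estimate upward.
heights = [3.0, 6.0, 6.0, 9.0, 15.0]
rss = [-60, -45, -50, -62, -95]
z_hat = weighted_centroid_height(heights, rss)
```

The estimate lands near 6 m, i.e. the floor of the two strongest APs; mapping a height to a floor index is then a simple quantization step.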
Artificial neural networks for AC losses prediction in superconducting round filaments
NASA Astrophysics Data System (ADS)
Leclerc, J.; Makong Hell, L.; Lorin, C.; Masson, P. J.
2016-06-01
An extensive and fast method to estimate superconducting AC losses within a superconducting round filament carrying an AC current and subjected to an elliptical magnetic field (both rotating and oscillating) is presented. Elliptical fields are present in rotating machine stators, and being able to accurately predict AC losses in fully superconducting machines is paramount to generating realistic machine designs. The proposed method relies on an analytical scaling law (ASL) combined with two artificial neural network (ANN) estimators taking 9 input parameters representing the superconductor, external field and transport current characteristics. The ANNs are trained with data generated by finite element (FE) computations with a commercial software package (FlexPDE) based on the widely accepted H-formulation. After completion, the model is validated through comparison with additional randomly chosen data points and compared, for simple field configurations, to other predictive models. The loss estimation discrepancy is about 3% on average compared to the FE analysis. The main advantage of the model compared to FE simulations is its fast computation time (a few milliseconds), which allows it to be used in iterated design processes of fully superconducting machines. In addition, the proposed model provides a higher level of fidelity than the scaling laws existing in the literature, which usually consider only a pure AC field.
Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.
Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E
1999-05-20
The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
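The control logic described (estimate the current uptake, predict it forward, choose a feed rate to match) can be sketched as follows. The linear extrapolation, the feed glucose concentration and the units are illustrative assumptions, not the authors' adaptive routines:

```python
def predict_uptake(history, horizon):
    """Linearly extrapolate the glucose uptake rate from the last two samples.
    `history` holds (time_h, uptake_mmol_per_h) pairs."""
    (t0, q0), (t1, q1) = history[-2], history[-1]
    slope = (q1 - q0) / (t1 - t0)
    return q1 + slope * horizon

def feed_rate(predicted_uptake, feed_glucose=25.0):
    """Perfusion rate (L/h) that supplies the predicted uptake (mmol/h)
    from a feed of the given glucose concentration (mmol/L)."""
    return predicted_uptake / feed_glucose

# Two samples taken 12 h apart; set the feed rate for the next 12 h interval.
history = [(0.0, 10.0), (12.0, 12.0)]
rate = feed_rate(predict_uptake(history, horizon=12.0))
```

Basing the feed rate on the predicted rather than the last measured uptake is what keeps glucose near its set point between the infrequent samples.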
Estimation of vulnerability functions based on a global earthquake damage database
NASA Astrophysics Data System (ADS)
Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.
2009-04-01
Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses in insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the current building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases, and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. The database assembles, in a single organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys which have been carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, and the location of each survey. It is mounted on a GIS mapping system and links to the USGS Shakemaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: · basic earthquake data and its sources · details of the survey location and the intensity and other ground motion observations or assignments at that location · building and damage level classification, and tabulated damage survey results · photos showing typical examples of damage.
In future planned extensions of the database information on human casualties will also be assembled. The database also contains analytical tools enabling data from similar locations, building classes or ground motion levels to be assembled and thus vulnerability relationships derived for any chosen ground motion parameter, for a given class of building, and for particular countries or regions. The paper presents examples of vulnerability relationships for particular classes of buildings and regions of the world, together with the estimated uncertainty ranges. It will discuss the applicability of such vulnerability functions in earthquake loss assessment for insurance purposes or for earthquake risk mitigation.
Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics.
Boedo, J A; Rudakov, D L
2017-03-01
We present a method to calculate the ion saturation current, Isat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of Isat. It is noted that the Isat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating Te. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
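The harmonics technique rests on a useful identity: for an exponential probe characteristic driven by V = Vf + a*sin(wt), the n-th harmonic of the current has amplitude 2*Isat*I_n(a/Te), where I_n is the modified Bessel function. The ratio of two harmonic amplitudes therefore fixes Te, and either amplitude then yields Isat. A sketch on synthetic data, with all plasma parameters hypothetical:

```python
import math

def bessel_i(n, z, terms=40):
    """Modified Bessel function I_n(z) via its power series (fine for z < ~20)."""
    return sum((z / 2.0) ** (2 * k + n) /
               (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

# Synthetic probe current I(t) = Isat*(exp(a*sin(wt)/Te) - 1): a tip swept
# sinusoidally about the floating potential.  Parameter values are made up.
ISAT, TE, AMP = 0.20, 15.0, 10.0     # A, eV, V
N = 4096
theta = [2.0 * math.pi * i / N for i in range(N)]
probe_i = [ISAT * (math.exp(AMP * math.sin(t) / TE) - 1.0) for t in theta]

def harmonic_amp(sig, n):
    """Amplitude of the n-th harmonic by discrete Fourier projection."""
    c = sum(s * math.cos(n * t) for s, t in zip(sig, theta)) * 2.0 / N
    q = sum(s * math.sin(n * t) for s, t in zip(sig, theta)) * 2.0 / N
    return math.hypot(c, q)

A1, A2 = harmonic_amp(probe_i, 1), harmonic_amp(probe_i, 2)

# A2/A1 = I2(z)/I1(z) with z = a/Te; the ratio grows monotonically with z,
# so bisection recovers z, hence Te, and then Isat from the first harmonic.
lo, hi = 1e-6, 20.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if bessel_i(2, mid) / bessel_i(1, mid) < A2 / A1:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)
te_est = AMP / z                        # electron temperature, eV
isat_est = A1 / (2.0 * bessel_i(1, z))  # ion saturation current, A
```

Note that Isat and Te are recovered without ever biasing the tip into saturation, which is the source of the reduced heating and arcing reported above.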
Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boedo, J. A.; Rudakov, D. L.
Here we present a method to calculate the ion saturation current, Isat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of Isat. It is noted that the Isat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating Te. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics
Boedo, J. A.; Rudakov, D. L.
2017-03-20
Here we present a method to calculate the ion saturation current, Isat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of Isat. It is noted that the Isat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating Te. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which the processes used in the software development field have changed makes it very difficult to forecast the overall costs of a software project. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic proposal for a mathematical model based on genetic programming is given, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and the analysis of the existing models and product life cycle, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
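For context, the COCOMO 81 baseline that the PROMISE datasets encode is a simple power law, effort (person-months) = a * KLOC^b, with published mode-dependent constants; a genetic algorithm would, in effect, re-fit such constants and model structure against the dataset. A sketch of the basic model (the 32 KLOC project is hypothetical):

```python
def cocomo81_basic(kloc, mode="organic"):
    """Basic COCOMO 81 effort estimate in person-months: effort = a * KLOC**b,
    using the published constants for the three development modes."""
    a, b = {"organic": (2.4, 1.05),
            "semi-detached": (3.0, 1.12),
            "embedded": (3.6, 1.20)}[mode]
    return a * kloc ** b

effort_pm = cocomo81_basic(32.0)   # hypothetical 32 KLOC organic project
```

The superlinear exponent is the point: doubling project size more than doubles the estimated effort, and the embedded mode grows fastest.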
Improved estimation of random vibration loads in launch vehicles
NASA Technical Reports Server (NTRS)
Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.
1993-01-01
Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation, which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of the use of multi-DOF system models and response calculation based on numerical integration using the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping, and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
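The single-DOF baseline the paper critiques is the standard Miles' equation for RMS acceleration under flat base excitation. A minimal sketch, with invented input values:

```python
import math

def miles_grms(fn_hz, q, asd_g2_per_hz):
    """Miles' equation: RMS acceleration (Grms) of a single
    degree-of-freedom system under white-noise base excitation:
        Grms = sqrt((pi / 2) * fn * Q * ASD(fn))
    where fn is the natural frequency, Q the amplification factor,
    and ASD the input acceleration spectral density at fn (g^2/Hz).
    """
    return math.sqrt(math.pi / 2 * fn_hz * q * asd_g2_per_hz)

# Example (invented values): 100 Hz mode, Q = 10, 0.04 g^2/Hz input.
g = miles_grms(100.0, 10.0, 0.04)   # ~7.9 Grms
```

For multi-DOF systems with non-flat spectra, the paper's point is that this closed form can deviate significantly from numerical integration of the actual response.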
Manned Mars mission cost estimate
NASA Technical Reports Server (NTRS)
Hamaker, Joseph; Smith, Keith
1986-01-01
The potential costs of several options of a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model, as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed, and cost estimates are presented for six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program, after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
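The NDMMF equations themselves are given in the report; the Newton-Raphson scheme it describes can be illustrated generically. The sketch below solves a one-parameter score equation (MLE of an exponential rate, chosen for brevity with invented data), using the same iteration structure: repeatedly step by score divided by observed information.

```python
def newton_raphson(score, info, theta0, tol=1e-10, max_iter=50):
    """Solve score(theta) = 0 by Newton-Raphson:
        theta <- theta + score(theta) / info(theta),
    where info is the observed information, -d(score)/d(theta)."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / info(theta)
        theta += step
        if abs(step) < tol:
            return theta
    raise RuntimeError("did not converge")

# Illustration: MLE of an exponential rate r from data x (invented values).
x = [0.8, 1.3, 0.4, 2.1, 0.9]
n, s = len(x), sum(x)
# log-likelihood l(r) = n*log(r) - r*s; score = n/r - s; information = n/r^2
rate = newton_raphson(lambda r: n / r - s, lambda r: n / r ** 2, theta0=1.0)
# Closed form for comparison: r_hat = n / sum(x)
assert abs(rate - n / s) < 1e-8
```

The NDMMF case differs in solving a *system* of such equations simultaneously (a matrix solve per iteration), but the convergence logic is the same.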
DEVELOPMENT OF A REFINED DATABASE OF MAMMALIAN RELATIVE POTENCY ESTIMATES FOR DIOXIN-LIKE COMPOUNDS
The toxic equivalency factor (TEF) approach has been widely accepted as the most feasible method available at present for evaluating potential health risks associated with exposure to mixtures of dioxin-like compounds (DLCs). The current mammalian TEFs for the DLCs were establis...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, S.
This report summarizes the current status of the CPV industry and is updated from previous versions to include information from the last year. New information presented at the CPV-8 conference is included along with the addition of new companies that have announced their interest in CPV, and estimates of production volumes for 2011 and 2012.
Busch, D. Shallin; McElhany, Paul
2016-01-01
Ocean acidification (OA) has the potential to restructure ecosystems due to variation in species sensitivity to the projected changes in ocean carbon chemistry. Ecological models can be forced with scenarios of OA to help scientists, managers, and other stakeholders understand how ecosystems might change. We present a novel methodology for developing estimates of species sensitivity to OA that are regionally specific, and applied the method to the California Current ecosystem. To do so, we built a database of all published literature on the sensitivity of temperate species to decreased pH. This database contains 393 papers on 285 species and 89 multi-species groups from temperate waters around the world. Research on urchins and oysters and on adult life stages dominates the literature. Almost a third of the temperate species studied to date occur in the California Current. However, most laboratory experiments use control pH conditions that are too high to represent average current chemistry conditions in the portion of the California Current water column where the majority of the species live. We developed estimates of sensitivity to OA for functional groups in the ecosystem, which can represent single species or taxonomically diverse groups of hundreds of species. We based these estimates on the amount of available evidence derived from published studies on species sensitivity, how well this evidence could inform species sensitivity in the California Current ecosystem, and the agreement of the available evidence for a species/species group. This approach is similar to that taken by the Intergovernmental Panel on Climate Change to characterize certainty when summarizing scientific findings. Most functional groups (26 of 34) responded negatively to OA conditions, but when uncertainty in sensitivity was considered, only 11 groups had relationships that were consistently negative. 
Thus, incorporating certainty about the sensitivity of species and functional groups to OA is an important part of developing robust scenarios for ecosystem projections. PMID:27513576
Shipborne LF-VLF oceanic lightning observations and modeling
NASA Astrophysics Data System (ADS)
Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.
2015-10-01
Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Detection (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald W. Brown research vessel that allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Abstract Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
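The correction the study evaluates amounts to inverting a sensor model of the form i = background + sensitivity * glucose rather than assuming zero background. A minimal sketch of that idea, with invented calibration numbers (not the authors' algorithm or the commercial sensor's actual parameters):

```python
def calibrate_sensitivity(currents_nA, glucose_mgdl, background_nA=4.0):
    """Averaged one-point calibration: sensitivity in nA per (mg/dl),
    assuming a fixed background current produced at zero glucose."""
    slopes = [(i - background_nA) / g
              for i, g in zip(currents_nA, glucose_mgdl)]
    return sum(slopes) / len(slopes)

def estimate_glucose(current_nA, sensitivity, background_nA=4.0):
    # Invert the sensor model: i = background + sensitivity * glucose
    return (current_nA - background_nA) / sensitivity

# Invented calibration pairs (sensor current in nA, reference glucose mg/dl).
sens = calibrate_sensitivity([24.0, 44.0, 14.0], [100.0, 200.0, 50.0])
reading = estimate_glucose(34.0, sens)   # ~150 mg/dl with these numbers
```

Setting `background_nA=0` reproduces the uncorrected case the authors compare against; the paper's finding is that a nonzero estimate (4 nA) substantially improves accuracy.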
A labview-based GUI for the measurement of otoacoustic emissions.
Wu, Ye; McNamara, D M; Ziarani, A K
2006-01-01
This paper presents the outcome of a software development project aimed at creating a stand-alone, user-friendly signal processing tool for the estimation of distortion product otoacoustic emission (OAE) signals. OAE testing is one of the most commonly used methods for first screening of newborns' hearing. Most currently available commercial devices rely upon averaging long strings of data and subsequent discrete Fourier analysis to estimate low-level OAE signals from within the background noise in the presence of the strong stimuli. The main shortcomings of the presently employed technology are the long measurement time required and its low noise immunity. The result of the software development project presented here is a graphical user interface (GUI) module that implements a recently introduced adaptive technique of OAE signal estimation. This software module is easy to use and is freely disseminated on the Internet for the use of the hearing research community. The GUI module allows loading of a priori recorded OAE signals into the workspace and provides the user with interactive instructions for OAE signal estimation. Moreover, the user can generate simulated OAE signals to objectively evaluate the performance of the implemented signal processing technique.
Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.
2014-01-01
Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
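The pipeline above (steady-state output, lookup table to fault parameter, regression to wear rate, extrapolation to RUL) can be sketched with a toy physics model. All numbers and the linear "physics" below are invented for illustration; the paper's I/P transducer model is more involved.

```python
# Hypothetical illustration of the lookup-table idea: map a steady-state
# output to a fault-parameter value by nearest-neighbour search, then fit
# a linear trend to successive wear estimates to project remaining life.
def build_table(fault_values, model):
    # model(fault) -> steady-state output predicted by a physics model
    return [(model(f), f) for f in fault_values]

def wear_from_output(table, observed):
    return min(table, key=lambda row: abs(row[0] - observed))[1]

def linear_fit(ts, ys):
    # Least-squares slope/intercept for the damage-progression trend.
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    return slope, ybar - slope * tbar

# Toy model: output drops linearly as wear grows (invented physics).
model = lambda w: 10.0 - 2.0 * w
table = build_table([i * 0.1 for i in range(21)], model)
times = [0, 10, 20, 30]
wear = [wear_from_output(table, o) for o in [9.8, 9.4, 9.0, 8.6]]
rate, _ = linear_fit(times, wear)      # estimated wear growth per unit time
rul = (1.5 - wear[-1]) / rate          # time until a wear threshold of 1.5
```

The lookup table is attractive, as the abstract notes, because it works for nonlinear models too: the table is built once offline from the physics model, and online estimation is just a nearest-match search.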
An exploration of multilevel modeling for estimating access to drinking-water and sanitation.
Wolf, Jennyfer; Bonjour, Sophie; Prüss-Ustün, Annette
2013-03-01
Monitoring progress towards the targets for access to safe drinking-water and sanitation under the Millennium Development Goals (MDG) requires reliable estimates and indicators. We analyzed trends and reviewed current indicators used for those targets. We developed continuous time series for 1990 to 2015 for access to improved drinking-water sources and improved sanitation facilities by country using multilevel modeling (MLM). We show that MLM is a reliable and transparent tool with many advantages over alternative approaches to estimate access to facilities. Using current indicators, the MDG target for water would be met, but the target for sanitation missed considerably. The number of people without access to such services is still increasing in certain regions. Striking differences persist between urban and rural areas. Consideration of water quality and different classification of shared sanitation facilities would, however, alter estimates considerably. To achieve improved monitoring we propose: (1) considering the use of MLM as an alternative for estimating access to safe drinking-water and sanitation; (2) completing regular assessments of water quality and supporting the development of national regulatory frameworks as part of capacity development; (3) evaluating health impacts of shared sanitation; (4) using a more equitable presentation of countries' performances in providing improved services.
An algorithm for estimating aerosol optical depth from HIMAWARI-8 data over Ocean
NASA Astrophysics Data System (ADS)
Lee, Kwon Ho
2016-04-01
The paper presents an algorithm currently under development for aerosol detection and retrieval over the ocean for the next-generation geostationary satellite HIMAWARI-8. Enhanced geostationary observations now enable retrieval of dust, smoke, and ash aerosols, beginning a new era of geostationary aerosol observation. Sixteen channels of the Advanced HIMAWARI Imager (AHI) onboard HIMAWARI-8 offer capabilities for aerosol remote sensing similar to those currently provided by the Moderate Resolution Imaging Spectroradiometer (MODIS). Aerosols are detected from visible and infrared channel radiances, and retrieved by inversion-optimization of satellite-observed radiances against those calculated from a radiative transfer model. The retrievals are performed operationally every ten minutes for pixel sizes of ~8 km. The algorithm uses a multichannel approach to estimate the effective radius and aerosol optical depth (AOD) simultaneously. The instantaneous retrieved AOD is evaluated against the MODIS Level 2 operational aerosol products (C006), and the daily retrieved AOD is compared with ground-based measurements from the AERONET databases. The results show that the aerosol detection and the estimated AOD are in good agreement with the MODIS data and ground measurements, with a correlation coefficient of ~0.90 and a bias of 4%. These results suggest that the proposed method applied to HIMAWARI-8 satellite data can accurately estimate continuous AOD. Acknowledgments: This work was supported by the "Development of Geostationary Meteorological Satellite Ground Segment (NMSC-2014-01)" program funded by the National Meteorological Satellite Centre (NMSC) of the Korea Meteorological Administration (KMA).
Adult current smoking: differences in definitions and prevalence estimates--NHIS and NSDUH, 2008.
Ryan, Heather; Trosclair, Angela; Gfroerer, Joe
2012-01-01
To compare prevalence estimates and assess issues related to the measurement of adult cigarette smoking in the National Health Interview Survey (NHIS) and the National Survey on Drug Use and Health (NSDUH). 2008 data on current cigarette smoking and current daily cigarette smoking among adults ≥18 years were compared. The standard NHIS current smoking definition, which screens for lifetime smoking ≥100 cigarettes, was used. For NSDUH, both the standard current smoking definition, which does not screen, and a modified definition applying the NHIS current smoking definition (i.e., with screen) were used. NSDUH consistently yielded higher current cigarette smoking estimates than NHIS and lower daily smoking estimates. However, with use of the modified NSDUH current smoking definition, a notable number of subpopulation estimates became comparable between surveys. Younger adults and racial/ethnic minorities were most impacted by the lifetime smoking screen, with Hispanics being the most sensitive to differences in smoking variable definitions among all subgroups. Differences in current cigarette smoking definitions appear to have a greater impact on smoking estimates in some sub-populations than others. Survey mode differences may also limit intersurvey comparisons and trend analyses. Investigators are cautioned to use data most appropriate for their specific research questions.
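The definitional difference driving the intersurvey gap reduces to whether a lifetime screen (smoked at least 100 cigarettes) is applied before counting current smokers. A toy illustration with invented survey records (not NHIS/NSDUH data):

```python
# Each record: (smoked_100_lifetime, smokes_now) -- invented survey data.
records = [
    (True, True), (False, True), (True, False),
    (False, False), (True, True), (False, True),
]

def prevalence_nsduh(rows):
    # NSDUH standard definition: currently smokes, no lifetime screen.
    return sum(1 for _, now in rows if now) / len(rows)

def prevalence_nhis(rows):
    # NHIS definition: currently smokes AND >=100 cigarettes lifetime.
    return sum(1 for hundred, now in rows if hundred and now) / len(rows)

# The unscreened definition can only count more people, so its
# prevalence estimate is always at least as high.
assert prevalence_nsduh(records) >= prevalence_nhis(records)
```

Respondents who smoke now but have not yet reached 100 lifetime cigarettes (disproportionately younger adults, per the abstract) are exactly the group the screen removes.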
Alpha effect of Alfvén waves and current drive in reversed-field pinches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litwin, C.; Prager, S.C.
Circularly polarized Alfvén waves give rise to an α-dynamo effect that can be exploited to drive parallel current. In a "laminar" magnetic field the effect is weak and does not give rise to significant currents for realistic parameters (e.g., in tokamaks). However, in reversed-field pinches (RFPs), in which the magnetic field in the plasma core is stochastic, a significant enhancement of the α effect occurs. Estimates of this effect show that it may be a realistic method of current generation in present-day RFP experiments and possibly also in future RFP-based fusion reactors. © 1998 American Institute of Physics.
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
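The rejection-sampling building block used inside the stochastic EM algorithm can be shown generically: draw from an awkward target density by proposing from a simple one and accepting with probability target/(M * proposal). This is an illustrative sketch (a Weibull-like sojourn-time density with invented parameters), not the authors' Nun Study code.

```python
import math
import random

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, m, rng, n):
    """Draw n samples from target_pdf, given target_pdf <= m * proposal_pdf
    everywhere on the proposal's support."""
    out = []
    while len(out) < n:
        x = proposal_draw(rng)
        if rng.random() < target_pdf(x) / (m * proposal_pdf(x)):
            out.append(x)
    return out

# Illustration: sojourn times from a Weibull(k=1.5, scale 1) density
# restricted to (0, 5], proposing from the uniform density on (0, 5].
target = lambda t: 1.5 * math.sqrt(t) * math.exp(-t ** 1.5)
rng = random.Random(0)
draws = rejection_sample(target, lambda r: r.uniform(0.0, 5.0),
                         lambda t: 0.2, m=5.0, rng=rng, n=2000)
```

In the semi-Markov setting, samples like these stand in for the unobserved state-transition times between panel observations, which is what makes the otherwise intractable likelihood approximable.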
Kim, Do-Won; Lee, Seung-Hwan; Shim, Miseon; Im, Chang-Hwan
2017-01-01
Precise diagnosis of psychiatric diseases and a comprehensive assessment of a patient's symptom severity are important in order to establish a successful treatment strategy for each patient. Although great efforts have been devoted to searching for diagnostic biomarkers of schizophrenia over the past several decades, no study has yet investigated how accurately these biomarkers are able to estimate an individual patient's symptom severity. In this study, we applied electrophysiological biomarkers obtained from electroencephalography (EEG) analyses to an estimation of symptom severity scores of patients with schizophrenia. EEG signals were recorded from 23 patients while they performed a facial affect discrimination task. Based on the source current density analysis results, we extracted voxels that showed a strong correlation between source activity and symptom scores. We then built a prediction model to estimate the symptom severity scores of each patient using the source activations of the selected voxels. The symptom scores of the Positive and Negative Syndrome Scale (PANSS) were estimated using the linear prediction model. The results of leave-one-out cross validation (LOOCV) showed that the mean errors of the estimated symptom scores were 3.34 ± 2.40 and 3.90 ± 3.01 for the Positive and Negative PANSS scores, respectively. The current pilot study is the first attempt to estimate symptom severity scores in schizophrenia using quantitative EEG features. It is expected that the present method can be extended to other cognitive paradigms or other psychological illnesses.
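The leave-one-out validation of a linear prediction model can be sketched with one feature and invented data (the study itself regressed PANSS scores on multi-voxel source activations):

```python
# Minimal leave-one-out cross-validation (LOOCV) of a linear model:
# fit on all subjects but one, predict the held-out subject, repeat.
def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return slope, ybar - slope * xbar

def loocv_errors(xs, ys):
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        slope, intercept = fit_line(tx, ty)
        errs.append(abs((intercept + slope * xs[i]) - ys[i]))
    return errs

# Invented "source activation" feature vs. symptom-score pairs.
x = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3]
y = [8.0, 9.1, 10.2, 11.9, 13.0, 14.5]
errs = loocv_errors(x, y)
mean_err = sum(errs) / len(errs)   # analogous to the paper's mean errors
```

LOOCV is the natural choice for a 23-patient sample, since it uses every subject for validation while training on nearly the full cohort each time.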
Postmortem time estimation using body temperature and a finite-element computer model.
den Hartog, Emiel A; Lotens, Wouter A
2004-09-01
In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
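The core time-series result the model builds on is that the variance of a time average of a correlated signal scales with the ratio of integral time scale to sampling duration. A sketch with invented values (not the paper's field data):

```python
import math

def variance_of_mean(var_signal, integral_time, sample_time):
    """Variance of the time average of a stationary correlated signal,
    valid when sample_time >> integral_time:
        var(mean) ~= (2 * T_int / T) * var(signal)
    """
    return 2.0 * integral_time / sample_time * var_signal

# Invented numbers: velocity variance 0.01 (m/s)^2, integral scale 2 s.
ses = [math.sqrt(variance_of_mean(0.01, 2.0, t))
       for t in (10.0, 60.0, 300.0)]
# Longer exposure time -> smaller random error in the discharge estimate.
```

This is why the paper can recommend exposure times: once the sampled field's integral time scale is known (on the order of the sampling interval, per the abstract), the sampling duration needed for a target discharge uncertainty follows directly.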
Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi
2018-01-01
To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
NASA Astrophysics Data System (ADS)
Purohit, Pallav; Hoglund-Isaksson, Lena
2016-04-01
Anthropogenic emissions of fluorinated greenhouse gases (F-gases) have increased significantly in recent years and are estimated to rise further in response to increased demand for cooling services and the phase-out of ozone-depleting substances (ODS) under the Montreal Protocol. F-gases (HFCs, PFCs and SF6) are potent greenhouse gases, with a global warming effect up to 22,800 times greater than that of carbon dioxide (CO2). This study presents estimates of current and future global emissions of F-gases, their technical mitigation potential, and associated costs for the period 2005 to 2050. The analysis uses the GAINS model framework to estimate emissions, mitigation potentials, and costs for all major sources of anthropogenic F-gases for 162 countries/regions, which are aggregated to produce global estimates. For each region, 18 emission source sectors with mitigation potentials and costs were identified. Global F-gas emissions are estimated at 0.7 Gt CO2eq in 2005, with an expected increase to about 3.6 Gt CO2eq in 2050. There are extensive opportunities to reduce emissions by over 95 percent, primarily through replacement with existing low-GWP substances. Initial results indicate that at least half of the mitigation potential is attainable at a cost of less than 20€ per t CO2eq, while almost 90 percent reduction is attainable at less than 100€ per t CO2eq. Currently, several policy proposals have been presented to amend the Montreal Protocol to substantially curb global HFC use. We analyze the technical potentials and costs associated with the HFC mitigation required under the different proposed Montreal Protocol amendments.
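The cost statements above amount to reading a marginal-abatement-cost curve: rank mitigation options by unit cost and sum the potential available below a cost cap. A hypothetical sketch (all option costs and potentials invented, not GAINS outputs):

```python
# Each option: (cost in EUR per t CO2eq, mitigation potential in Mt CO2eq).
options = [
    (5, 800), (12, 600), (18, 400), (45, 500), (90, 700), (150, 400),
]

def share_below(options, cost_cap):
    """Fraction of total mitigation potential attainable at or below
    a given unit cost -- one point on the abatement cost curve."""
    total = sum(p for _, p in options)
    cheap = sum(p for c, p in options if c <= cost_cap)
    return cheap / total

share_20 = share_below(options, 20)    # attainable under 20 EUR/t CO2eq
share_100 = share_below(options, 100)  # attainable under 100 EUR/t CO2eq
```

Raising the cost cap can only admit more options, so the curve is monotone, which is why the study can report nested figures (half the potential under 20€/t, almost 90 percent under 100€/t).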
NASA Astrophysics Data System (ADS)
Golinkoff, Jordan Seth
The accurate estimation of forest attributes at many different spatial scales is a critical problem. Forest landowners may be interested in estimating timber volume, forest biomass, and forest structure to determine their forest's condition and value. Counties and states may be interested to learn about their forests to develop sustainable management plans and policies related to forests, wildlife, and climate change. Countries and consortiums of countries need information about their forests to set global and national targets to deal with issues of climate change and deforestation as well as to set national targets and understand the state of their forest at a given point in time. This dissertation approaches these questions from two perspectives. The first perspective uses the process model Biome-BGC paired with inventory and remote sensing data to make inferences about a current forest state given known climate and site variables. Using a model of this type, future climate data can be used to make predictions about future forest states as well. An example of this work applied to a forest in northern California is presented. The second perspective of estimating forest attributes uses high resolution aerial imagery paired with light detection and ranging (LiDAR) remote sensing data to develop statistical estimates of forest structure. Two approaches within this perspective are presented: a pixel based approach and an object based approach. Both approaches can serve as the platform on which models (either empirical growth and yield models or process models) can be run to generate inferences about future forest state and current forest biogeochemical cycling.
B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)
2015-12-01
Confidence Level: This APB reflects cost and funding data based on the B-2 EHF Increment 1 SCP; the cost estimate was quantified at the mean (~55%) confidence level. SAR Baseline Production Estimate to Current APB Production Estimate (variance by category, units as reported): Baseline 33.624; Econ -0.350, Qty +1.381, Sch +0.375, Eng 0.000, Est -6.075, Oth 0.000, Spt -0.620; Total change -5.289; Current 28.335.
Studies on space charge neutralization and emittance measurement of beam from microwave ion source.
Misra, Anuraag; Goswami, A; Sing Babu, P; Srivastava, S; Pandit, V S
2015-11-01
A 2.45 GHz microwave ion source together with a beam transport system has been developed at VECC to study the problems related to the injection of a high-current beam into a compact cyclotron. This paper presents the results of beam profile measurements of a high-current proton beam at different degrees of space charge neutralisation, achieved by introducing neon gas into the beam line through a fine leak valve. The beam profiles have been measured at different pressures in the beam line by capturing the residual gas fluorescence with a CCD camera. It has been found that with space charge compensation at the present current level (∼5 mA at 75 keV), it is possible to reduce the beam spot size by ∼34%. We have measured the variation of the beam profile as a function of the current in the solenoid magnet under the neutralised condition and used these data to estimate the rms emittance of the beam. Simulations performed using equivalent Kapchinsky-Vladimirsky beam envelope equations with a space charge neutralization factor are also presented to interpret the experimental results.
Studies on space charge neutralization and emittance measurement of beam from microwave ion source
NASA Astrophysics Data System (ADS)
Misra, Anuraag; Goswami, A.; Sing Babu, P.; Srivastava, S.; Pandit, V. S.
2015-11-01
A 2.45 GHz microwave ion source together with a beam transport system has been developed at VECC to study the problems related to the injection of a high-current beam into a compact cyclotron. This paper presents the results of beam profile measurements of a high-current proton beam at different degrees of space charge neutralisation, achieved by introducing neon gas into the beam line through a fine leak valve. The beam profiles have been measured at different pressures in the beam line by capturing the residual gas fluorescence with a CCD camera. It has been found that with space charge compensation at the present current level (˜5 mA at 75 keV), it is possible to reduce the beam spot size by ˜34%. We have measured the variation of the beam profile as a function of the current in the solenoid magnet under the neutralised condition and used these data to estimate the rms emittance of the beam. Simulations performed using equivalent Kapchinsky-Vladimirsky beam envelope equations with a space charge neutralization factor are also presented to interpret the experimental results.
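The rms-emittance estimate from beam-size measurements at varying solenoid current can be illustrated with a standard "solenoid scan" least-squares fit. The sketch below is a minimal version under simplifying assumptions not stated in the abstract (a thin-lens solenoid of strength k followed by a drift L, linear transport, no space charge); the function name and parameter values are hypothetical.

```python
import numpy as np

def rms_emittance_from_scan(ks, sigma_x2, L):
    """Least-squares fit of the source beam matrix (s11, s12, s22) from
    measured beam sizes sigma_x^2 at a screen a drift L downstream of a
    thin-lens element of focusing strength k (illustrative setup)."""
    A = []
    for k in ks:
        m11, m12 = 1.0 - k * L, L              # thin lens, then drift
        A.append([m11**2, 2.0 * m11 * m12, m12**2])
    s11, s12, s22 = np.linalg.lstsq(np.array(A), sigma_x2, rcond=None)[0]
    return np.sqrt(s11 * s22 - s12**2)          # rms emittance

# Synthetic check: forward-propagate a known beam matrix, then recover it.
s11, s12, s22 = 4.0e-6, -1.0e-6, 1.0e-6         # assumed source beam matrix
L = 0.5                                          # drift length, m
ks = np.linspace(0.5, 4.0, 12)                   # scan of focusing strengths
meas = np.array([(1 - k*L)**2 * s11 + 2*(1 - k*L)*L * s12 + L**2 * s22
                 for k in ks])
eps = rms_emittance_from_scan(ks, meas, L)
```

With noise-free synthetic sizes the fit recovers the true emittance sqrt(s11*s22 - s12^2) essentially exactly; with real profile data the same three-parameter fit is simply overdetermined and averaged by least squares.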
Mathews, Melissa; Abner, Erin; Caban-Holt, Allison; Dennis, Brandon C; Kryscio, Richard; Schmitt, Frederick
2013-09-01
Memory evaluation is a key component in the accurate diagnosis of cognitive disorders. One memory procedure that has shown promise in discriminating disease-related cognitive decline from normal cognitive aging is the New York University (NYU) Paragraph Recall Test; however, the effects of education, as they pertain to one's literacy level, have not been examined. The current study provides normative data stratified by estimated quality of education as indexed by irregular word reading skill. Conventional norms were derived from a sample (N = 385) of cognitively intact elderly men initially recruited for participation in the PREADViSE clinical trial. A series of multiple linear regression models was constructed to assess the influence of demographic variables on mean NYU Paragraph Immediate and Delayed Recall scores. Test version, assessment site, and estimated quality of education were significant predictors of performance on the NYU Paragraph Recall Test. Findings indicate that estimated quality of education is a better predictor of memory performance than ethnicity and years of total education. Normative data stratified according to estimated quality of education are presented. The current study provides evidence and support for normative data stratified by quality of education as opposed to years of education.
Radial particle distributions in PARMILA simulation beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boicourt, G.P.
1984-03-01
The estimation of beam spill in particle accelerators is becoming more important as higher-current designs are funded. To date, no numerical method for predicting beam spill has been available. In this paper, we present an approach to the loss-estimation problem that uses probability distributions fitted to particle-simulation beams. The properties of the PARMILA code's radial particle distribution are discussed, and a broad class of probability distributions is examined to check their ability to fit it. The possibility that the PARMILA distribution is a mixture is discussed, and a fitting distribution consisting of a mixture of two generalized gamma distributions is found. An efficient algorithm to accomplish the fit is presented. Examples of the relative prediction of beam spill are given. 26 references, 18 figures, 1 table.
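Fitting a two-component generalized gamma mixture of the kind the abstract describes can be sketched with maximum likelihood. The snippet below is illustrative only: it fixes the scale parameters to 1 for brevity and uses a generic Nelder-Mead search rather than the paper's (unspecified) efficient algorithm; the synthetic "beam" data and starting values are assumptions.

```python
import numpy as np
from scipy import stats, optimize

def mixture_nll(params, r):
    """Negative log-likelihood of a two-component generalized-gamma
    mixture for radial amplitudes r (scales fixed to 1 for brevity)."""
    w, a1, c1, a2, c2 = params
    if not (0.0 < w < 1.0) or min(a1, c1, a2, c2) <= 0.0:
        return np.inf                       # keep the search in the valid region
    pdf = (w * stats.gengamma.pdf(r, a1, c1)
           + (1.0 - w) * stats.gengamma.pdf(r, a2, c2))
    return -np.sum(np.log(pdf + 1e-300))

rng = np.random.default_rng(0)
# Synthetic "simulation beam": a core plus a heavier large-amplitude component.
r = np.concatenate([stats.gengamma.rvs(2.0, 2.0, size=800, random_state=rng),
                    stats.gengamma.rvs(4.0, 1.5, size=200, random_state=rng)])

x0 = np.array([0.5, 1.5, 1.5, 3.0, 1.0])    # assumed starting guess
res = optimize.minimize(mixture_nll, x0, args=(r,), method="Nelder-Mead",
                        options={"maxiter": 2000})
```

The fitted tail of such a mixture is what would drive a relative beam-spill prediction (the fraction of the distribution beyond an aperture radius).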
NASA Technical Reports Server (NTRS)
Eckert, W. T.; Mort, K. W.; Jope, J.
1976-01-01
General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2014-01-01
Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENA) in space (approximately a few MeV per nucleon), especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we briefly present this approach along with sample applications and report on current progress.
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability among these models, the multi-model mean tends to represent the Earth system and climate reasonably well. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations because of the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and is shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs in order to reproduce the observed temperature record. This forcing time series is in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, the method is applied to future temperature projections to estimate the radiative forcing required to achieve temperature goals such as those set in the Paris Agreement.
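The core of a temporal kernel method can be sketched as a discrete deconvolution: if the temperature anomaly is the convolution of a response kernel with the forcing history, the forcing is recovered by inverting the (lower-triangular) convolution matrix. The kernel shape and numbers below are assumptions for illustration, not the published kernel.

```python
import numpy as np

def forcing_from_temperature(T, kernel):
    """Recover a forcing time series from a temperature anomaly series by
    deconvolving a temperature-response kernel (discrete TKM sketch)."""
    n = len(T)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            K[i, j] = kernel[i - j]          # lower-triangular convolution matrix
    return np.linalg.solve(K, T)

years = np.arange(100)
kernel = 0.4 * np.exp(-years / 10.0)         # assumed decaying response function
F_true = 0.03 * years                        # assumed linear ramp forcing
# Forward model: T = convolution of kernel with forcing.
T = np.array([sum(kernel[i - j] * F_true[j] for j in range(i + 1))
              for i in range(100)])
F_est = forcing_from_temperature(T, kernel)
```

In practice the observed temperature record is noisy, so the inversion would need regularization or smoothing; the noise-free sketch recovers the ramp exactly.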
NASA Technical Reports Server (NTRS)
Strub, P. Ted; James, Corinne; Thomas, Andrew C.; Abbott, Mark R.
1990-01-01
The large-scale patterns of satellite-derived surface pigment concentration off the west coast of North America, together with monthly mean surface wind fields over the California Current system (CCS), are presented for the July 1979 to June 1986 period. The patterns are discussed in terms of both seasonal and nonseasonal variability over this time period. The large-scale seasonal characteristics of the California Current are summarized. The data and methods used are described, and the problems known to affect the satellite-derived pigment concentrations and the wind data used in the study are discussed. The statistical analysis results are then presented and discussed in light of past observations and theory. Details of the CZCS data processing are described, and details of the principal estimator pattern methodology used here are given.
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.
Gasbarra, Dario; Arjas, Elja; Vehtari, Aki; Slama, Rémy; Keiding, Niels
2015-10-01
This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample is collected from women who are currently attempting to become pregnant, recording from each the time she has so far been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty stems from the fact that very short waiting times are only rarely selected into the sample of current durations, which renders their estimation unstable. We introduce a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators that have been used previously in this context. The properties of the Bayesian estimation method are also studied empirically, using both simulated data and TTP data on current durations collected by Slama et al. (Hum Reprod 27(5):1489-1498, 2012).
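The identifiability behind the current duration design rests on a standard renewal-theory identity: under stationarity the observed current durations have density g(t) = S(t)/mu, where S is the survival function of the waiting time and mu its mean, so S(t) = g(t)/g(0). The simulation below is a minimal sketch of that identity (exponential TTP and a crude histogram estimator are my assumptions); note how the estimator hinges on g near zero, exactly where the abstract says the data are sparse and unstable.

```python
import numpy as np

rng = np.random.default_rng(42)

mu = 6.0          # assumed true mean time-to-pregnancy, in cycles
n = 200_000

# Under a stationary model a cross-sectional "current duration" is a
# backward recurrence time: draw the full waiting time from its
# size-biased version, then observe a uniform fraction of it.
# For an exponential waiting time the size-biased law is Gamma(2, mu).
X_size_biased = rng.gamma(shape=2.0, scale=mu, size=n)
current = rng.uniform(size=n) * X_size_biased

# g(t) = S(t)/mu, hence S(t) = g(t)/g(0): estimate g by a histogram and
# normalize by its first bin.
hist, edges = np.histogram(current, bins=60, range=(0.0, 30.0), density=True)
S_hat = hist / hist[0]
S_true = np.exp(-(edges[:-1] + 0.25) / mu)   # survival at bin midpoints
```

The histogram ratio tracks the true survival curve closely here only because the sample is huge; with realistic sample sizes the g(0) denominator is noisy, which is the instability the Bayesian method is designed to tame.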
A phenomenological study of photon production in low energy neutrino nucleon scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, James P; Goldman, Terry J
2009-01-01
Low-energy photon production is an important background to many current and future precision neutrino experiments. We present a phenomenological study of t-channel radiative corrections to neutral-current neutrino-nucleus scattering. After introducing the relevant processes and phenomenological coupling constants, we explore the derived energy and angular distributions as well as total cross-section predictions, along with their estimated uncertainties. This is supplemented throughout with comments on possible experimental signatures and implications. We conclude with a general discussion of the analysis in the context of complementary methodologies. This is based on a talk presented at the DPF 2009 meeting in Detroit, MI.
Sliding mode observers for automotive alternator
NASA Astrophysics Data System (ADS)
Chen, De-Shiou
Estimator development for synchronous rectification of the automotive alternator is a desirable approach for estimating the alternator's back electromotive forces (EMFs) without a direct mechanical sensor of the rotor position. Recent theoretical studies show that the back EMF may be estimated from the system's phase-current model by sensing electrical variables (AC phase currents and DC bus voltage) of the synchronous rectifier. Observer designs for back EMF estimation have been developed for constant engine speed. In this work, we are interested in nonlinear observer design for back EMF estimation in the realistic case of variable engine speed. An initial back EMF estimate can be obtained from a first-order sliding mode observer (SMO) based on the phase-current model. A fourth-order nonlinear asymptotic observer (NAO), complemented by the dynamics of the back EMF with time-varying frequency and amplitude, is then incorporated into the observer design for chattering reduction. Since the cost of the required phase-current sensors may be prohibitive, the most practical approach for real implementation, measuring the DC current of the synchronous rectifier, is pursued in the dissertation. It is shown that the DC link current consists of sequential "windows" carrying partial information about the phase currents; hence, the cascaded NAO is responsible not only for chattering reduction but also for completing the estimation process. Stability analyses of the proposed estimators are given for most linear and time-varying cases. The stability of the NAO without speed information is substantiated by both numerical and experimental results. Prospective estimation algorithms for the case of battery current measurements are also investigated. Theoretical study indicates that convergence of the proposed LAO may be ensured by high-gain inputs.
Since the order of the LAO/NAO for the battery current case is one higher than that for the link current measurements, it is hard to find moderate values of the input gains for the real-time sampled-data systems. Technical difficulties in implementing such high-order discrete-time nonlinear estimators are discussed, and directions for further investigation are provided.
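The first-order sliding mode observer idea can be illustrated on a single phase-current equation L di/dt = -R i + e - v: an observer copy driven by a switching term K sign(i - i_hat) reaches the sliding surface, after which the low-pass-filtered switching signal equals the equivalent control, i.e. the unknown back EMF. All parameter values below are illustrative, not the alternator parameters from the dissertation.

```python
import numpy as np

# Illustrative phase-current model parameters (assumed values).
R, L, e, v = 0.1, 1e-3, 5.0, 0.0     # resistance, inductance, back EMF, terminal voltage
K, dt, n = 10.0, 1e-6, 20_000        # switching gain (K > |e|), step, number of steps

i = i_hat = 0.0
u_hist = np.empty(n)
for step in range(n):
    i += dt * (-R * i + e - v) / L           # "plant": true phase current
    u = K * np.sign(i - i_hat)               # switching injection
    i_hat += dt * (-R * i_hat - v + u) / L   # observer copy driven by u
    u_hist[step] = u

# On the sliding surface the averaged switching signal is the equivalent
# control, which here equals the back EMF e (up to a tiny -R*s term).
e_hat = u_hist[-5000:].mean()
```

Averaging (here a plain mean over the tail of the run) stands in for the low-pass filter; this is the chattering the NAO stage of the dissertation is designed to reduce.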
Current Term Enrollment Estimates: Spring 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center, include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state, and…
Current Term Enrollment Estimates: Fall 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center (NSCRC), include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state,…
NASA Astrophysics Data System (ADS)
Jiang, Y.; Wu, X.; van den Broeke, M. R.; Munneke, P. K.; Simonsen, S. B.; van der Wal, W.; Vermeersen, B. L.
2013-12-01
The ice sheets in the polar regions store the largest freshwater bodies on Earth, sufficient to raise global sea level by more than 65 meters if melted. The Earth may have entered an intensive ice-melting episode, possibly due to anthropogenic global warming rather than natural orbital variations. Determining the present-day ice mass balance, however, is complicated by the fact that most observations contain both the present-day ice-melting signal and residual signals from past glacier melting. Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separating present-day ice mass change from signals of past melting is to include data with different physical characteristics. We conducted a new global kinematic inversion scheme to estimate present-day ice melting and past glacier signatures simultaneously and to assess their contributions to current and future global mean sea level change. Our approach is designed to invert and separate the present-day melting signal in the spherical harmonic domain using globally distributed interdisciplinary data with distinct physical information. Promising, high-precision results have been achieved so far. We will present our estimates of the present-day ice mass balance trend for the Greenland and Antarctic ice sheets as well as other regions where significant mass change occurs.
Residential demand for energy. Volume 1: Residential energy demand in the US
NASA Astrophysics Data System (ADS)
Taylor, L. D.; Blattenberger, G. R.; Rennhack, R. K.
1982-04-01
Updated and improved versions of the residential energy demand models that are currently used in EPRI's Demand 80/81 Model are presented. The primary objective of the study is the development and estimation of econometric demand models that take into account, in a theoretically appropriate way, the problems caused by decreasing-block pricing in the sale of electricity and natural gas. An ancillary objective is to take into account the impact on electricity, natural gas, and fuel oil demands of differences and changes in the availability of natural gas. Econometric models of residential demand are estimated for all three fuel types using time-series data by state. Price and income elasticities for a number of alternative models are presented.
Assessing the vertical structure of baroclinic tidal currents in a global model
NASA Astrophysics Data System (ADS)
Timko, Patrick; Arbic, Brian; Scott, Robert
2010-05-01
Tidal forcing plays an important role in many aspects of oceanography. Mixing, transport of particulates and internal wave generation are just three examples of local phenomena that may depend on the strength of local tidal currents. Advances in satellite altimetry have made an assessment of the global barotropic tide possible. However, the vertical structure of the tide may only be observed by deployment of instruments throughout the water column. Typically these observations are conducted at pre-determined depths based upon the interest of the observer. The high cost of such observations often limits both the number and the length of the observations resulting in a limit to our knowledge of the vertical structure of tidal currents. One way to expand our insight into the baroclinic structure of the ocean is through the use of numerical models. We compare the vertical structure of the global baroclinic tidal velocities in 1/12 degree HYCOM (HYbrid Coordinate Ocean Model) to a global database of current meter records. The model output is a subset of a 5 year global simulation that resolves the eddying general circulation, barotropic tides and baroclinic tides using 32 vertical layers. The density structure within the simulation is both vertically and horizontally non-uniform. In addition to buoyancy forcing the model is forced by astronomical tides and winds. We estimate the dominant semi-diurnal (M2), and diurnal (K1) tidal constituents of the model data using classical harmonic analysis. In regions where current meter record coverage is adequate, the model skill in replicating the vertical structure of the dominant diurnal and semi-diurnal tidal currents is assessed based upon the strength, orientation and phase of the tidal ellipses. We also present a global estimate of the baroclinic tidal energy at fixed depths estimated from the model output.
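The classical harmonic analysis step the abstract mentions amounts to a least-squares fit of cosine/sine pairs at the known M2 and K1 frequencies to a velocity time series. The sketch below is a minimal single-component version (synthetic series, assumed amplitudes); a full analysis would fit both velocity components and derive tidal ellipse parameters from the paired amplitudes.

```python
import numpy as np

# M2 and K1 angular frequencies (rad/hour) from their well-known periods.
omega = {"M2": 2 * np.pi / 12.4206012, "K1": 2 * np.pi / 23.9344697}

def harmonic_fit(t, u, constituents):
    """Classical harmonic analysis: least-squares fit of cosine/sine pairs
    at fixed tidal frequencies to a current-velocity series; returns the
    amplitude of each constituent."""
    cols = [np.ones_like(t)]
    for name in constituents:
        cols += [np.cos(omega[name] * t), np.sin(omega[name] * t)]
    coef = np.linalg.lstsq(np.column_stack(cols), u, rcond=None)[0]
    return {name: float(np.hypot(coef[1 + 2*i], coef[2 + 2*i]))
            for i, name in enumerate(constituents)}

# Synthetic 30-day hourly record with assumed M2 and K1 amplitudes.
t = np.arange(0.0, 24 * 30, 1.0)
u = 0.30 * np.cos(omega["M2"] * t - 0.7) + 0.12 * np.sin(omega["K1"] * t)
amps = harmonic_fit(t, u, ["M2", "K1"])
```

A 30-day record comfortably separates the M2 and K1 frequencies (Rayleigh criterion), so the fit recovers both assumed amplitudes to within a percent.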
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
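For a non-homogeneous Poisson process with a Weibull-type rate, the count probabilities described above follow directly from the mean measure Lambda(t) = (t/beta)^alpha. The parameter values in the sketch are illustrative, not the Messina fits.

```python
import math

def exceedance_probability(k, t, alpha, beta):
    """P(N(t) = k) for a non-homogeneous Poisson process with a
    Weibull-type rate lambda(s) = (alpha/beta) * (s/beta)**(alpha - 1),
    so that the mean measure is Lambda(t) = (t/beta)**alpha."""
    lam = (t / beta) ** alpha
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability of at least 3 threshold exceedances within 24 hours,
# with illustrative parameters alpha = 1.3, beta = 8.0.
p_at_least_3 = 1.0 - sum(exceedance_probability(k, 24.0, 1.3, 8.0)
                         for k in range(3))
```

This is exactly the quantity the model makes available for noise-exposure assessment: the chance that a population experiences a given number of loud events in a chosen time window.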
Understanding global tropospheric ozone and its impacts on human health
NASA Astrophysics Data System (ADS)
West, J. J.
2017-12-01
Ozone is an important air pollutant for human health, one that has proven difficult to manage locally, nationally, and globally. Here I will present research on global ozone and its impacts on human health, highlighting several studies from my lab over the past decade. I will discuss the drivers of global tropospheric ozone, and the importance of the equatorward shift of emissions over recent decades. I will review estimates of the global burden of ozone on premature mortality, the contributions of different emission sectors to that burden, estimates of how the ozone health burden will change in the future under the Representative Concentration Pathway scenarios, and estimates of the contribution of projected climate change to ozone-related deaths. I will also discuss the importance of the intercontinental transport of ozone, and of methane as a driver of global ozone, from the human health perspective. I will present estimates of trends in the ozone mortality burden in the United States since 1990. Finally, I will discuss our project currently underway to estimate global ozone concentrations at the surface based on data gathered by the Tropospheric Ozone Assessment Report, combined statistically with atmospheric modeling results.
Forest resources of the United States, 1992
Douglas S. Powell; Joanne L. Faulkner; David R. Darr; Zhiliang Zhu; Douglas W. MacCleery
1993-01-01
The 1987 Resources Planning Act (RPA) Assessment forest resources statistics are updated to 1992, to provide current information on the Nation's forests. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output. Resource data are analyzed, and trends since 1987 are noted. A forest type map produced from...
Marital Status and Living Arrangements: March 1985.
ERIC Educational Resources Information Center
Saluter, Arlene F.
1986-01-01
This report presents detailed information on the marital status and living arrangements of the noninstitutional population of the United States by age, sex, race, and Spanish origin. The text of this report compares the mid-decade census estimates based on the March, 1985 "Current Population Survey" with the survey data from 1980, 1970, and 1960.…
Assessment of Vitamin D in multivitamin/mineral dietary supplements
USDA-ARS?s Scientific Manuscript database
Vitamin D is a nutrient of public health concern and is naturally present in some foods, added to others, and available in dietary supplements. It is essential for bone growth and may have other roles in human health. To estimate current levels of intake, analytical data for vitamin D in foods and...
Trends in College Spending: 2001-2011. A Delta Data Update
ERIC Educational Resources Information Center
Desrochers, Donna M.; Hurlburt, Steven
2014-01-01
This "Trends in College Spending" update presents national-level estimates for the "Delta Cost Project" data metrics during the period 2001-11. To accelerate the release of more current trend data, however, this update includes only a brief summary of the financial patterns and trends observed during the decade 2001-11, with…
DNA-based approach to aging martens (Martes americana and M. caurina)
Jonathan N. Pauli; John P. Whiteman; Bruce G. Marcot; Terry M. McClean; Merav Ben-David
2011-01-01
Demographic structure is central to understanding the dynamics of animal populations. However, determining the age of free-ranging mammals is difficult, and currently impossible when sampling with noninvasive, genetic-based approaches. We present a method to estimate age class by combining measures of telomere lengths with other biologically meaningful covariates in a...
Lifetime Net Merit vs. annualized net present value as measures of profitability of selection
USDA-ARS?s Scientific Manuscript database
Current USDA linear selection indexes such as Lifetime Net Merit (NM$) estimate lifetime profit given a combination of 13 traits. In these indexes, every animal gets credit for 2.78 lactations of the traits expressed per lactation, independent of its productive life (PL). Selection among animals wit...
The Financial Value of a Higher Education
ERIC Educational Resources Information Center
Kantrowitz, Mark
2007-01-01
Five years have passed since the U.S. Census Bureau published synthetic estimates of work-life earnings by educational attainment. This paper updates those figures with the most recent data from the U.S. Census Bureau's annual Current Population Surveys, and adds net present value analysis of the financial benefit of a college degree to the…
Estimating p-n Diode Bulk Parameters, Bandgap Energy and Absolute Zero by a Simple Experiment
ERIC Educational Resources Information Center
Ocaya, R. O.; Dejene, F. B.
2007-01-01
This paper presents a straightforward but interesting experimental method for p-n diode characterization. The method differs substantially from many approaches in diode characterization by offering much tighter control over the temperature and current variables. The method allows the determination of important diode constants such as temperature…
Forest Resources of the United States, 1997, METRIC UNITS.
W. Brad Smith; John S. Vissage; Davie R. Darr; Raymond M. Sheffield
2002-01-01
Forest resource statistics from the 1987 Forest Resources Planning Act (RPA) Asessment were updated to 1997 to provide current information on the Nation's forests. Resource tables present estimates in metric measure of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or State.
Forest statistics of the United States, 1992 metric units.
W. Brad Smith; Joanne L. Faulkner; Douglas S. Powell
1994-01-01
The 1987 Resources Planning Act (RPA) Assessment was conducted to provide current information on the nation's forests. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or state. Statistics are provided in a metric format for international use.
Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P.
2014-01-01
Motivated by the need from our on-going environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) under the proportional hazards model framework. The asymptotic properties of the proposed estimator were derived. Simulation studies were conducted to evaluate the small-sample performance of our proposed method. Our analyses show that the proposed estimator and design is more efficient than the current default approach and other competing approaches. Applying the proposed approach with the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419
Magnetic field feature extraction and selection for indoor location estimation.
Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F
2014-06-20
User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure such as the Earth's magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features in the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
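A GA over feature subsets of the kind described can be sketched as a population of 46-bit masks evolved by selection, crossover, and bit-flip mutation. The fitness function below is a toy stand-in (the paper would score each subset with its localization model), and the "informative" feature indices are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_feat = 46                          # magnetic-field signal features, as in the paper
informative = {3, 11, 19, 27, 35}    # hypothetical "useful" features

def fitness(mask):
    """Toy stand-in for the classifier score: reward covering the
    informative features, lightly penalize subset size."""
    hits = sum(mask[i] for i in informative)
    return hits - 0.05 * mask.sum()

def ga_select(pop_size=40, gens=60, p_mut=0.02):
    pop = rng.random((pop_size, n_feat)) < 0.5          # random initial masks
    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            child ^= rng.random(n_feat) < p_mut              # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(m) for m in pop])
    return pop[scores.argmax()]

best = ga_select()
```

Under this fitness the GA converges to a small mask concentrated on the informative indices, mirroring the paper's 46-to-5 reduction in spirit if not in detail.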
Numerical modeling of hydrodynamics and sediment transport—an integrated approach
NASA Astrophysics Data System (ADS)
Gic-Grusza, Gabriela; Dudkowska, Aleksandra
2017-10-01
Point measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress, and the bedload transport magnitude, with due consideration of the realistic bathymetry and distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions defined based on 138-year NOAA data were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, whereas wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are innovative. The approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
Towards a systematic approach to comparing distributions used in flood frequency analysis
NASA Astrophysics Data System (ADS)
Bobée, B.; Cavadias, G.; Ashkar, F.; Bernier, J.; Rasmussen, P.
1993-02-01
The estimation of flood quantiles from available streamflow records has been a topic of extensive research in this century. However, the large number of distributions and estimation methods proposed in the scientific literature has led to a state of confusion, and a gap prevails between theory and practice. This concerns both at-site and regional flood frequency estimation. To facilitate the work of "hydrologists, designers of hydraulic structures, irrigation engineers and planners of water resources", the World Meteorological Organization recently published a report which surveys and compares current methodologies, and recommends a number of statistical distributions and estimation procedures. This report is an important step towards the clarification of this difficult topic, but we think that it does not effectively satisfy the needs of practitioners as intended, because it contains some statements which are not statistically justified and which require further discussion. In the present paper we review commonly used procedures for flood frequency estimation, point out some of the reasons for the present state of confusion concerning the advantages and disadvantages of the various methods, and propose the broad lines of a possible comparison strategy. We recommend that the results of such comparisons be discussed in an international forum of experts, with the purpose of attaining a more coherent and broadly accepted strategy for estimating floods.
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) for both the left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
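As an illustrative sketch only (synthetic data and invented coefficients, not the ReinHeart models), a linear regression correlating chamber filling with pump-intrinsic parameters of the kind described above can be set up as follows:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic pump-intrinsic parameters (assumed names and ranges, not the
# actual device signals): motor current [A] and piston position [mm].
motor_current = rng.uniform(1.0, 4.0, n)
piston_pos = rng.uniform(10.0, 30.0, n)

# Synthetic "measured" chamber filling [%] with noise; the linear
# ground-truth coefficients here are invented for illustration.
filling = 20.0 + 8.0 * motor_current + 1.5 * piston_pos \
          + rng.normal(0.0, 2.0, n)

# Fit the estimation model: filling ~ 1 + motor_current + piston_pos
X = np.column_stack([np.ones(n), motor_current, piston_pos])
coef, *_ = np.linalg.lstsq(X, filling, rcond=None)

# Mean relative prediction error (in-sample)
rel_err = float(np.mean(np.abs(X @ coef - filling) / filling))
```

The same least-squares setup extends directly to the afterload targets (AoPmean, AoPsys, PAPmean, PAPsys) by swapping the response vector.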
Investigation of Keeper Erosion in the NSTAR Ion Thruster
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Foster, John E.; Patterson, Michael J.; Williams, George J., Jr.
2001-01-01
The goal of the present investigation was to determine the cause of the difference in the observed discharge keeper erosion between the 8200 hr wear test of a NASA Solar Electric Propulsion Technology Applications Readiness (NSTAR) engineering model thruster and the ongoing extended life test (ELT) of the NSTAR flight spare thruster. During the ELT, the NSTAR flight spare ion thruster experienced unanticipated erosion of the discharge cathode keeper. Photographs of the discharge keeper show that the orifice has enlarged to slightly more than twice its original diameter. Several differences between the ELT and the 8200 hr wear test were initially identified to determine any effects that could lead to the erosion in the ELT. In order to identify the cause of the ELT erosion, emission spectra from an engineering model thruster were collected to assess the dependence of keeper erosion on operating conditions. Keeper ion current was measured to estimate wear. Additionally, a post-test inspection of a copper keeper-cap was conducted, and the results are presented. The analysis indicated that the bulk of the ion current was collected within 2 mm radially of the orifice. The estimated volumetric wear in the ELT was comparable to previous wear tests. Redistribution of the ion current on the discharge keeper was determined to be the most likely cause of the ELT erosion. The change in ion current distribution was hypothesized to be caused by the modified magnetic field of the flight assemblies.
Predicting Grizzly Bear Density in Western North America
Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.
2013-01-01
Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552
Wolf-Rayet content of the Milky Way
NASA Astrophysics Data System (ADS)
Crowther, P. A.
An overview of the known Wolf-Rayet (WR) population of the Milky Way is presented, including a brief summary of historical catalogues and recent advances based on infrared photometric and spectroscopic observations resulting in the current census of 642 (v1.13 online catalogue). The observed distribution of WR stars is considered with respect to known star clusters, given that ≤20% of WR stars in the disk are located in clusters. WN stars outnumber WC stars at all galactocentric radii, while early-type WC stars are strongly biased against the inner Milky Way. Finally, recent estimates of the global WR population in the Milky Way are reassessed, with 1,200±100 estimated, such that the current census may be 50% complete. A characteristic WR lifetime of 0.25 Myr is inferred for an initial mass threshold of 25 M⊙.
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
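The duration definition above (the time within which 80% of current users redeem again) reduces to a percentile lookup once the inter-arrival density has been estimated. A minimal sketch, assuming the IAD has already been fitted as Log-Normal with hypothetical parameters (median 90 days, log-scale standard deviation 0.5):

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical fitted parameters of the inter-arrival density (IAD):
# log-scale mean (median of 90 days) and log-scale standard deviation.
mu, sigma = np.log(90.0), 0.5

# SciPy's lognorm parameterization: shape s = sigma, scale = exp(mu)
iad = lognorm(s=sigma, scale=np.exp(mu))

# Prescription duration: the time within which 80% of current users
# will have redeemed their next prescription (80th percentile of IAD).
duration = float(iad.ppf(0.80))
```

The paper's actual algorithm additionally estimates these parameters by maximum likelihood from the WTD mixture and recovers the IAD by inverting the forward recurrence density; the step shown here is only the final percentile extraction.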
Updating estimates of low streamflow statistics to account for possible trends
NASA Astrophysics Data System (ADS)
Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.
2017-12-01
Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of the data subset selection approaches were greater for higher magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (~30 years) can update 7Q10 estimators to better reflect current streamflow conditions.
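The 7Q10 computation described above can be sketched in a few lines. This is a minimal illustration with synthetic data; the simple empirical quantile used here is an assumption, as the paper evaluates specific nonparametric estimators.

```python
import numpy as np

def annual_min_7day(daily_flows_by_year):
    """Annual minimum 7-day mean flow for each year of daily data."""
    mins = []
    for flows in daily_flows_by_year:
        f = np.asarray(flows, dtype=float)
        ma7 = np.convolve(f, np.ones(7) / 7.0, mode="valid")  # 7-day means
        mins.append(ma7.min())
    return np.array(mins)

def q7_10(annual_mins, subset_years=30):
    """Nonparametric 7Q10: the flow exceeded in 9 of 10 years on average,
    i.e. the 0.1 quantile, computed from only the most recent years."""
    return np.quantile(annual_mins[-subset_years:], 0.1)

# Synthetic example (assumed numbers): 50 years of daily flows with a
# gradual downward trend in mean flow.
rng = np.random.default_rng(0)
years = [rng.gamma(5.0, 20.0 - 0.2 * y, size=365) for y in range(50)]
mins = annual_min_7day(years)
q_recent = q7_10(mins)             # based on the most recent 30 years
q_full = np.quantile(mins, 0.1)    # whole-record estimate, for contrast
```

With a downward trend, the recent-subset estimate tracks current low-flow conditions rather than the wetter early record.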
Cost of chronic disease in California: estimates at the county level.
Brown, Paul M; Gonzalez, Mariaelena; Dhaul, Ritem Sandhu
2015-01-01
An estimated 39% of people in California suffer from at least one chronic condition or disease. While the increased coverage provided by the Affordable Care Act will result in greater access to primary health care, coordinated strategies are needed to prevent chronic conditions. To identify cost-effective strategies, local health departments and other agencies need accurate information on the costs of chronic conditions in their region. This article presents a methodology for estimating the cost of chronic conditions at the county level. Estimates of the attributable cost of six chronic conditions (arthritis, asthma, cancer, cardiovascular disease, diabetes, and depression) from the Centers for Disease Control and Prevention's Chronic Disease Cost Calculator were combined with prevalence rates from various sources and census data for California counties to estimate the number of cases and the costs of each condition. The estimates were adjusted for differences in prices using Medicare geographical adjusters. An estimated $98 billion is currently spent on treating chronic conditions in California. There is significant variation between counties in the percentage of total health care expenditure due to chronic conditions and in county size, ranging from a low of 32% to a high of 63%. The variations between counties result from differing rates of chronic conditions across age, ethnicity, and gender. Information on the cost of chronic conditions is important for planning prevention and control efforts. This study demonstrates a method for providing local health departments with estimates of the scope of the problem in their region. Combining the cost estimates with information on current prevention strategies can identify gaps in prevention activities and the prevention measures that promise the greatest return on investment for each county.
Southern African ancient genomes estimate modern human divergence to 350,000 to 260,000 years ago.
Schlebusch, Carina M; Malmström, Helena; Günther, Torsten; Sjödin, Per; Coutinho, Alexandra; Edlund, Hanna; Munters, Arielle R; Vicente, Mário; Steyn, Maryna; Soodyall, Himla; Lombard, Marlize; Jakobsson, Mattias
2017-11-03
Southern Africa is consistently placed as a potential region for the evolution of Homo sapiens. We present genome sequences, up to 13x coverage, from seven ancient individuals from KwaZulu-Natal, South Africa. The remains of three Stone Age hunter-gatherers (about 2000 years old) were genetically similar to current-day southern San groups, and those of four Iron Age farmers (300 to 500 years old) were genetically similar to present-day Bantu-language speakers. We estimate that all modern-day Khoe-San groups have been influenced by 9 to 30% genetic admixture from East Africans/Eurasians. Using traditional and new approaches, we estimate the first modern human population divergence time to between 350,000 and 260,000 years ago. This estimate increases the deepest divergence among modern humans, coinciding with anatomical developments of archaic humans into modern humans, as represented in the local fossil record. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Current Status of Chemical Public Health Risks and Testing ...
The cardiovascular system, at all its various developmental and life stages, represents a critical target organ system that can be adversely affected by a variety of chemicals and routes of exposure. A World Health Organization report estimated the impact of environmental chemical exposures on health to be 16% (range: 7-23%) of the total global burden of cardiovascular disease, corresponding to ~2.5 million deaths per year. Currently, the overall impact of environmental chemical exposures on all causes of cardiovascular disease, the number one cause of morbidity and mortality in the United States, is unknown. Evidence from epidemiological, clinical, and toxicological studies will be presented documenting adverse cardiovascular effects associated with environmental exposure to chemicals. The presentation will cover US EPA's ability to regulate and test chemicals as well as current challenges faced by the Agency in assessing chemical cardiovascular risk and public health safety. (This abstract does not necessarily reflect US EPA Policy) Will be presented at the Workshop titled
Gravitational wave searches using the DSN (Deep Space Network)
NASA Technical Reports Server (NTRS)
Nelson, S. J.; Armstrong, J. W.
1988-01-01
The Deep Space Network Doppler spacecraft link is currently the only method available for broadband gravitational wave searches in the 0.01 to 0.001 Hz frequency range. The DSN's role in the worldwide search for gravitational waves is described by first summarizing from the literature current theoretical estimates of gravitational wave strengths and time scales from various astrophysical sources. Current and future detection schemes for ground based and space based detectors are then discussed. Past, present, and future planned or proposed gravitational wave experiments using DSN Doppler tracking are described. Lastly, some major technical challenges to improve gravitational wave sensitivities using the DSN are discussed.
Eddy current heating in magnetic refrigerators
NASA Technical Reports Server (NTRS)
Kittel, Peter
1990-01-01
Eddy current heating can be a significant source of parasitic heating in low temperature magnetic refrigerators. To study this problem, a technique to approximate the heating due to eddy currents has been developed. A formula is presented for estimating the heating within a variety of shapes commonly found in magnetic refrigerators. These shapes include circular, square, and rectangular rods; cylindrical and split cylindrical shells; wire loops; and 'coil foil'. One set of components evaluated are different types of thermal radiation shields. This comparison shows that a simple split shield is almost as effective (only 23 percent more heating) as using a shield, with the same axial thermal conductivity, made of 'coil foil'.
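The abstract does not reproduce the paper's formulas. As an illustration only, the classical low-frequency textbook expression for one of the shapes mentioned (a long solid circular rod in a uniform axial sinusoidal field, valid when the skin depth is much larger than the rod radius) can be sketched as follows; the numerical values are placeholders, not the paper's cases.

```python
import math

def eddy_heating_rod(sigma, f, b0, a):
    """Time-averaged volumetric eddy current heating (W/m^3) of a long
    solid circular rod (radius a [m], conductivity sigma [S/m]) in a
    uniform axial field B(t) = b0*sin(2*pi*f*t). Classical low-frequency
    limit (skin depth >> a): P/V = sigma * omega^2 * b0^2 * a^2 / 16.
    """
    omega = 2.0 * math.pi * f
    return sigma * omega**2 * b0**2 * a**2 / 16.0

# Placeholder numbers (assumed, not from the paper): a 5 mm radius
# conductor in a 2 T field swept at 0.1 Hz.
p = eddy_heating_rod(sigma=6.0e7, f=0.1, b0=2.0, a=0.005)
```

The quadratic dependence on radius is why 'coil foil' and split shells, which break up large conducting cross-sections, reduce parasitic heating.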
Estimating the Velocity and Transport of the East Australian Current using Argo, XBT, and Altimetry
NASA Astrophysics Data System (ADS)
Zilberman, N. V.; Roemmich, D. H.; Gille, S. T.
2016-02-01
Western Boundary Currents (WBCs) are the strongest ocean currents in the subtropics, and constitute the main pathway through which warm water-masses transit from low to mid-latitudes in the subtropical gyres of the Atlantic, Pacific, and Indian Oceans. Heat advection by WBCs has a significant impact on heat storage in subtropical mode water formation regions and at high latitudes. The possibility that the magnitude of WBCs might change under greenhouse gas forcing has raised significant concerns. Improving our knowledge of WBC circulation is essential to accurately monitor the oceanic heat budget. Because of the narrowness and strong mesoscale variability of WBCs, estimation of WBC velocity and transport places heavy demands on any potential sampling scheme. One strategy for studying WBCs is to combine complementary data sources. High-resolution bathythermograph (HRX) profiles to 800-m have been collected along transects crossing the East Australian Current (EAC) system at 3-month nominal sampling intervals since 1991. EAC transects, with spatial sampling as fine as 10-15 km, are obtained off Brisbane (27°S) and Sydney (34°S), and crossing the related East Auckland Current north of Auckland. Here, HRX profiles collected since 2004 off Brisbane are merged with Argo float profiles and 1000 m trajectory-based velocities to expand HRX shear estimates to 2000-m and to estimate absolute geostrophic velocity and transport. A method for combining altimetric data with HRX and Argo profiles to mitigate temporal aliasing by the HRX transects and to reduce sampling errors in the HRX/Argo datasets is described. The HRX/Argo/altimetry-based estimate of the time-mean poleward alongshore transport of the EAC off Brisbane is 18.3 Sv, with a width of about 180 km, of which 3.7 Sv recirculates equatorward on a similar spatial scale farther offshore. Geostrophic transport anomalies in the EAC at 27°S show variability of ±1.3 Sv on interannual time scales related to ENSO.
The present calculation is a case study that will be extended to other subtropical WBCs.
Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements
NASA Astrophysics Data System (ADS)
Jakub, Thomas D.
Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurements techniques are not available.
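The ship-drift principle described above can be sketched under stated assumptions: the current is the observed over-ground displacement between two position fixes minus the dead-reckoned through-water displacement, divided by the elapsed time. Note that AIS itself reports speed and course over ground, so a speed-through-water source (e.g. the ship's log) is assumed here, along with a flat-earth approximation for the short baselines discussed above; all numbers are synthetic.

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m

def drift_current(lat1, lon1, lat2, lon2, dt_s, stw_ms, hdg_deg):
    """Current vector (u_east, v_north, m/s) from two fixes dt_s apart.

    stw_ms: speed through water; hdg_deg: heading (both assumed known).
    Flat-earth approximation, valid only for short tracks.
    """
    lat0 = math.radians((lat1 + lat2) / 2.0)
    # Observed over-ground displacement from the two fixes
    dn = math.radians(lat2 - lat1) * R_EARTH
    de = math.radians(lon2 - lon1) * R_EARTH * math.cos(lat0)
    # Dead-reckoned through-water displacement
    hdg = math.radians(hdg_deg)
    de_dr = stw_ms * math.sin(hdg) * dt_s
    dn_dr = stw_ms * math.cos(hdg) * dt_s
    return (de - de_dr) / dt_s, (dn - dn_dr) / dt_s

# Synthetic check: ship steaming due north at 5 m/s for 600 s,
# displaced eastward by an assumed 0.5 m/s current.
dlat = math.degrees(5.0 * 600 / R_EARTH)
dlon = math.degrees(0.5 * 600 / (R_EARTH * math.cos(math.radians(37.8))))
u, v = drift_current(37.8, -122.4, 37.8 + dlat, -122.4 + dlon,
                     600, 5.0, 0.0)
```

The sensitivity to timing errors noted in the Gulf of Mexico case study enters through `dt_s`: an error in elapsed time biases both displacement terms.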
Disruption of crystalline structure of Sn3.5Ag induced by electric current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Han-Chie; Lin, Kwang-Lung, E-mail: matkllin@mail.ncku.edu.tw; Wu, Albert T.
2016-03-21
This study presented the disruption of the Sn and Ag3Sn lattice structures of Sn3.5Ag solder induced by electric current at 5-7 × 10^3 A/cm^2, investigated with high-resolution transmission electron microscopy and electron diffraction analysis. The electric current stressing induced a high degree of strain in the alloy, as estimated from the X-ray diffraction (XRD) peak shift of the current-stressed specimen. The XRD peak intensities of the Sn matrix and the Ag3Sn intermetallic compound diminished to nearly undetectable levels after 2 h of current stressing. The electric current stressing gave rise to a high dislocation density of up to 10^17/m^2. The grain morphology of the Sn matrix became invisible after prolonged current stressing as a result of the coalescence of dislocations.
Present status of astronomical constants
NASA Astrophysics Data System (ADS)
Fukushima, T.
Additional information is given to the previous report on recent progress in the determination of astronomical constants (Fukushima 2000). First noted is the revision of LG to 6.969290134×10^-10, based on the proposal to shift its status from a primary to a defining constant (Petit 2000). Next, attention is focused on the significant update of the correction to the current precession constant, Δp, based on the recent LLR-based determination (Chapront et al. 2000) of -0.3164±0.0030"/cy. By combining this with the equally weighted average of VLBI determinations (Mathews et al. 2000; Petrov 2000; Shirai and Fukushima 2000; Vondrak and Ron 2000), -0.2968±0.0043"/cy, we derived the best estimate of the precession constant as p = 5028.790±0.005"/cy. Also redetermined were some other quantities related to the precession formula, namely the offsets of the Celestial Ephemeris Pole of the International Celestial Reference System: Δψ0 sin ε0 = (-17.0±0.3) mas and Δε0 = (-5.1±0.3) mas. As a result, the obliquity of the ecliptic at the epoch J2000.0 was estimated as ε0 = 23°26'21."4059±0."0003. In summary, the (revised) IAU 2000 File of Current Best Estimates of astronomical constants is presented, which is to replace the former 1994 version (Standish 1995).
NASA Astrophysics Data System (ADS)
Zakharov, A. F.; Jovanović, P.; Borka, D.; Borka Jovanović, V.
2018-04-01
Recently, the LIGO-Virgo collaboration discovered gravitational waves, and in their first publication on the subject the authors also presented a graviton mass constraint, mg < 1.2 × 10^-22 eV [1] (see also more details in a complementary paper [2]). In our previous papers we considered constraints on Yukawa gravity parameters [3] and on the graviton mass from an analysis of the trajectory of the S2 star near the Galactic Center [4]. In this paper we analyze the potential to reduce the upper bound on the graviton mass with future observational data on the trajectories of bright stars near the Galactic Center. Since the gravitational potentials differ in the two cases, the expressions for the relativistic advance in general relativity and for the Yukawa potential are different functions of eccentricity and semimajor axis, which provides an opportunity to improve current estimates of the graviton mass with future observational facilities. In assessing this potential improvement we adopt a conservative strategy: we assume that the trajectories of bright stars and their apocenter advances will be described by general relativity expressions. In contrast with our previous studies, where we presented current constraints on the parameters of Yukawa gravity [5] and the graviton mass [6] from observations of the S2 star, here we quantify the expected improvement of current graviton mass constraints, assuming that the GR predictions for apocenter shifts will be confirmed by future observations. We conclude that if future observations of bright star orbits over around fifty years confirm the GR predictions for their apocenter shifts, the graviton mass could be constrained at a level of around 5 × 10^-23 eV, slightly better than current estimates obtained with LIGO observations.
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blur can appear in face images because of movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it. PMID:26334282
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boden, T.A.; Marland, G.; Andres, R.J.
1995-12-01
This document describes the compilation, content, and format of the most comprehensive CO2-emissions database currently available. The database includes global, regional, and national annual estimates of CO2 emissions resulting from fossil-fuel burning, cement manufacturing, and gas flaring in oil fields for 1950-92, as well as the energy production, consumption, and trade data used for these estimates. The methods of Marland and Rotty (1983) are used to calculate these emission estimates. For the first time, the methods and data used to calculate CO2 emissions from gas flaring are presented. This CO2-emissions database is useful for carbon-cycle research, provides estimates of the rate at which fossil-fuel combustion has released CO2 to the atmosphere, and offers baseline estimates for those countries compiling 1990 CO2-emissions inventories.
Contributions of past and present human generations to committed warming caused by carbon dioxide.
Friedlingstein, Pierre; Solomon, Susan
2005-08-02
We developed a highly simplified approach to estimate the contributions of the past and present human generations to the increase of atmospheric CO(2) and associated global average temperature increases. For each human generation of adopted 25-year length, we use simplified emission test cases to estimate the committed warming passed to successive children, grandchildren, and later generations. We estimate that the last and the current generation contributed approximately two thirds of the present-day CO(2)-induced warming. Because of the long time scale required for removal of CO(2) from the atmosphere, as well as the time delays characteristic of physical responses of the climate system, global mean temperatures are expected to increase by several tenths of a degree for at least the next 20 years even if CO(2) emissions were immediately cut to zero; that is, there is a commitment to additional CO(2)-induced warming even in the absence of emissions. If the rate of increase of CO(2) emissions were to continue up to 2025 and then were cut to zero, a temperature increase of approximately 1.3 degrees C compared to preindustrial conditions would still occur in 2100, whereas a constant-CO(2)-emissions scenario after 2025 would more than double the 2100 warming. These calculations illustrate the manner in which each generation inherits substantial climate change caused by CO(2) emissions that occurred previously, particularly those of their parents, and show that current CO(2) emissions will contribute significantly to the climate change of future generations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostinetti, P., E-mail: piero.agostinetti@igi.cnr.it; Serianni, G.; Veltri, P.
The Radio Frequency (RF) negative hydrogen ion source prototype has been chosen for the ITER neutral beam injectors due to its optimal performance and easier maintenance, demonstrated at Max-Planck-Institut für Plasmaphysik, Garching, in hydrogen and deuterium. One key piece of information for better understanding the operating behavior of RF ion sources is the extracted negative ion current density distribution. This distribution—influenced by several factors like source geometry, particle drifts inside the source, cesium distribution, and layout of cesium ovens—is not straightforward to evaluate. The main outcome of the present contribution is the development of a minimization method to estimate the extracted current distribution using the footprint of the beam recorded with mini-STRIKE (Short-Time Retractable Instrumented Kalorimeter). To accomplish this, a series of four computational models has been set up, where the output of one model is the input of the following one. These models compute the optics of the ion beam, evaluate the distribution of the heat deposited on the mini-STRIKE diagnostic calorimeter, and finally give an estimate of the temperature distribution on the back of mini-STRIKE. Several iterations with different extracted current profiles are necessary to find the profile most compatible with the experimental data. A first test of the application of the method to the BAvarian Test Machine for Negative ions (BATMAN) beam is given.
Zorgani, Youssef Agrebi; Koubaa, Yassine; Boussak, Mohamed
2016-03-01
This paper presents a novel method for estimating the load torque of a sensorless indirect stator flux oriented controlled (ISFOC) induction motor drive based on the model reference adaptive system (MRAS) scheme. In essence, the method interconnects a speed estimator with a load torque observer. For this purpose, a MRAS has been applied to estimate the rotor speed with tuned load torque in order to obtain a high-performance ISFOC induction motor drive. The reference and adjustable models, developed in the stationary stator reference frame, are used in the MRAS scheme to estimate the rotor speed from the measured terminal voltages and currents. The load torque is estimated by means of a Luenberger observer based on the mechanical equation. The observer state matrix depends on the mechanical characteristics of the machine, taking into account the viscous friction coefficient and the moment of inertia. Simulation results are presented to validate the proposed method and to highlight the influence of variations in the moment of inertia and the friction coefficient on the speed and the estimated load torque. Experimental results concerning sensorless speed control with load torque estimation are presented in order to validate the effectiveness of the proposed method. The complete sensorless ISFOC drive with load torque estimation is successfully implemented in real time using a digital signal processor board, dSPACE DS1104, for a laboratory 3 kW induction motor. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
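The load torque observer described above is built on the mechanical equation J dω/dt = Te − f·ω − TL, with TL modeled as a slowly varying state. A minimal simulation of such a Luenberger observer is sketched below; the motor parameters and observer gains are made-up illustrative values (gains place the error poles at −50, −50), not those of the paper:

```python
# Hypothetical motor parameters (illustrative, not from the paper)
J, f = 0.01, 0.005        # inertia (kg*m^2), viscous friction (N*m*s/rad)
Te, TL = 5.0, 2.0         # electromagnetic torque, true load torque (N*m)
l1, l2 = 99.5, -25.0      # observer gains: error poles at -50, -50

dt, steps = 1e-4, 10000
w = 0.0                   # true rotor speed (rad/s)
w_hat, TL_hat = 0.0, 0.0  # observer states: speed and load-torque estimates

for _ in range(steps):
    # True mechanical dynamics: J*dw/dt = Te - f*w - TL
    w += dt * (Te - f * w - TL) / J
    # Luenberger observer: model copy plus output-error correction
    err = w - w_hat
    w_hat += dt * ((Te - f * w_hat - TL_hat) / J + l1 * err)
    TL_hat += dt * l2 * err   # constant-load model: d(TL_hat)/dt = l2*err

print(round(TL_hat, 3))  # converges toward the true load torque, 2.0
```

The error dynamics are autonomous, so the estimate converges regardless of the torque input trajectory; in a real drive, ω itself would come from the MRAS speed estimator rather than a measurement.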
Machine Learning Based Diagnosis of Lithium Batteries
NASA Astrophysics Data System (ADS)
Ibe-Ekeocha, Chinemerem Christopher
The depletion of the world's current petroleum reserves, coupled with the negative effects of carbon monoxide and other harmful petrochemical by-products on the environment, is the driving force behind the movement towards renewable and sustainable energy sources. Furthermore, the growing transportation sector consumes a significant portion of the total energy used in the United States. A complete electrification of this sector would require a significant development in electric vehicles (EVs) and hybrid electric vehicles (HEVs), thus translating to a reduction in the carbon footprint. As the market for EVs and HEVs grows, their battery management systems (BMS) need to be improved accordingly. The BMS is not only responsible for optimally charging and discharging the battery, but also for monitoring the battery's state of charge (SOC) and state of health (SOH). SOC, similar to an energy gauge, is a representation of a battery's remaining charge level as a percentage of its total possible charge at full capacity. Similarly, SOH is a measure of a battery's deterioration; thus it is a representation of the battery's age. Neither SOC nor SOH is directly measurable, so it is important that these quantities be estimated accurately. An inaccurate estimation could not only be inconvenient for EV consumers, but also potentially detrimental to the battery's performance and life. Such estimations could be implemented either online, while the battery is in use, or offline, when the battery is at rest. This thesis presents intelligent online SOC and SOH estimation methods using machine learning tools such as artificial neural networks (ANNs). ANNs are a powerful generalization tool if programmed and trained effectively. Unlike other estimation strategies, the techniques used require no battery modeling or knowledge of battery internal parameters; rather, they use the battery's voltage, charge/discharge current, and ambient temperature measurements to accurately estimate its SOC and SOH.
The developed algorithms are evaluated experimentally using two different batteries, namely lithium iron phosphate (LiFePO4) and lithium titanate (LTO), both subjected to constant and dynamic current profiles. Results highlight the robustness of these algorithms to the battery's nonlinear dynamic nature, hysteresis, aging, dynamic current profiles, and parametric uncertainties. Consequently, these methods are suitable and effective if incorporated into the BMS of EVs, HEVs, and other battery-powered devices.
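As a toy illustration of the general idea (not the thesis's actual networks, data, or cells), a small feedforward network can be trained to map measured voltage and current to SOC. Here the "battery" is a synthetic linear OCV-minus-IR-drop model, so the mapping is easy to learn with a hand-rolled one-hidden-layer MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: V = OCV(SOC) - I*R with a linear OCV (illustrative only)
soc = rng.uniform(0.0, 1.0, 200)
cur = rng.uniform(-1.0, 1.0, 200)            # charge/discharge current (A)
volt = 3.2 + 0.9 * soc - 0.1 * cur           # terminal voltage (V)
X = np.column_stack([volt - 3.65, cur])      # roughly centred inputs
y = soc

# One-hidden-layer MLP trained by full-batch gradient descent on MSE
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()             # linear output = SOC estimate
    g = 2 * (pred - y) / len(y)              # d(MSE)/d(pred)
    W2 -= lr * h.T @ g[:, None]; b2 -= lr * g.sum()
    gh = g[:, None] @ W2.T * (1 - h ** 2)    # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

print(float(np.mean(np.abs(pred - y))))      # small mean absolute SOC error
```

A real estimator would of course train on measured cycling data (including temperature) rather than a synthetic voltage model.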
Electron-neutrino charged-current quasi-elastic scattering in MINERvA
NASA Astrophysics Data System (ADS)
Wolcott, Jeremy
2014-03-01
The electron-neutrino charged-current quasi-elastic (CCQE) cross-section on nuclei is an important input parameter to appearance-type neutrino oscillation experiments. Current experiments typically work from the muon neutrino CCQE cross-section and apply corrections from theoretical arguments to obtain a prediction for the electron neutrino CCQE cross-section, but to date there has been no precise experimental verification of these estimates at an energy scale appropriate to such experiments. We present the current status of a direct measurement of the electron neutrino CCQE differential cross-section as a function of the squared four-momentum transfer to the nucleus, Q2, in MINERvA. This talk will discuss event selection, background constraints, and the flux prediction used in the calculation.
Numerical modelling of electromagnetic loads on fusion device structures
NASA Astrophysics Data System (ADS)
Bettini, Paolo; Furno Palumbo, Maurizio; Specogna, Ruben
2014-03-01
In magnetic confinement fusion devices, during abnormal operations (disruptions) the plasma can move rapidly towards the vessel wall in a vertical displacement event (VDE), producing plasma current asymmetries, vessel eddy currents, and open-field-line halo currents, each of which can exert potentially damaging forces upon the vessel and in-vessel components. This paper presents a methodology to estimate electromagnetic loads on three-dimensional conductive structures surrounding the plasma, which arise from the interaction of the halo currents associated with VDEs and the magnetic field, of the order of a few tesla, needed for plasma confinement. Lorentz forces, calculated by complementary formulations, are used as constraining loads in a linear static structural analysis carried out on a detailed model of the mechanical structures of a representative machine.
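The Lorentz force at the heart of such load calculations is F = J × B per unit volume, or F = I·(L × B) for a lumped conductor segment. A toy order-of-magnitude check with assumed values (not taken from the paper):

```python
import numpy as np

def lorentz_force(current, segment, b_field):
    """Force (N) on a straight conductor segment: F = I * (L x B).

    current -- current through the segment (A)
    segment -- length vector of the segment (m)
    b_field -- magnetic flux density (T)
    """
    return current * np.cross(segment, b_field)

# A 1 kA halo-current path, 1 m long, perpendicular to a 5 T toroidal field
F = lorentz_force(1e3, np.array([0.0, 0.0, 1.0]), np.array([5.0, 0.0, 0.0]))
print(F)  # -> [   0. 5000.    0.], i.e. a 5 kN transverse load on one segment
```

Summing such contributions over a discretized structure is what the complementary field formulations feed into the structural analysis.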
Estimating turbidity current conditions from channel morphology: A Froude number approach
NASA Astrophysics Data System (ADS)
Sequeiros, Octavio E.
2012-04-01
There is a growing need across different disciplines to develop better predictive tools for flow conditions of density and turbidity currents. Apart from resorting to complex numerical modeling or expensive field measurements, little is known about how to estimate gravity flow parameters from scarce available data and how they relate to each other. This study presents a new method to estimate normal flow conditions of gravity flows from channel morphology based on an extensive data set of laboratory and field measurements. The compilation consists of 78 published works containing 1092 combined measurements of velocity and concentration of gravity flows dating as far back as the early 1950s. Because the available data do not span all ranges of the critical parameters, such as bottom slope, a validated Reynolds-averaged Navier-Stokes (RANS) κ-ε numerical model is used to cover the gaps. It is shown that gravity flows fall within a range of Froude numbers spanning 1 order of magnitude centered on unity, as opposed to rivers and open-channel flows which extend to a much wider range. It is also observed that the transition from subcritical to supercritical flow regime occurs around a slope of 1%, with a spread caused by parameters other than the bed slope, like friction and suspended sediment settling velocity. The method is based on a set of equations relating Froude number to bed slope, combined friction, suspended material, and other flow parameters. The applications range from quick estimations of gravity flow conditions to improved numerical modeling and back calculation of missing parameters. A real case scenario of turbidity current estimation from a submarine canyon off the Nigerian coast is provided as an example.
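The densimetric Froude number around which the method is organized is the standard Fr = U / sqrt(g'·h); a minimal sketch with illustrative numbers (the generic textbook formula, not the paper's full set of regression equations):

```python
import math

def densimetric_froude(velocity, reduced_gravity, depth):
    """Fr = U / sqrt(g' * h), with g' = g*(rho_current - rho_ambient)/rho_ambient."""
    return velocity / math.sqrt(reduced_gravity * depth)

g = 9.81
g_prime = g * 0.01                              # 1% excess density (illustrative)
fr = densimetric_froude(2.0, g_prime, 10.0)     # 2 m/s current, 10 m thick
print(round(fr, 2), "supercritical" if fr > 1.0 else "subcritical")
```

With these values Fr ≈ 2, i.e. supercritical, consistent with the paper's finding that natural gravity flows cluster within about an order of magnitude of Fr = 1.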
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base and for monitoring and evaluation of health metrics. However, little is known about the optimal precision of various population-level health and development indicators, which remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated, along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the required sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and by 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
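The link between the intra-class correlation coefficient and required sample size is usually expressed through the design effect, DEFF = 1 + (m − 1)·ρ, which inflates the simple-random-sampling size for a proportion. A hedged sketch with illustrative numbers (the classical formula, not the paper's Bayesian model):

```python
import math

def cluster_sample_size(p, margin, icc, cluster_size, z=1.96):
    """Required sample size for a prevalence survey under cluster sampling.

    n_srs -- simple-random-sampling size for proportion p at the given margin
    deff  -- design effect from within-cluster correlation (icc)
    """
    n_srs = z ** 2 * p * (1 - p) / margin ** 2
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

# 10% prevalence, +/-2% margin, 20 children per cluster, ICC = 0.05
print(cluster_sample_size(0.10, 0.02, 0.05, 20))  # -> 1686
```

The same mechanism drives the paper's headline result: as prevalence falls, p(1 − p)/margin² grows for a fixed relative precision, so surveys sized for higher-prevalence eras become underpowered.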
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
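The Fisher-information reasoning above can be illustrated on a one-parameter toy model rather than a battery: for y(t) = exp(−θt) observed with Gaussian noise, each sample at time t contributes (∂y/∂θ)²/σ² = t²·e^(−2θt)/σ² of information, which peaks at t = 1/θ. So where the input excites the system changes identifiability, which is the core idea behind input shaping:

```python
import numpy as np

def fisher_info(times, theta, sigma=0.05):
    """Scalar Fisher information for y(t) = exp(-theta*t) with iid Gaussian noise."""
    sens = -times * np.exp(-theta * times)   # sensitivity dy/dtheta at each time
    return float(np.sum(sens ** 2) / sigma ** 2)

theta = 2.0
near_optimal = np.full(10, 1.0 / theta)      # all samples near the t = 1/theta peak
poor = np.full(10, 0.05 / theta)             # all samples far too early
print(fisher_info(near_optimal, theta) > fisher_info(poor, theta))  # -> True
```

For multi-parameter battery models the same quantity becomes a matrix, and input shaping maximizes a scalarization of it (e.g. its determinant); the tautology discussed above appears because the sensitivities themselves depend on the unknown θ.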
Satellite Power Systems (SPS) space transportation cost analysis and evaluation
NASA Technical Reports Server (NTRS)
1980-01-01
A picture of Satellite Power System (SPS) space transportation costs at the present time is given with respect to accuracy as stated, reasonableness of the methods used, assumptions made, and the uncertainty associated with the estimates. The approach consists of examining space transportation costs from several perspectives to perform a variety of sensitivity analyses or reviews and to examine the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review, including a review of the stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used for estimating the derived cost uncertainty. The cost estimates used as the best current estimates are included.
Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft
NASA Technical Reports Server (NTRS)
Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish
2017-01-01
In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of their state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. The main research challenges include state estimation, track management, data association, and establishing persistent track validity. To address these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree-of-membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
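The Kalman-filtering building block such trackers rest on can be sketched with a minimal constant-velocity filter fed position-only "radar" measurements. The noise covariances below are assumed round numbers, and the demo uses noiseless measurements; this is not the paper's implementation:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # radar measures position only
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[4.0]])                    # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])             # state estimate [position, velocity]
P = 100.0 * np.eye(2)                    # large initial uncertainty

true_pos, true_vel = 0.0, 3.0
for _ in range(30):
    true_pos += true_vel * dt
    z = np.array([[true_pos]])           # noiseless measurement for the demo
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(round(float(x[1, 0]), 2))          # velocity estimate converges near 3.0
```

Note the filter infers the unmeasured velocity purely from the sequence of position returns, which is exactly what is needed when a non-conformant aircraft reports nothing itself.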
Nitrogen Loading in Jamaica Bay, Long Island, New York: Predevelopment to 2005
Benotti, Mark J.; Abbene, Irene; Terracciano, Stephen A.
2007-01-01
Nitrogen loading to Jamaica Bay, a highly urbanized estuary on the southern shore of western Long Island, New York, has increased from an estimated rate of 35.6 kilograms per day (kg/d) under predevelopment conditions (pre-1900), chiefly as nitrate plus nitrite from ground-water inflow, to an estimated 15,800 kilograms per day as total nitrogen in 2005. The principal point sources are wastewater-treatment plants, combined sewer overflow/stormwater discharge during heavy precipitation, and subway dewatering, which account for 92 percent of the current (2005) nitrogen load. The principal nonpoint sources are landfill leachate, ground-water flow, and atmospheric deposition, which account for 8 percent of the current nitrogen load. The largest single source of nitrogen to Jamaica Bay is wastewater-treatment plants, which account for 89 percent of the nitrogen load. The current and historic contributions of nitrogen from seawater are unknown, although at present, the ocean likely serves as a sink for nitrogen from Jamaica Bay. Currently, concentrations of nitrogen in surface water are high throughout Jamaica Bay, but some areas with relatively little mixing have concentrations that are five times higher than areas that are well mixed.
NASA Astrophysics Data System (ADS)
Archip, Neculai; Fedorov, Andriy; Lloyd, Bryn; Chrisochoides, Nikos; Golby, Alexandra; Black, Peter M.; Warfield, Simon K.
2006-03-01
A major challenge in neurosurgical oncology is to achieve maximal tumor removal while avoiding postoperative neurological deficits. Therefore, estimation of the brain deformation during the image-guided tumor resection process is necessary. While anatomic MRI is highly sensitive for intracranial pathology, its specificity is limited; different pathologies may have a very similar appearance on anatomic MRI. Moreover, since fMRI and diffusion tensor imaging are not currently available during surgery, non-rigid registration of preoperative MR with intra-operative MR is necessary. This article presents a translational research effort that aims to integrate a number of state-of-the-art technologies for MRI-guided neurosurgery at Brigham and Women's Hospital (BWH). Our ultimate goal is to routinely provide the neurosurgeons with accurate information about brain deformation during surgery. The current system is tested during the weekly neurosurgeries in the open magnet at the BWH. The preoperative data is processed prior to the surgery, while both rigid and non-rigid registration algorithms are run in the vicinity of the operating room. The system is tested on 9 image datasets from 3 neurosurgery cases. A method based on edge detection is used to quantitatively validate the results. The 95% Hausdorff distance between points of the edges is used to estimate the accuracy of the registration. Overall, the minimum error is 1.4 mm, the mean error 2.23 mm, and the maximum error 3.1 mm. The mean ratio between brain deformation estimation and rigid alignment is 2.07, demonstrating that our results can be 2.07 times more precise than the current technology. The major contribution of the presented work is the rigid and non-rigid alignment of the pre-operative fMRI with intra-operative 0.5T MRI achieved during neurosurgery.
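The 95% Hausdorff distance used for validation can be computed directly from the two edge point sets: take each point's nearest-neighbor distance to the other set and report the 95th percentile (robust to outlier edge points). A generic numpy sketch with synthetic points, not the paper's pipeline:

```python
import numpy as np

def hausdorff_95(a, b):
    """95th-percentile symmetric Hausdorff distance between point sets a and b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)   # each point of a to its nearest neighbour in b
    d_ba = d.min(axis=0)   # each point of b to its nearest neighbour in a
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

edges = np.random.default_rng(1).uniform(0, 100, (500, 3))   # synthetic edge points
shifted = edges + np.array([2.0, 0.0, 0.0])                  # a 2 mm rigid misalignment
print(hausdorff_95(edges, shifted))
```

For a pure translation the value comes out at (or just under) the shift magnitude, which is why it serves as a registration-error surrogate.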
Effects of the Ionosphere on Passive Microwave Remote Sensing of Ocean Salinity from Space
NASA Technical Reports Server (NTRS)
LeVine, D. M.; Abaham, Saji; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Among the remote sensing applications currently being considered from space is the measurement of sea surface salinity. The salinity of the open ocean is important for understanding ocean circulation and for modeling energy exchange with the atmosphere. Passive microwave remote sensors operating near 1.4 GHz (L-band) could provide data needed to fill the gap in current coverage and to complement in situ arrays being planned to provide subsurface profiles in the future. However, the dynamic range of the salinity signal in the open ocean is relatively small, and propagation effects along the path from surface to sensor must be taken into account. In particular, Faraday rotation and even attenuation/emission in the ionosphere can be important sources of error. The purpose of this work is to estimate the magnitude of these effects in the context of a future remote sensing system in space to measure salinity at L-band. Data will be presented as a function of time, location, and solar activity using IRI-95 to model the ionosphere. The ionosphere presents two potential sources of error for the measurement of salinity: rotation of the polarization vector (Faraday rotation) and attenuation/emission. Estimates of the effect of these two phenomena on passive remote sensing over the oceans at L-band (1.4 GHz) are presented.
NASA Technical Reports Server (NTRS)
Haggerty, Julie; McDonough, Frank; Black, Jennifer; Landott, Scott; Wolff, Cory; Mueller, Steven; Minnis, Patrick; Smith, William, Jr.
2008-01-01
Operational products used by the U.S. Federal Aviation Administration to alert pilots of hazardous icing provide nowcast and short-term forecast estimates of the potential for the presence of supercooled liquid water and supercooled large droplets. The Current Icing Product (CIP) system employs basic satellite-derived information, including a cloud mask and cloud top temperature estimates, together with multiple other data sources to produce a gridded, three-dimensional, hourly depiction of icing probability and severity. Advanced satellite-derived cloud products developed at the NASA Langley Research Center (LaRC) provide a more detailed description of cloud properties (primarily at cloud top) compared to the basic satellite-derived information currently used in CIP. Cloud hydrometeor phase, liquid water path, cloud effective temperature, and cloud top height as estimated by the LaRC algorithms are integrated into the CIP fuzzy logic scheme, and a confidence value is determined. Examples of CIP products before and after the integration of the LaRC satellite-derived products will be presented at the conference.
Learning to select useful landmarks.
Greiner, R; Isukapalli, R
1996-01-01
To navigate effectively, an autonomous agent must be able to quickly and accurately determine its current location. Given an initial estimate of its position (perhaps based on dead-reckoning) and an image taken of a known environment, our agent first attempts to locate a set of landmarks (real-world objects at known locations), then uses their angular separation to obtain an improved estimate of its current position. Unfortunately, some landmarks may not be visible, or worse, may be confused with other landmarks, resulting both in time wasted searching for the undetected landmarks and in further errors in the agent's estimate of its position. To address these problems, we propose a method that uses previous experiences to learn a selection function that, given the set of landmarks that might be visible, returns the subset that can be used to reliably provide an accurate registration of the agent's position. We use statistical techniques to prove that the learned selection function is, with high probability, effectively at a local optimum in the space of such functions. This paper also presents empirical evidence, using real-world data, that demonstrates the effectiveness of our approach.
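The registration step itself reduces to intersecting bearing lines: with two or more landmarks at known positions, the agent's position follows from a small linear least-squares problem. The sketch below uses absolute bearings for simplicity (the paper works with angular separations between landmarks) and is a generic textbook approach, not the paper's algorithm:

```python
import numpy as np

def locate_from_bearings(landmarks, bearings):
    """Estimate 2-D position from absolute bearings to known landmarks.

    Each bearing theta_i to landmark L_i constrains the position p to a line:
    (L_i - p) parallel to (cos theta_i, sin theta_i), i.e.
    sin(theta_i)*px - cos(theta_i)*py = Lx*sin(theta_i) - Ly*cos(theta_i).
    """
    s, c = np.sin(bearings), np.cos(bearings)
    A = np.column_stack([s, -c])
    b = landmarks[:, 0] * s - landmarks[:, 1] * c
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_p = np.array([3.0, 4.0])
bearings = np.arctan2(landmarks[:, 1] - true_p[1], landmarks[:, 0] - true_p[0])
print(locate_from_bearings(landmarks, bearings))  # recovers approximately [3. 4.]
```

A misidentified landmark corrupts one row of this system, which is exactly why learning to select only reliable landmarks pays off.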
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled in large numbers into packs, and batteries are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed, combined with an electrochemical model, to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
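Any such SOC estimator is ultimately anchored in Coulomb counting, SOC(t) = SOC0 − (1/Q)∫I dt, with the model-based correction layered on top. A minimal sketch of the counting step alone, with illustrative cell parameters (not the paper's adaptive estimator):

```python
def coulomb_count(soc0, currents, dt, capacity_ah):
    """Integrate measured current (A, discharge positive) to update SOC."""
    soc = soc0
    for i in currents:
        soc -= i * dt / (capacity_ah * 3600.0)  # capacity in Ah -> coulombs
    return soc

# A 2.5 Ah cell discharged at 1C (2.5 A) for 30 minutes: SOC drops 100% -> 50%
soc = coulomb_count(1.0, [2.5] * 1800, dt=1.0, capacity_ah=2.5)
print(round(soc, 3))  # -> 0.5
```

Pure integration drifts with current-sensor bias, which is precisely why the paper fuses it with a voltage model through a probability-based adaptive filter.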
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Of all the SSFP variants considered, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
Interstellar Travel without 'Magic'
NASA Astrophysics Data System (ADS)
Woodcock, G.
The possibility of interstellar space travel has become a popular subject. Distances of light years are an entirely new realm for human space travel, and new means of propulsion are needed. Speculation about propulsion has included "magic" such as space warps and faster-than-light travel; known physics such as antimatter, for which no practical implementation is known; and physics for which current research offers at least a hint of implementation, i.e., fusion. Performance estimates are presented for the latter and used to create vehicle concepts. Fusion propulsion will mean travel times of hundreds of years, so we adopt the "space colony" concepts of O'Neill as a ship design that could support a small civilization indefinitely; this provides the technical means. Economic reasoning is presented, arguing that development and production of "space colony" habitats for relief of Earth's population, with the addition of fusion engines, will lead to vessels that can go interstellar. Scenarios are presented and a speculative estimate of a timetable is given.
A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.
Kim, Ji Chul
2017-01-01
Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.
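A toy version of such an oscillator bank can be sketched in a few lines. The snippet below is not the model of the paper, merely a minimal Hopf-type nonlinear oscillator bank with hypothetical parameters (the damping `alpha`, saturation `beta`, and tunings are illustrative choices), driven by a pure tone to show that the oscillator tuned to the stimulus frequency builds the largest response:

```python
import numpy as np

# Minimal sketch of a bank of tonotopically tuned nonlinear (Hopf-type)
# oscillators driven by a pure tone: the oscillator whose natural
# frequency matches the stimulus builds up the largest amplitude, a
# crude analogue of the pitch-salience traces described in the abstract.
fs = 4000.0                                     # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
stim = 0.1 * np.exp(2j * np.pi * 220.0 * t)     # complex 220 Hz tone

freqs = np.array([196.0, 220.0, 247.0])         # oscillator tunings (Hz)
z = np.full(freqs.size, 0.01 + 0j)              # oscillator states
alpha, beta = -1.0, -100.0                      # damped, saturating regime
for s in stim:
    # Exponential (exact linear-part) update keeps the integration stable
    z = z * np.exp((alpha + 2j * np.pi * freqs + beta * np.abs(z) ** 2) / fs) + s / fs
amp = np.abs(z)
print(freqs[np.argmax(amp)])  # the 220 Hz oscillator resonates most
```

The off-resonance oscillators stay small because their drive averages out over cycles, so relative amplitude across the bank acts as a simple salience measure.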
Gyrotron-driven high current ECR ion source for boron-neutron capture therapy neutron generator
NASA Astrophysics Data System (ADS)
Skalyga, V.; Izotov, I.; Golubev, S.; Razin, S.; Sidorov, A.; Maslennikova, A.; Volovecky, A.; Kalvas, T.; Koivisto, H.; Tarvainen, O.
2014-12-01
Boron-neutron capture therapy (BNCT) is a prospective treatment method for radiation-resistant tumors. Unfortunately, its development is strongly held back by several physical and medical problems. Neutron sources for BNCT are currently limited to nuclear reactors and accelerators; for BNCT investigations to spread widely, a more compact and inexpensive neutron source would be preferable. In the present paper, an approach to building a compact D-D neutron generator based on a high-current ECR ion source is suggested. Results on the production of dense proton beams are presented, and the formation of ion beams with current densities up to 600 mA/cm2 is demonstrated. Estimates based on the experimental results show that a neutron target bombarded by such deuteron beams would theoretically yield a neutron flux density up to 6·10^10 cm^-2 s^-1. Thus, a neutron generator based on a high-current deuteron ECR source with powerful gyrotron plasma heating could fulfill the BNCT requirements at a significantly lower price, smaller size, and greater ease of operation in comparison with existing reactors and accelerators.
Lee, Hyunyeol; Sohn, Chul-Ho; Park, Jaeseok
2017-07-01
To develop a current-induced, alternating reversed dual-echo-steady-state-based magnetic resonance electrical impedance tomography method for joint estimation of tissue relaxation and electrical properties. The proposed method reverses the readout gradient configuration of the conventional sequence, such that steady-state-free-precession (SSFP)-ECHO is produced earlier than SSFP-free-induction-decay (FID), while alternating current pulses are applied between the two SSFP signals to secure high sensitivity of SSFP-FID to the injected current. Additionally, alternating reversed dual-echo-steady-state signals are modulated by employing variable flip angles over two orthogonal injections of current pulses. Ratiometric signal models are analytically constructed, from which T1, T2, and current-induced Bz are jointly estimated by solving a nonlinear inverse problem for conductivity reconstruction. Numerical simulations and experimental studies are performed to investigate the feasibility of the proposed method in estimating relaxation parameters and conductivity. The proposed method, compared with conventional magnetic resonance electrical impedance tomography, enables rapid data acquisition and simultaneous estimation of T1, T2, and current-induced Bz, yielding a comparable level of signal-to-noise ratio in the parameter estimates while retaining the relative conductivity contrast. We successfully demonstrated the feasibility of the proposed method in jointly estimating tissue relaxation parameters as well as conductivity distributions. It can be a promising, rapid imaging strategy for quantitative conductivity estimation. Magn Reson Med 78:107-120, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Edwards, Mervyn; Nathanson, Andrew; Wisch, Marcus
2014-01-01
The objective of the current study was to estimate the benefit for Europe of fitting precrash braking systems to cars that detect pedestrians and autonomously brake the car to prevent or lower the speed of the impact with the pedestrian. The analysis was divided into 2 main parts: (1) develop and apply a methodology to estimate the benefit for Great Britain and Germany; (2) scale the Great Britain and German results to give an indicative estimate for Europe (EU27). The calculation methodology developed to estimate the benefit was based on 2 main steps: 1. Calculate the change in the impact speed distribution curve for pedestrian casualties hit by the fronts of cars assuming pedestrian autonomous emergency braking (AEB) system fitment. 2. From this, calculate the change in the number of fatally, seriously, and slightly injured casualties by using the relationship between risk of injury and the casualty impact speed distribution to sum the resulting risks for each individual casualty. The methodology was applied to Great Britain and German data for 3 types of pedestrian AEB systems, representative of (1) currently available systems; (2) future systems with improved performance, expected to be available in the next 2-3 years; and (3) a reference limit system, which has the best performance currently thought to be technically feasible. Nominal benefits estimated for Great Britain ranged from £119 million to £385 million annually and for Germany from €63 million to €216 million annually, depending on the type of AEB system assumed fitted. Sensitivity calculations showed that the estimated benefit could vary from about half to twice the nominal estimate, depending on factors such as whether or not the system would function at night and the road friction assumed.
Based on scaling of estimates made for Great Britain and Germany, the nominal benefit of implementing pedestrian AEB systems on all cars in Europe was estimated to range from about €1 billion per year for current generation AEB systems to about €3.5 billion for a reference limit system (i.e., best performance thought technically feasible at present). Dividing these values by the number of new passenger cars registered in Europe per year gives an indication that the cost of a system per car should be less than ∼€80 to ∼€280 for it to be cost effective. The potential benefit of fitting AEB systems to cars in Europe for pedestrian protection has been estimated and the results interpreted to indicate the upper limit of cost for a system to allow it to be cost effective.
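The two-step benefit calculation described above can be illustrated with a toy numerical sketch. All numbers below (the impact speeds, the assumed AEB speed reduction, and the logistic risk curve `fatality_risk`) are hypothetical stand-ins, not the study's data:

```python
import numpy as np

# Illustrative two-step benefit calculation (all numbers hypothetical):
# 1) shift each casualty's impact speed down by the AEB speed reduction,
# 2) sum injury risks before and after using a risk-vs-speed curve.
def fatality_risk(v_kmh):
    """Hypothetical logistic risk-vs-impact-speed curve."""
    return 1.0 / (1.0 + np.exp(-(v_kmh - 60.0) / 10.0))

speeds = np.array([20.0, 35.0, 50.0, 65.0, 80.0])  # casualty impact speeds
reduction = 15.0                                   # assumed AEB speed cut, km/h
speeds_aeb = np.clip(speeds - reduction, 0.0, None)

baseline = fatality_risk(speeds).sum()       # expected fatalities, no AEB
with_aeb = fatality_risk(speeds_aeb).sum()   # expected fatalities, with AEB
print(baseline > with_aeb)  # risk sum falls after the speed shift
```

The difference between the two risk sums, monetised per casualty, is what the study's nominal benefit figures aggregate over national casualty databases.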
Electrodynamics panel presentation
NASA Technical Reports Server (NTRS)
Mccoy, J.
1986-01-01
The Plasma Motor Generator (PMG) concept is explained in detail. The PMG tether systems used to calculate the estimated performance data are described. The voltage drops and current-contact geometries involved in the operation of an electrodynamic tether are displayed, illustrating the comparative behavior of hollow cathodes, electron guns, and passive collectors for coupling current into the ionosphere. The basic PMG design, involving a massive tether cable with little or no satellite mass at the far end(s), is also described. The Jupiter mission and its use of electrodynamic tethers are discussed. The need for demonstration experiments is stressed.
FY11 Facility Assessment Study for Aeronautics Test Program
NASA Technical Reports Server (NTRS)
Loboda, John A.; Sydnor, George H.
2013-01-01
This paper presents the approach and results for the Aeronautics Test Program (ATP) FY11 Facility Assessment Project. ATP commissioned assessments in FY07 and FY11 to aid in understanding the current condition and reliability of its facilities and their ability to meet current and future (five-year horizon) test requirements. The principal output of the assessment was a database of facility-unique, prioritized investment projects with budgetary cost estimates. This database was also used to identify trends in the condition of facility systems.
Estimation of Lightning Levels on a Launcher Using a BEM-Compressed Model
NASA Astrophysics Data System (ADS)
Silly, J.; Chaigne, B.; Aspas-Puertolas, J.; Herlem, Y.
2016-05-01
As development cycles in the space industry are being considerably reduced, it seems mandatory to deploy in parallel fast analysis methods for engineering purposes, but without sacrificing accuracy. In this paper we present the application of such methods to early Phase A-B [1] evaluation of lightning constraints on a launch vehicle. A complete 3D parametric model of a launcher has thus been developed and simulated with a Boundary Element Method (BEM) frequency-domain simulator equipped with a low-frequency algorithm. The time-domain values of the observed currents and fields are obtained by post-processing using an inverse discrete Fourier transform (IDFT). This model is used for lightning studies; in particular, the simulations are useful for analysing the influence of lightning injected currents on the resulting currents circulating on external cable raceways. The description of the model and some of these results are presented in this article.
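The IDFT post-processing step can be sketched as follows. The transfer function `H`, the double-exponential waveform constants, and the sampling settings below are illustrative placeholders, not values from the study:

```python
import numpy as np

# Hypothetical post-processing step: a BEM frequency sweep gives a
# transfer function H(f) between injected and induced current; the
# time-domain response to a lightning waveform is recovered by IDFT.
fs = 1.0e8                      # sampling rate, Hz
n = 4096
t = np.arange(n) / fs
# Double-exponential lightning current shape (illustrative constants)
i_inj = np.exp(-t / 50e-6) - np.exp(-t / 0.5e-6)
I_inj = np.fft.rfft(i_inj)                     # excitation spectrum
f = np.fft.rfftfreq(n, 1 / fs)
# Hypothetical first-order transfer function of a raceway coupling path
H = 1.0 / (1.0 + 1j * f / 1.0e6)
i_out = np.fft.irfft(I_inj * H, n)             # time-domain induced current
print(i_out.shape)
```

In practice the BEM solver supplies `H` at the swept frequencies; the inverse transform then yields the circulated raceway currents in the time domain for comparison with lightning qualification levels.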
Long-term financing needs for HIV control in sub-Saharan Africa in 2015-2050: a modelling study.
Atun, Rifat; Chang, Angela Y; Ogbuoji, Osondu; Silva, Sachin; Resch, Stephen; Hontelez, Jan; Bärnighausen, Till
2016-03-06
To estimate the present value of current and future funding needed for HIV treatment and prevention in 9 sub-Saharan African (SSA) countries that account for 70% of the HIV burden in Africa under different scenarios of intervention scale-up; to analyse the gaps between current expenditures and funding obligations; and to discuss the policy implications of future financing needs. We used the Goals module from Spectrum, and applied the most up-to-date cost and coverage data to provide a range of estimates for future financing obligations. The four scale-up scenarios vary by treatment initiation threshold and service coverage level. We compared the model projections to current domestic and international financial sources available in the selected SSA countries. In the 9 SSA countries, the estimated resources required for HIV prevention and treatment in 2015-2050 range from US$98 billion, to maintain current coverage levels for treatment and prevention with eligibility for treatment initiation at a CD4 count of <500/mm(3), to US$261 billion if treatment were extended to all HIV-positive individuals and prevention scaled up. With the addition of new funding obligations for HIV, which arise implicitly through the commitment to achieve higher than current treatment coverage levels, overall financial obligations (the sum of debt levels and the present value of the stock of future HIV funding obligations) would rise substantially. Investing upfront in the scale-up of HIV services to achieve high coverage levels will reduce HIV incidence and future prevention and treatment expenditures by realising the long-term preventive effects of ART in reducing HIV transmission. Future obligations are too substantial for most SSA countries to be met from domestic sources alone. New sources of funding, in addition to domestic sources, include innovative financing. Debt sustainability for a sustained HIV response is an urgent imperative for affected countries and donors.
Adult Current Smoking: Differences in Definitions and Prevalence Estimates—NHIS and NSDUH, 2008
Ryan, Heather; Trosclair, Angela; Gfroerer, Joe
2012-01-01
Objectives. To compare prevalence estimates and assess issues related to the measurement of adult cigarette smoking in the National Health Interview Survey (NHIS) and the National Survey on Drug Use and Health (NSDUH). Methods. 2008 data on current cigarette smoking and current daily cigarette smoking among adults ≥18 years were compared. The standard NHIS current smoking definition, which screens for lifetime smoking ≥100 cigarettes, was used. For NSDUH, both the standard current smoking definition, which does not screen, and a modified definition applying the NHIS current smoking definition (i.e., with screen) were used. Results. NSDUH consistently yielded higher current cigarette smoking estimates than NHIS and lower daily smoking estimates. However, with use of the modified NSDUH current smoking definition, a notable number of subpopulation estimates became comparable between surveys. Younger adults and racial/ethnic minorities were most impacted by the lifetime smoking screen, with Hispanics being the most sensitive to differences in smoking variable definitions among all subgroups. Conclusions. Differences in current cigarette smoking definitions appear to have a greater impact on smoking estimates in some sub-populations than others. Survey mode differences may also limit intersurvey comparisons and trend analyses. Investigators are cautioned to use data most appropriate for their specific research questions. PMID:22649464
NASA Technical Reports Server (NTRS)
Lee, T.; Boland, D. F., Jr.
1980-01-01
This document presents the results of an extensive survey and comparative evaluation of current atmosphere and wind models for inclusion in the Langley Atmospheric Information Retrieval System (LAIRS). It includes recommended models for use in LAIRS, estimated accuracies for the recommended models, and functional specifications for the development of LAIRS.
Persons of Spanish Origin in the United States: March 1985 (Advance Report).
ERIC Educational Resources Information Center
Current Population Reports, 1985
1985-01-01
This brief report presents preliminary data on the demographic, social, and economic characteristics of people of Spanish origin in the United States. The data were collected by the Census Bureau in a supplement to the March 1985 Current Population Survey (CPS), which used independent postcensal estimates on Hispanics. The Hispanic population has…
Cargo/Logistics Airlift System Study (CLASS), Executive Summary
NASA Technical Reports Server (NTRS)
Norman, J. M.; Henderson, R. D.; Macey, F. C.; Tuttle, R. P.
1978-01-01
The current air cargo system is analyzed, along with studies of advanced air cargo systems. A forecast of advanced air cargo system demand is presented with cost estimates. It is concluded that there is a need for a dedicated advanced air cargo system and that, with the application of advanced technology, reductions of 45% in air freight rates may be achieved.
The toxic equivalency factor (TEF) approach has been widely accepted as the most feasible and plausible method presently available for evaluating potential health risks associated with exposure to mixtures of polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofuran...
Coherent Power Analysis in Multi-Level Studies Using Design Parameters from Surveys
ERIC Educational Resources Information Center
Rhoads, Christopher
2016-01-01
Current practice for conducting power analyses in hierarchical trials using survey-based ICC and effect size estimates may misestimate power because ICCs are not adjusted to account for treatment effect heterogeneity. Results presented in Table 1 show that the necessary adjustments can be quite large or quite small. Furthermore, power…
Design of prototype charged particle fog dispersal unit
NASA Technical Reports Server (NTRS)
Collins, F. G.; Frost, W.; Kessel, P.
1981-01-01
The unit was designed to be easily modified so that certain features that influence the output current and particle size distribution could be examined. An experimental program was designed to measure the performance of the unit. The program described includes measurements in a fog chamber and in the field. Features of the nozzle and estimated nozzle characteristics are presented.
Galinsky, Vitaly L; Martinez, Antigona; Paulus, Martin P; Frank, Lawrence R
2018-04-13
In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states.
Ha, Min-Jae
2018-01-01
This study presents a regional oil spill risk assessment and capacities for marine oil spill response in Korea. The risk assessment of oil spills is carried out using both causal factors and environmental/economic factors. The weight of each parameter is calculated using the Analytic Hierarchy Process (AHP). Final regional risk degrees of oil spill are estimated by combining the degree and weight of each existing parameter. From these estimated risk levels, oil recovery capacities were determined with reference to the recovery target of 7,500 kl specified in existing standards. The estimates were deemed feasible, and provided a more balanced distribution of resources than existing capacities set according to current standards.
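The AHP weighting step can be illustrated with a minimal sketch using Saaty's principal-eigenvector method. The 3×3 pairwise comparison matrix and the criterion interpretations below are hypothetical, not the study's parameters:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights as the normalised principal eigenvector
    of a pairwise comparison matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)            # index of the principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()                  # normalise weights to sum to 1

# Hypothetical 3-criterion comparison (e.g. spill frequency vs.
# environmental sensitivity vs. economic value); values illustrative only.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(A)
print(w)  # weights sum to 1; the first criterion dominates
```

The final regional risk degree is then a weighted sum of the normalised parameter scores using these weights.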
An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC
NASA Astrophysics Data System (ADS)
Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng
2017-04-01
This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage-source-converter-based high-voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimensions unchanged. Accurate and fast convergence for the AC/DC system is achieved by the adaptive weighting function method, which also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity, and convergence of the new method.
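A generic flavour of the idea, adapting only the measurement weights while the matrix dimensions stay fixed, can be sketched with a linear weighted-least-squares estimator. The residual-based reweighting rule below is a hypothetical stand-in, not the paper's adaptive weighting function, and the measurement matrix and values are toy data:

```python
import numpy as np

def adaptive_wls(H, z, sigma, n_iter=5):
    """Generic linear weighted-least-squares state estimation with a
    simple residual-based weight adaptation.  Only the weights change
    between iterations; the matrix dimensions remain fixed.  This is a
    sketch, not the paper's exact weighting function."""
    w = 1.0 / np.asarray(sigma, float) ** 2          # initial weights
    for _ in range(n_iter):
        W = np.diag(w)
        # Normal equations: x = (H^T W H)^-1 H^T W z
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        r = z - H @ x                                # measurement residuals
        # Down-weight measurements with large residuals (hypothetical rule)
        w = 1.0 / (np.asarray(sigma, float) ** 2 + r ** 2)
    return x

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
z = np.array([1.0, 2.0, 3.1, -0.9])    # roughly consistent with x = (1, 2)
x_hat = adaptive_wls(H, z, sigma=[0.1, 0.1, 0.1, 0.1])
print(x_hat)  # close to [1, 2]
```

In an AC/DC formulation the same structure applies, with the DC-side states solved first and the AC-side measurement model sharing the fixed gain-matrix dimensions.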
A robust bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
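For reference, the classical (non-robust, non-Bayesian) sample estimate of Lin's CCC that such methods build on can be computed directly; the toy data below are illustrative:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (sample version):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()   # covariance (biased form)
    sx2, sy2 = x.var(), y.var()          # variances (biased form)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.1, 2.9, 4.2]   # close agreement, so CCC is near 1
print(ccc(x, y))
```

Unlike the Pearson correlation, the CCC penalises both location and scale shifts between the two raters, which is why it is preferred for agreement studies; the robust Bayesian approach replaces the implicit normal model with a multivariate Student's t.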
Transcriptome-derived stromal and immune scores infer clinical outcomes of patients with cancer.
Liu, Wei; Ye, Hua; Liu, Ying-Fu; Xu, Chao-Qun; Zhong, Yue-Xian; Tian, Tian; Ma, Shi-Wei; Tao, Huan; Li, Ling; Xue, Li-Chun; He, Hua-Qin
2018-04-01
The stromal and immune cells that form the tumor microenvironment serve a key role in the aggressiveness of tumors. Current tumor-centric interpretations of cancer transcriptome data ignore the roles of stromal and immune cells. The aim of the present study was to investigate the clinical utility of stromal and immune cells in tissue-based transcriptome data. The 'Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data' (ESTIMATE) algorithm was used to probe diverse cancer datasets, and the fraction of stromal and immune cells in tumor tissues was scored. The association between the ESTIMATE scores and patient survival data was assessed; it was indicated that the two scores have implications for patient survival, metastasis and recurrence. Analysis of a colorectal cancer progression dataset revealed that decreased levels of immune cells could serve an important role in cancer progression. The results of the present study indicated that transcriptome-derived stromal and immune scores may be a useful indicator of cancer prognosis.
Slip-based terrain estimation with a skid-steer vehicle
NASA Astrophysics Data System (ADS)
Reina, Giulio; Galati, Rocco
2016-10-01
In this paper, a novel approach for online terrain characterisation is presented using a skid-steer vehicle. In the context of this research, terrain characterisation refers to the estimation of physical parameters that affect the terrain's ability to support vehicular motion. These parameters are inferred from modelling the kinematic and dynamic behaviour of a skid-steer vehicle, which reveals the underlying relationships governing the vehicle-terrain interaction. The concept of the slip track is introduced as a measure of the slippage experienced by the vehicle during turning motion. The proposed terrain estimation system relies on common onboard sensors, that is, wheel encoders, electrical current sensors and a yaw rate gyroscope. Using these components, the system can characterise terrain online during normal vehicle operations. Experimental results obtained from different surfaces are presented to validate the system in the field, showing its effectiveness and potential benefits for implementing adaptive driving assistance systems or automatically updating the parameters of onboard control and planning algorithms.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
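A one-dimensional toy version of the alternation, an EM classification step feeding a zeroth-order Tikhonov-regularised update, can be sketched as follows. The class settings, noise level, and the regularisation weight `lam` are all illustrative, not the paper's DOT configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D analogue of the iteration: alternate a Tikhonov-regularised
# update of the parameter image with an EM classification of its pixels
# into Gaussian classes (all settings illustrative).
true = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])
data = true + 0.3 * rng.standard_normal(true.size)   # noisy observation

x = data.copy()
means = np.array([0.5, 2.5])          # initial class means
var = 0.5                             # fixed class variance (for brevity)
for _ in range(10):
    # E-step: responsibility of each class for each pixel
    d = -((x[:, None] - means[None, :]) ** 2) / (2 * var)
    r = np.exp(d)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update class means (class variances kept fixed here)
    means = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    # Reconstruction step: zeroth-order Tikhonov pull of each pixel
    # toward the prior mean implied by its classification
    prior_mean = r @ means
    lam = 0.5
    x = (data + lam * prior_mean) / (1 + lam)
print(np.round(means, 1))  # the means approach the two class levels
```

The classification sharpens the contrast between the two regions while the Tikhonov term suppresses noise, which mirrors the contrast enhancement the paper reports for the reconstructed optical parameters.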
NASA Astrophysics Data System (ADS)
Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.
2010-12-01
The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilating snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as an input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF into the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
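A minimal sketch of a single EnKF analysis step (perturbed-observation form) for a scalar state illustrates the assimilation idea. The ensemble size, observation value, and error standard deviations below are hypothetical, not the SNOW17 configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal ensemble Kalman filter analysis step for a scalar state
# (e.g. snow water equivalent, SWE); all numbers are illustrative.
ens = rng.normal(100.0, 10.0, size=50)     # forecast SWE ensemble (mm)
obs, obs_err = 90.0, 5.0                   # observation and its std dev

P = ens.var(ddof=1)                        # forecast error variance
K = P / (P + obs_err ** 2)                 # Kalman gain
# Perturbed-observation update: each member assimilates a noisy copy
obs_pert = obs + rng.normal(0.0, obs_err, size=ens.size)
analysis = ens + K * (obs_pert - ens)
print(round(analysis.mean(), 1))  # pulled from ~100 toward the observation
```

In the coupled setup, the DREAM-identified error variances would set `obs_err` and the forcing perturbations that generate the forecast ensemble, and the analysed SWE would feed the SAC-SMA streamflow step.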
Estimating lifetime and age-conditional probabilities of developing cancer.
Wun, L M; Merrill, R M; Feuer, E J
1998-01-01
Lifetime and age-conditional risk estimates of developing cancer provide a useful summary to the public of the current cancer risk and how this risk compares with earlier periods and among select subgroups of society. These reported estimates, commonly quoted in the popular press, have the potential to promote early detection efforts, to increase cancer awareness, and to serve as an aid in study planning. However, they can also be easily misunderstood and frightening to the general public. The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute and the American Cancer Society have recently begun including in annual reports lifetime and age-conditional risk estimates of developing cancer. These risk estimates are based on incidence rates that reflect new cases of the cancer in a population free of the cancer. To compute these estimates involves a cancer prevalence adjustment that is computed cross-sectionally from current incidence and mortality data derived within a multiple decrement life table. This paper presents a detailed description of the methodology for deriving lifetime and age-conditional risk estimates of developing cancer. In addition, an extension is made which, using a triple decrement life table, adjusts for a surgical procedure that removes individuals from the risk of developing a given cancer. Two important results which provide insights into the basic methodology are included in the discussion. First, the lifetime risk estimate does not depend on the cancer prevalence adjustment, although this is not the case for age-conditional risk estimates. Second, the lifetime risk estimate is always smaller when it is corrected for a surgical procedure that takes people out of the risk pool to develop the cancer. The methodology is applied to corpus and uterus NOS cancers, with a correction made for hysterectomy prevalence. The interpretation and limitations of risk estimates are also discussed.
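The core multiple-decrement calculation can be sketched numerically: staying in the risk pool requires remaining both alive and cancer-free, so lifetime risk accumulates incidence weighted by that joint survival probability. All rates and age bands below are hypothetical, not SEER data:

```python
# Toy multiple-decrement life-table calculation of the lifetime risk of
# developing a cancer (all rates hypothetical, per person-year, by
# 20-year age band from birth to age 100).
incidence = [0.0001, 0.0005, 0.002, 0.006, 0.010]   # cancer incidence
mortality = [0.001, 0.002, 0.008, 0.030, 0.120]     # other-cause mortality
band = 20  # years per age band

alive_and_free = 1.0    # probability of being alive and cancer-free
lifetime_risk = 0.0
for inc, mort in zip(incidence, mortality):
    for _ in range(band):  # one-year steps within the band
        lifetime_risk += alive_and_free * inc
        alive_and_free *= (1.0 - inc - mort)
print(round(lifetime_risk, 3))
```

A triple-decrement version, as in the paper's hysterectomy correction, would simply add a third exit rate to the survival update, removing individuals who can no longer develop the cancer.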
Some analytical models to estimate maternal age at birth using age-specific fertility rates.
Pandey, A; Suchindran, C M
1995-01-01
"A class of analytical models to study the distribution of maternal age at different births from the data on age-specific fertility rates has been presented. Deriving the distributions and means of maternal age at birth of any specific order, final parity and at next-to-last birth, we have extended the approach to estimate parity progression ratios and the ultimate parity distribution of women in the population.... We illustrate computations of various components of the model expressions with the current fertility experiences of the United States for 1970." excerpt
Improving rotorcraft survivability to RPG attack using inverse methods
NASA Astrophysics Data System (ADS)
Anderson, D.; Thomson, D. G.
2009-09-01
This paper presents the results of a preliminary investigation of optimal threat evasion strategies for improving the survivability of rotorcraft under attack by rocket-propelled grenades (RPGs). The basis of this approach is the application of inverse simulation techniques, pioneered for the simulation of aggressive helicopter manoeuvres, to the RPG engagement problem. In this research, improvements in survivability are achieved by computing effective evasive manoeuvres. The first step in this process uses the missile approach warning system (MAWS) camera on the aircraft to provide angular information on the threat. The RPG trajectory and impact point are then estimated. For the current flight state, an appropriate evasion response is selected and then realised via inverse simulation of the platform dynamics. Results are presented for several representative engagements, showing the efficacy of the approach.
Relating magnetic reconnection to coronal heating
Longcope, D. W.; Tarr, L. A.
2015-01-01
It is clear that the solar corona is being heated and that coronal magnetic fields undergo reconnection all the time. Here we attempt to show that these two facts are related, i.e. that coronal reconnection generates heat. This attempt must address the fact that topological change of field lines does not automatically generate heat. We present one case of flux emergence where we have measured the rate of coronal magnetic reconnection and the rate of energy dissipation in the corona. The ratio of these two is a current comparable to the amount of current expected to flow along the boundary separating the emerged flux from the pre-existing flux overlying it. We can generalize this relation to the overall corona in the quiet Sun or in active regions. Doing so yields estimates for the contribution to coronal heating from magnetic reconnection. These estimated rates are comparable to the amount required to maintain the corona at its observed temperature. PMID:25897089
Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.
Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E
2017-04-04
The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.
Wang, Jianren; Xu, Junkai; Shull, Peter B
2018-03-01
Vertical jump height is widely used for assessing motor development, functional ability, and motor capacity. Traditional methods for estimating vertical jump height rely on force plates or optical marker-based motion capture systems, limiting assessment to people with access to specialized laboratories. Current wearable designs need to be attached to the skin or strapped to an appendage, which can potentially be uncomfortable and inconvenient to use. This paper presents a novel algorithm for estimating vertical jump height based on foot-worn inertial sensors. Twenty healthy subjects performed countermovement jumping trials; maximum jump height was determined via inertial sensors located above the toe and under the heel and was compared with the gold-standard maximum jump height estimation via optical marker-based motion capture. Average vertical jump height estimation errors from inertial sensing at the toe and heel were -2.2±2.1 cm and -0.4±3.8 cm, respectively. Vertical jump height estimation with the presented algorithm via inertial sensing showed excellent reliability at the toe (ICC(2,1) = 0.98) and heel (ICC(2,1) = 0.97). There was no significant bias in the inertial sensing at the toe, but proportional bias (b = 1.22) and fixed bias (a = -10.23 cm) were detected in inertial sensing at the heel. These results indicate that the presented algorithm could be applied to foot-worn inertial sensors to estimate maximum jump height, enabling assessment outside of traditional laboratory settings; to avoid bias errors, the toe may be a more suitable location for inertial sensor placement than the heel.
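As an illustrative sketch (not the paper's algorithm, which is not given in the abstract), a common way foot-worn inertial sensors estimate jump height is from the ballistic flight time between take-off and landing, h = g·t²/8. The timestamps below are invented.

```python
# Hedged sketch: jump height from flight time, a standard approach for
# foot-worn IMUs. Take-off/landing times would normally be detected from
# the inertial signal; here they are given directly for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_takeoff, t_landing):
    """Ballistic estimate of jump height in metres: h = g * t_flight^2 / 8."""
    t_flight = t_landing - t_takeoff
    return G * t_flight ** 2 / 8.0

# Example: 0.5 s of flight corresponds to about 0.31 m of jump height
h = jump_height_from_flight_time(10.00, 10.50)
```

The divide-by-eight form follows from the symmetric rise and fall phases each lasting t_flight/2.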
Garske, Tini; Van Kerkhove, Maria D; Yactayo, Sergio; Ronveaux, Olivier; Lewis, Rosamund F; Staples, J Erin; Perea, William; Ferguson, Neil M
2014-05-01
Yellow fever is a vector-borne disease affecting humans and non-human primates in tropical areas of Africa and South America. While eradication is not feasible due to the wildlife reservoir, large scale vaccination activities in Africa during the 1940s to 1960s reduced yellow fever incidence for several decades. However, after a period of low vaccination coverage, yellow fever has resurged in the continent. Since 2006 there has been substantial funding for large preventive mass vaccination campaigns in the most affected countries in Africa to curb the rising burden of disease and control future outbreaks. Contemporary estimates of the yellow fever disease burden are lacking, and the present study aimed to update the previous estimates on the basis of more recent yellow fever occurrence data and improved estimation methods. Generalised linear regression models were fitted to a dataset of the locations of yellow fever outbreaks within the last 25 years to estimate the probability of outbreak reports across the endemic zone. Environmental variables and indicators for the surveillance quality in the affected countries were used as covariates. By comparing probabilities of outbreak reports estimated in the regression with the force of infection estimated for a limited set of locations for which serological surveys were available, the detection probability per case and the force of infection were estimated across the endemic zone. The yellow fever burden in Africa was estimated for the year 2013 as 130,000 (95% CI 51,000-380,000) cases with fever and jaundice or haemorrhage including 78,000 (95% CI 19,000-180,000) deaths, taking into account the current level of vaccination coverage. The impact of the recent mass vaccination campaigns was assessed by evaluating the difference between the estimates obtained for the current vaccination coverage and for a hypothetical scenario excluding these vaccination campaigns. 
Vaccination campaigns were estimated to have reduced the number of cases and deaths by 27% (95% CI 22%-31%) across the region, achieving up to an 82% reduction in countries targeted by these campaigns. A limitation of our study is the high level of uncertainty in our estimates arising from the sparseness of data available from both surveillance and serological surveys. With the estimation method presented here, spatial estimates of transmission intensity can be combined with vaccination coverage levels to evaluate the impact of past or proposed vaccination campaigns, thereby helping to allocate resources efficiently for yellow fever control. This method has been used by the Global Alliance for Vaccines and Immunization (GAVI Alliance) to estimate the potential impact of future vaccination campaigns.
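The core regression step described above, estimating the probability of an outbreak report from covariates with a generalised linear model, can be sketched as a logistic regression fitted by iteratively reweighted least squares. The covariate, effect sizes, and data below are invented; the actual study used multiple environmental and surveillance-quality covariates.

```python
import numpy as np

# Toy sketch of a logistic GLM for outbreak-report probability,
# fitted by IRLS (Newton's method). All data are synthetic.

def fit_logistic(X, y, n_iter=25):
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # current fitted probabilities
        W = p * (1.0 - p)                        # IRLS weights
        H = X.T @ (W[:, None] * X)               # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(0)
rainfall = rng.uniform(0.0, 1.0, 500)            # hypothetical covariate
true_logit = -1.0 + 3.0 * rainfall
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
beta = fit_logistic(rainfall[:, None], y)        # recovers roughly (-1, 3)
```

In the study itself, the fitted report probabilities were then compared with serology-derived force-of-infection estimates to infer per-case detection probability, a step not shown here.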
Garske, Tini; Van Kerkhove, Maria D.; Yactayo, Sergio; Ronveaux, Olivier; Lewis, Rosamund F.; Staples, J. Erin; Perea, William; Ferguson, Neil M.
2014-01-01
Background Yellow fever is a vector-borne disease affecting humans and non-human primates in tropical areas of Africa and South America. While eradication is not feasible due to the wildlife reservoir, large scale vaccination activities in Africa during the 1940s to 1960s reduced yellow fever incidence for several decades. However, after a period of low vaccination coverage, yellow fever has resurged in the continent. Since 2006 there has been substantial funding for large preventive mass vaccination campaigns in the most affected countries in Africa to curb the rising burden of disease and control future outbreaks. Contemporary estimates of the yellow fever disease burden are lacking, and the present study aimed to update the previous estimates on the basis of more recent yellow fever occurrence data and improved estimation methods. Methods and Findings Generalised linear regression models were fitted to a dataset of the locations of yellow fever outbreaks within the last 25 years to estimate the probability of outbreak reports across the endemic zone. Environmental variables and indicators for the surveillance quality in the affected countries were used as covariates. By comparing probabilities of outbreak reports estimated in the regression with the force of infection estimated for a limited set of locations for which serological surveys were available, the detection probability per case and the force of infection were estimated across the endemic zone. The yellow fever burden in Africa was estimated for the year 2013 as 130,000 (95% CI 51,000–380,000) cases with fever and jaundice or haemorrhage including 78,000 (95% CI 19,000–180,000) deaths, taking into account the current level of vaccination coverage. The impact of the recent mass vaccination campaigns was assessed by evaluating the difference between the estimates obtained for the current vaccination coverage and for a hypothetical scenario excluding these vaccination campaigns. 
Vaccination campaigns were estimated to have reduced the number of cases and deaths by 27% (95% CI 22%–31%) across the region, achieving up to an 82% reduction in countries targeted by these campaigns. A limitation of our study is the high level of uncertainty in our estimates arising from the sparseness of data available from both surveillance and serological surveys. Conclusions With the estimation method presented here, spatial estimates of transmission intensity can be combined with vaccination coverage levels to evaluate the impact of past or proposed vaccination campaigns, thereby helping to allocate resources efficiently for yellow fever control. This method has been used by the Global Alliance for Vaccines and Immunization (GAVI Alliance) to estimate the potential impact of future vaccination campaigns. PMID:24800812
Quantifying UK emissions of carbon dioxide using an integrative measurement strategy
NASA Astrophysics Data System (ADS)
Gonzi, S.; Palmer, P.
2015-12-01
The main objective of the Greenhouse gAs Uk and Global Emissions (GAUGE) programme is to quantify the magnitude and uncertainty of CO2, CH4 and N2O fluxes from the UK. GAUGE builds on the tall tower network established by the UK Government to estimate fluxes from England, Northern Ireland, Scotland, and Wales. The GAUGE measurement programme includes two additional tall tower sites (one in North Yorkshire and one downwind of London); regular measurements of CO2 and CH4 isotopologues; instrumentation installed on a ferry that travels daily along the eastern coast of the UK from Scotland to Belgium; a research aircraft that has been deployed on a campaign basis; and a high-density network over East Anglia that is primarily focused on the agricultural sector. We have also included satellite observations from the Japanese Greenhouse gases Observing SATellite (GOSAT) through ongoing activities within the UK National Centre for Earth Observation. In this presentation, we will present new CO2 flux estimates for the UK inferred from GAUGE measurements using a nested, high-resolution (25 km) version of the GEOS-Chem atmospheric transport model and an ensemble Kalman filter. We will present our current best estimate for CO2 fluxes and a preliminary assessment of the efficacy of individual GAUGE data sources to spatially resolve CO2 flux estimates over the UK. We will also discuss how flux estimates inferred from the different models used within GAUGE can help to assess the role of transport model error and to determine an ensemble CO2 flux estimate for the UK.
A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes
2013-01-01
Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680
Two-Dimensional Analysis of Conical Pulsed Inductive Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, A. K.; Polzin, K. A.; Emsellem, G. D.
2011-01-01
A model of the maximum achievable exhaust velocity of a conical theta pinch pulsed inductive thruster is presented. A semi-empirical formula relating coil inductance to both axial and radial current sheet location is developed and incorporated into a circuit model coupled to a momentum equation to evaluate the effect of coil geometry on the axial directed kinetic energy of the exhaust. Inductance measurements as a function of the axial and radial displacement of simulated current sheets from four coils of different geometries are fit to a two-dimensional expression to allow the calculation of the Lorentz force at any relevant averaged current sheet location. This relation for two-dimensional inductance, along with an estimate of the maximum possible change in gas-dynamic pressure as the current sheet accelerates into downstream propellant, enables the expansion of a one-dimensional circuit model to two dimensions. The results of this two-dimensional model indicate that radial current sheet motion acts to rapidly decouple the current sheet from the driving coil, leading to losses in axial kinetic energy 10-50 times larger than estimates of the maximum available energy in the compressed propellant. The decreased available energy in the compressed propellant as compared to that of other inductive plasma propulsion concepts suggests that a recovery in the directed axial kinetic energy of the exhaust is unlikely, and that radial compression of the current sheet leads to a loss in exhaust velocity for the operating conditions considered here.
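The inductance-fitting step described above can be sketched as a two-dimensional polynomial least-squares fit: once L(z, r) is known in closed form, its partial derivatives give the axial and radial Lorentz force terms (~½ I² ∂L/∂x). The quadratic basis and coefficient values below are illustrative, not those of the paper.

```python
import numpy as np

# Sketch: fit measured inductance L at a grid of axial (z) and radial (r)
# current-sheet displacements to a 2-D quadratic surface. Synthetic
# "measurements" are generated from a known surface for illustration.

def fit_inductance_surface(z, r, L):
    # Basis for L(z, r) = c0 + c1*z + c2*r + c3*z*r + c4*z^2 + c5*r^2
    A = np.column_stack([np.ones_like(z), z, r, z * r, z ** 2, r ** 2])
    coef, *_ = np.linalg.lstsq(A, L, rcond=None)
    return coef

zg, rg = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
z, r = zg.ravel(), rg.ravel()
L_true = 2.0 - 1.5 * z - 0.8 * r + 0.4 * z * r   # hypothetical surface, uH
coef = fit_inductance_surface(z, r, L_true)       # recovers the coefficients
```

With the fitted coefficients, ∂L/∂z and ∂L/∂r are simple analytic expressions, which is what allows the one-dimensional circuit model to be extended to two dimensions.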
Finding and estimating chemical property data for environmental assessment.
Boethling, Robert S; Howard, Philip H; Meylan, William M
2004-10-01
The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method stays effective as it converges to a minimum error and yields a finer estimate. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method in estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. Finally, the results are compared with those of the existing ADALINE algorithm to illustrate its effectiveness.
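The standard (unmodified) ADALINE harmonic estimator that the paper builds on can be sketched in a few lines: the signal is modelled as a sum of sine/cosine terms at known harmonic frequencies, and an LMS-type rule adapts the weights, which converge to the harmonic amplitudes. The sampling rate, harmonic content, and step size below are invented.

```python
import numpy as np

# Sketch of conventional ADALINE harmonic estimation with a normalized
# LMS update. The weights converge to the sine/cosine amplitudes of the
# tracked harmonics. (The paper's modification is not detailed in the
# abstract and is not reproduced here.)

def adaline_harmonics(signal, t, f0, orders, eta=0.5):
    w = np.zeros(2 * len(orders))
    for k, y in enumerate(signal):
        # Regressor: sin and cos at each tracked harmonic of f0
        x = np.concatenate([
            [np.sin(2 * np.pi * n * f0 * t[k]) for n in orders],
            [np.cos(2 * np.pi * n * f0 * t[k]) for n in orders]])
        e = y - w @ x                    # prediction error
        w += eta * e * x / (x @ x)       # normalized LMS weight update
    return w

fs, f0 = 5000.0, 50.0
t = np.arange(0.0, 0.4, 1.0 / fs)
# 50 Hz fundamental (amplitude 1.0) plus a 5th harmonic (amplitude 0.2)
y = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 5 * f0 * t)
w = adaline_harmonics(y, t, f0, orders=[1, 5])
# w[0] ~ 1.0 (fundamental) and w[1] ~ 0.2 (5th harmonic) after convergence
```

Once the 5th-harmonic amplitude is known, a selective-harmonic-elimination PWM stage can target exactly that component, which is the elimination step the abstract describes.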
Model-based cartilage thickness measurement in the submillimeter range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Streekstra, G. J.; Strackee, S. D.; Maas, M.
2007-09-15
Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol.
In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical sections. We present a method that yields virtually unbiased thickness estimates of cartilage layers in the submillimeter range. The good agreement of thickness estimates from CT images with estimates from anatomical sections is promising for clinical application of the method in cartilage integrity staging of the wrist and the ankle.
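The PSF-induced bias that motivates the model-based method can be reproduced numerically: blur a thin rectangular "sheet" profile with a Gaussian PSF and read off the apparent thickness from the second-derivative zero crossings. The grid spacing, sheet thickness, and PSF width below are illustrative (chosen within the ranges quoted in the abstract).

```python
import numpy as np

# Numerical sketch of the zero-crossing bias: for a submillimeter sheet,
# the apparent thickness from second-derivative zero crossings differs
# noticeably from the true thickness.

dx = 0.005                       # mm per sample
x = np.arange(-5.0, 5.0, dx)
true_thickness = 0.5             # mm (submillimeter sheet)
fwhm = 0.6                       # mm, PSF of a high-resolution CT mode
sigma = fwhm / 2.355             # Gaussian sigma from FWHM

profile = (np.abs(x) < true_thickness / 2).astype(float)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
blurred = np.convolve(profile, kernel / kernel.sum(), mode="same")

d2 = np.gradient(np.gradient(blurred, dx), dx)
# Locate sign changes of the second derivative near the sheet only
core = np.abs(x) < 1.5
d2c, xc = d2[core], x[core]
idx = np.where(d2c[:-1] * d2c[1:] < 0)[0]
apparent = xc[idx].max() - xc[idx].min()
bias_mm = apparent - true_thickness   # clearly nonzero for this sheet
```

For a sheet much thicker than the PSF FWHM the two edges decouple and the same procedure becomes nearly unbiased, matching the >1 mm behaviour described in the abstract.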
Assessing the Importance of Prior Biospheric Fluxes on Inverse Model Estimates of CO2
NASA Astrophysics Data System (ADS)
Philip, S.; Johnson, M. S.; Potter, C. S.; Genovese, V. B.
2017-12-01
Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric sources/sinks. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, resulting in models having significant differences in the quantification of biospheric CO2 fluxes. Currently, atmospheric chemical transport models (CTM) and global climate models (GCM) use multiple different biospheric CO2 flux models resulting in large differences in simulating the global carbon cycle. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission was designed to allow for the improved understanding of the processes involved in the exchange of carbon between terrestrial ecosystems and the atmosphere, and therefore allowing for more accurate assessment of the seasonal/inter-annual variability of CO2. OCO-2 provides much-needed CO2 observations in data-limited regions allowing for the evaluation of model simulations of greenhouse gases (GHG) and facilitating global/regional estimates of "top-down" CO2 fluxes. We conduct a 4-D Variational (4D-Var) data assimilation with the GEOS-Chem (Goddard Earth Observing System-Chemistry) CTM using 1) OCO-2 land nadir and land glint retrievals and 2) global in situ surface flask observations to constrain biospheric CO2 fluxes. We apply different state-of-the-science year-specific CO2 flux models (e.g., NASA-CASA (NASA-Carnegie Ames Stanford Approach), CASA-GFED (Global Fire Emissions Database), Simple Biosphere Model version 4 (SiB-4), and LPJ (Lund-Potsdam-Jena)) to assess the impact of "a priori" flux predictions to "a posteriori" estimates. We will present the "top-down" CO2 flux estimates for the year 2015 using OCO-2 and in situ observations, and a complete indirect evaluation of the a priori and a posteriori flux estimates using independent in situ observations. 
We will also present our assessment of the variability of "top-down" CO2 flux estimates when using different biospheric CO2 flux models. This work will improve our understanding of the global carbon cycle, specifically, how OCO-2 observations can be used to constrain biospheric CO2 flux model estimates.
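Why the a priori flux model matters can be shown with a minimal linear-Gaussian analogue of the variational inversion: the posterior minimizes J(x) = (x−xb)ᵀB⁻¹(x−xb) + (Hx−y)ᵀR⁻¹(Hx−y), blending prior and observations. The two-flux system, observation operator, and covariances below are invented; real 4D-Var uses a CTM for H and iterative minimization.

```python
import numpy as np

# Toy illustration: two different priors ("flux models"), identical
# observations, different posteriors. All numbers are synthetic.

def posterior_flux(xb, B, H, y, R):
    """Analytic minimizer of the variational cost function."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

H = np.array([[1.0, 1.0]])        # one observation sees the sum of 2 fluxes
y = np.array([3.0])               # observed total
R = np.eye(1) * 0.1               # observation-error covariance
B = np.eye(2) * 0.5               # prior-error covariance

xa1 = posterior_flux(np.array([1.0, 1.0]), B, H, y, R)  # prior model A
xa2 = posterior_flux(np.array([2.0, 0.0]), B, H, y, R)  # prior model B
# Same data, same total constraint, but the spatial split of the
# posterior fluxes is inherited from the prior.
```

This is exactly the under-determination the abstract probes: when observations constrain only aggregates, the a posteriori partitioning of fluxes depends strongly on the biospheric prior.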
Online estimation of lithium-ion battery capacity using sparse Bayesian learning
NASA Astrophysics Data System (ADS)
Hu, Chao; Jain, Gaurav; Schmidt, Craig; Strief, Carrie; Sullivan, Melani
2015-09-01
Lithium-ion (Li-ion) rechargeable batteries are used as one of the major energy storage components for implantable medical devices. Reliability of Li-ion batteries used in these devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, patients and physicians. To ensure a Li-ion battery operates reliably, it is important to develop health monitoring techniques that accurately estimate the capacity of the battery throughout its life-time. This paper presents a sparse Bayesian learning method that utilizes the charge voltage and current measurements to estimate the capacity of a Li-ion battery used in an implantable medical device. Relevance Vector Machine (RVM) is employed as a probabilistic kernel regression method to learn the complex dependency of the battery capacity on the characteristic features that are extracted from the charge voltage and current measurements. Owing to the sparsity property of RVM, the proposed method generates a reduced-scale regression model that consumes only a small fraction of the CPU time required by a full-scale model, which makes online capacity estimation computationally efficient. 10 years' continuous cycling data and post-explant cycling data obtained from Li-ion prismatic cells are used to verify the performance of the proposed method.
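The regression step above, mapping features of the charge voltage/current curves to capacity through a kernel model, can be sketched with kernel ridge regression standing in for the paper's Relevance Vector Machine (RVM is Bayesian and sparse; this stand-in is neither, but it shows the kernel-regression form). The feature, data, and kernel width below are invented.

```python
import numpy as np

# Sketch: capacity predicted from a charge-curve feature via RBF kernel
# regression. Kernel ridge regression is used as a simple stand-in for
# the RVM described in the paper. All data are synthetic.

def rbf_kernel(A, C, gamma=0.5):
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=0.5):
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(1)
charge_time = rng.uniform(1.0, 3.0, (40, 1))    # hypothetical feature, hours
capacity = 1.2 - 0.25 * charge_time[:, 0] + 0.01 * rng.standard_normal(40)
predict = fit_krr(charge_time, capacity)
est = predict(np.array([[2.0]]))[0]             # close to 1.2 - 0.25*2 = 0.70
```

An RVM would additionally place a sparsity-inducing prior over the coefficients, pruning most kernel terms; that pruning is what makes the paper's online capacity estimation cheap at run time.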
Cost, Energy, and Environmental Impact of Automated Electric Taxi Fleets in Manhattan.
Bauer, Gordon S; Greenblatt, Jeffery B; Gerke, Brian F
2018-04-17
Shared automated electric vehicles (SAEVs) hold great promise for improving transportation access in urban centers while drastically reducing transportation-related energy consumption and air pollution. Using taxi-trip data from New York City, we develop an agent-based model to predict the battery range and charging infrastructure requirements of a fleet of SAEVs operating on Manhattan Island. We also develop a model to estimate the cost and environmental impact of providing service and perform extensive sensitivity analysis to test the robustness of our predictions. We estimate that costs will be lowest with a battery range of 50-90 mi, with either 66 chargers per square mile, rated at 11 kW or 44 chargers per square mile, rated at 22 kW. We estimate that the cost of service provided by such an SAEV fleet will be $0.29-$0.61 per revenue mile, an order of magnitude lower than the cost of service of present-day Manhattan taxis and $0.05-$0.08/mi lower than that of an automated fleet composed of any currently available hybrid or internal combustion engine vehicle (ICEV). We estimate that such an SAEV fleet drawing power from the current NYC power grid would reduce GHG emissions by 73% and energy consumption by 58% compared to an automated fleet of ICEVs.
Airborne vs. Inventory Measurements of Methane Emissions in the Alberta Upstream Oil and Gas Sector
NASA Astrophysics Data System (ADS)
Johnson, M.; Tyner, D. R.; Conley, S.; Schwietzke, S.; Zavala Araiza, D.
2017-12-01
Airborne measurements of methane emission rates were directly compared with detailed, spatially-resolved inventory estimates for different oil and gas production regions in Alberta, Canada. For a 50 km × 50 km region near Red Deer, Alberta, containing 2700 older gas and oil wells, measured methane emissions were 16 times higher than reported venting and flaring volumes would suggest, but consistent with regional inventory estimates (which include estimates for additional emissions from pneumatic equipment, fugitive leaks, gas migration, etc.). This result highlights how 94% of methane emissions in this region are attributable to sources missing from current reporting requirements. The comparison was even more stark for a 60 km × 60 km region near Lloydminster, dominated by 2300 cold heavy oil production with sand (CHOPS) sites. Aircraft measured methane emissions in this region were 5 times larger than that expected from reported venting and flaring volumes, and more than 3 times greater than regional inventory estimates. This significant discrepancy is most likely attributable to underreported intentional venting of casing gas at CHOPS sites, which is generally estimated based on the product of the measured produced oil volume and an assumed gas-to-oil ratio (GOR). GOR values at CHOPS sites can be difficult to measure and can be notoriously variable in time. Considering the implications for other CHOPS sites across Alberta only, the present results suggest that total reported venting in Alberta is low by a factor of 2.4 (range of 2.0-2.7) and total methane emissions from the conventional oil and gas sector (excluding mined oil sands) are likely at least 25-41% greater than currently estimated. 
This work reveals critical gaps in current measurement and reporting, while strongly supporting the need for urgent mitigation efforts in the context of newly proposed federal methane regulations in Canada, and separate regulatory development efforts in the province of Alberta.
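The casing-gas accounting identified above as the weak link is simple arithmetic, which is also why a biased gas-to-oil ratio propagates directly into the inventory. The volumes below are invented for illustration (they do not come from the study).

```python
# Hedged sketch of the GOR-based venting estimate and the resulting
# underreporting factor. All numbers are hypothetical.

def reported_venting(oil_volume_m3, assumed_gor):
    """Reported vented gas volume: produced oil times assumed gas-to-oil ratio."""
    return oil_volume_m3 * assumed_gor   # m^3 of gas

oil = 1000.0                         # m^3 of oil produced in the period
reported = reported_venting(oil, assumed_gor=5.0)     # 5000 m^3 reported
measured = 12000.0                   # m^3, e.g. inferred from aircraft data
underreporting_factor = measured / reported           # 2.4 for these numbers
```

Because venting scales linearly with the assumed GOR, a measured-to-reported ratio of 2.4 is equivalent to saying the effective GOR was underestimated by the same factor.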
Spatial-altitudinal and temporal variation of Degree Day Factors (DDFs) in the Upper Indus Basin
NASA Astrophysics Data System (ADS)
Khan, Asif; Attaullah, Haleema; Masud, Tabinda; Khan, Mujahid
2017-04-01
Melt contribution from snow and ice in the Hindukush-Karakoram-Himalayan (HKH) region could account for more than 80% of annual river flows in the Upper Indus Basin (UIB). Increase or decrease in precipitation, energy input and glacier reserves can significantly affect water resources of this region. Therefore, improved hydrological modelling and accurate future water resources prediction are vital for food production and hydro-power generation for millions of people living downstream, and are intensively needed. In mountain regions Degree Day Factors (DDFs) significantly vary on a spatial and altitudinal basis, and are primary inputs of temperature-based hydrological modelling. However, previous studies have used different DDFs as calibration parameters without due attention to the physical meaning of the values employed, and these estimates possess significant variability and uncertainty. This study provides estimates of DDFs for various altitudinal zones in the UIB at sub-basin level. Snow, clean ice and ice with debris cover bear different melt rates (or DDFs), therefore areally-averaged DDFs based on snow, clean and debris-covered ice classes in various altitudinal zones have been estimated for all sub-basins of the UIB. Zonal estimates of DDFs in the current study are significantly different from earlier adopted DDFs, hence suggest a revisit of previous hydrological modelling studies. DDFs presented in the current study have been validated by using the Snowmelt Runoff Model (SRM) in various sub-basins with good Nash-Sutcliffe coefficients (R2 > 0.85) and low volumetric errors (Dv < 10%). DDFs and methods provided in the current study can be used in future improved hydrological modelling and to provide accurate predictions of future river flow changes. The methodology used for estimation of DDFs is robust, and can be adopted to produce such estimates in other regions of the world, particularly in the other nearby HKH basins.
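The degree-day relation underlying SRM-type models is compact enough to show directly: daily melt depth is the DDF times the positive part of the temperature excess over a critical temperature, with separate DDFs for snow, clean ice, and debris-covered ice. The DDF values and temperatures below are illustrative, not the study's zonal estimates.

```python
import numpy as np

# Minimal sketch of the degree-day melt relation used in temperature-
# index models such as SRM: M = DDF * max(T - T_crit, 0).

def degree_day_melt(temps_c, ddf_mm_per_degday, t_crit=0.0):
    """Daily melt depth (mm water equivalent) from daily mean temperatures."""
    return ddf_mm_per_degday * np.maximum(np.asarray(temps_c) - t_crit, 0.0)

temps = [-2.0, 1.0, 4.0, 6.5]                 # daily mean temperatures, deg C
melt_snow = degree_day_melt(temps, ddf_mm_per_degday=4.0)  # hypothetical snow DDF
melt_ice = degree_day_melt(temps, ddf_mm_per_degday=7.0)   # hypothetical ice DDF
# Days below the critical temperature contribute no melt; for the same
# temperatures, ice melts faster than snow because its DDF is larger.
```

Areally-averaged DDFs per altitudinal zone, as estimated in the study, would replace the single scalar DDF here with a surface-class-weighted value per zone.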
Kendall, Matthew S; Poti, Matt; Karnauskas, Kristopher B
2016-04-01
Changes in larval import, export, and self-seeding will affect the resilience of coral reef ecosystems. Climate change will alter the ocean currents that transport larvae and also increase sea surface temperatures (SST), hastening development, and shortening larval durations. Here, we use transport simulations to estimate future larval connectivity due to: (1) physical transport of larvae from altered circulation alone, and (2) the combined effects of altered currents plus physiological response to warming. Virtual larvae from islands throughout Micronesia were moved according to present-day and future ocean circulation models. The Hybrid Coordinate Ocean Model (HYCOM) spanning 2004-2012 represented present-day currents. For future currents, we altered HYCOM using analysis from the National Center for Atmospheric Research Community Earth System Model, version 1-Biogeochemistry, Representative Concentration Pathway 8.5 experiment. Based on the NCAR model, regional SST is estimated to rise 2.74 °C, which corresponds to a ~17% decline in larval duration for some taxa. This reduction was the basis for a separate set of simulations. Results predict an increase in self-seeding in 100 years such that 62-76% of islands experienced increased self-seeding, there was an average domainwide increase of ~1-3 percentage points in self-seeding, and increases of up to 25 percentage points for several individual islands. When changed currents alone were considered, approximately half of all island pairs (i.e., no better than chance) experienced decreased connectivity, but when reduced pelagic larval duration (PLD) was added as an effect, ~65% of connections were weakened. Orientation of archipelagos relative to currents determined the directional bias in connectivity changes. There was no universal relationship between climate change and connectivity applicable to all taxa and settings. 
Islands that presently export large numbers of larvae but that also maintain or enhance this role into the future should be the focus of conservation measures that promote long-term resilience of larval supply. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
MUCHFUSS: Status and Highlights
NASA Astrophysics Data System (ADS)
Geier, S.; Kupfer, T.; Barlow, B.; Schaffenroth, V.; Fürst, F.; Heuser, C.; Ziegerer, E.; Heber, U.; Marsh, T.; Maxted, P.; Östensen, R.; O'Toole, S.; Gänsicke, B.; Napiwotzki, R.
2014-04-01
The MUCHFUSS project aims at finding sdBs with massive compact companions. Here we report on the current status of our spectroscopic and photometric follow-up campaigns and present some highlight results. We derive orbital solutions of seven new sdB binaries and estimate the fraction of close substellar companions to sdBs. Finally, we present an ultracompact sdB+WD binary as possible progenitor of a thermonuclear supernova and connect it to the only known hypervelocity subdwarf star, which might be the donor remnant of such an event.
A Self-Tuning Kalman Filter for Autonomous Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Truong, Son H.
1998-01-01
Most navigation systems currently operated by NASA are ground-based, and require extensive support to produce accurate results. Recently developed systems that use Kalman Filter and Global Positioning System (GPS) data for orbit determination greatly reduce dependency on ground support, and have the potential to provide significant economies for NASA spacecraft navigation. Current techniques of Kalman filtering, however, still rely on manual tuning from analysts, and cannot help in optimizing autonomy without compromising accuracy and performance. This paper presents an approach to produce a high-accuracy autonomous navigation system fully integrated with the flight system. The resulting system performs real-time state estimation by using an Extended Kalman Filter (EKF) implemented with a high-fidelity state dynamics model, as does the GPS Enhanced Orbit Determination Experiment (GEODE) system developed by the NASA Goddard Space Flight Center. Augmented to the EKF is a sophisticated neural-fuzzy system, which combines the explicit knowledge representation of fuzzy logic with the learning power of neural networks. The fuzzy-neural system provides most of the self-tuning capability and helps the navigation system recover from estimation errors. The core requirement is a method of state estimation that handles uncertainties robustly, is capable of identifying estimation problems, is flexible enough to make decisions and adjustments to recover from these problems, and is compact enough to run on flight hardware. The resulting system can be extended to support geosynchronous spacecraft and high-eccentricity orbits. Mathematical methodology, systems and operations concepts, and implementation of a system prototype are presented in this paper. Results from using the prototype to evaluate the optimal control algorithms implemented are discussed. 
Test data and major control issues (e.g., how to define specific roles for fuzzy logic to support the self-learning capability) are also discussed. In addition, architecture of a complete end-to-end candidate flight system that provides navigation with highly autonomous control using data from GPS is presented.
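The EKF core described in this record can be illustrated with a compact sketch. This is not GEODE: the dynamics are a generic random-acceleration position/velocity model, the beacon location and all noise levels are invented, and a nonlinear pseudorange stands in for GPS measurements; the point is only the predict/relinearize/update cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, q, r = 1.0, 0.01, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])                       # constant-velocity dynamics
Qm = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
bx, by = -30.0, 50.0                                        # hypothetical beacon position

def h(x):
    """Nonlinear pseudorange from the state position to the beacon."""
    return np.hypot(x[0] - bx, by)

# Simulate truth and noisy range measurements
x_true, truth, zs = np.array([0.0, 1.0]), [], []
for _ in range(200):
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Qm)
    truth.append(x_true.copy())
    zs.append(h(x_true) + rng.normal(0.0, np.sqrt(r)))

# EKF: relinearize the measurement about each predicted state
x, P = np.array([5.0, 0.0]), np.eye(2) * 10.0
est = []
for z in zs:
    x, P = F @ x, F @ P @ F.T + Qm                          # predict
    H = np.array([[(x[0] - bx) / np.hypot(x[0] - bx, by), 0.0]])  # measurement Jacobian
    S = float(H @ P @ H.T) + r                              # innovation variance
    K = (P @ H.T) / S                                       # 2x1 Kalman gain
    x = x + (K * (z - h(x))).ravel()                        # update state
    P = (np.eye(2) - K @ H) @ P                             # update covariance
    est.append(x.copy())
```

Because the measurement is nonlinear in position, the Jacobian H must be recomputed at every predicted state; that relinearization is what distinguishes the EKF from a plain Kalman filter.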
Measuring the critical band for speech.
Healy, Eric W; Bacon, Sid P
2006-02-01
The current experiments were designed to measure the frequency resolution employed by listeners during the perception of everyday sentences. Speech bands having nearly vertical filter slopes and narrow bandwidths were sharply partitioned into various numbers of equal log- or ERBN-width subbands. The temporal envelope from each partition was used to amplitude modulate a corresponding band of low-noise noise, and the modulated carriers were combined and presented to normal-hearing listeners. Intelligibility increased and reached asymptote as the number of partitions increased. In the mid- and high-frequency regions of the speech spectrum, the partition bandwidth corresponding to asymptotic performance matched current estimates of psychophysical tuning across a number of conditions. These results indicate that, in these regions, the critical band for speech matches the critical band measured using traditional psychoacoustic methods and nonspeech stimuli. However, in the low-frequency region, partition bandwidths at asymptote were somewhat narrower than would be predicted based upon psychophysical tuning. It is concluded that, overall, current estimates of psychophysical tuning represent reasonably well the ability of listeners to extract spectral detail from running speech.
Mixture distributions of wind speed in the UAE
NASA Astrophysics Data System (ADS)
Shin, J.; Ouarda, T.; Lee, T. S.
2013-12-01
Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal or kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of these studies. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: the Normal, Gamma, Weibull, and Extreme Value type-one (EV-1) distributions. Three parameter estimation methods -- the Expectation-Maximization algorithm, the Least Squares method, and the Meta-Heuristic Maximum Likelihood (MHML) method -- were employed to estimate the parameters of the mixture distributions.
In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for sample wind data, the adjusted coefficient of determination, Bayesian Information Criterion (BIC) and Chi-squared statistics were computed. Results indicate that MHML presents the best performance of parameter estimation for the used mixture distributions. In most of the employed 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes present the kurtotic statistical characteristic. Particularly, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
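As a minimal sketch of why mixtures help with bimodal wind regimes, a two-component Weibull-Weibull mixture can be fit by direct maximum likelihood. This is plain Nelder-Mead optimization, not the MHML method used in the study, and the wind speeds here are synthetic:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
# Synthetic bimodal "wind speed" sample: two Weibull regimes (illustrative)
v = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=3.0, size=600, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=9.0, size=400, random_state=rng),
])

def nll(theta):
    """Negative log-likelihood of a two-component Weibull mixture."""
    w = 1.0 / (1.0 + np.exp(-theta[0]))          # mixing weight mapped into (0, 1)
    k1, s1, k2, s2 = np.exp(theta[1:])           # shapes/scales kept positive
    pdf = (w * stats.weibull_min.pdf(v, k1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(v, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

res = optimize.minimize(
    nll,
    x0=[0.0, np.log(2.0), np.log(2.0), np.log(2.0), np.log(8.0)],
    method="Nelder-Mead",
    options={"maxiter": 5000},
)
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))          # estimated mixing weight
```

Comparing the mixture's negative log-likelihood against a single fitted Weibull on the same sample shows the gain from the mixture directly.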
Forty and 80 GHz technology assessment and forecast including executive summary
NASA Technical Reports Server (NTRS)
Mazur, D. G.; Mackey, R. J., Jr.; Tanner, S. G.; Altman, F. J.; Nicholas, J. J., Jr.; Duchaine, K. A.
1976-01-01
The results of a survey to determine current demand, and to forecast growth in demand, for use of the 40 and 80 GHz bands during the 1980-2000 time period are given. The current state of the art is presented, as are the technology requirements of current and projected services. Potential developments were identified, and a forecast is made. The impacts of atmospheric attenuation in the 40 and 80 GHz bands were estimated both with and without diversity. Three services for the 1980-2000 time period -- interactive television, high-quality three-stereo-pair audio, and 30 MB data -- are given with system requirements and up- and down-link calculations.
Making the Case for Reusable Booster Systems: The Operations Perspective
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2012-01-01
Presentation to the Aeronautics and Space Engineering Board, National Research Council, Reusable Booster System: Review and Assessment Committee. Addresses: the criteria and assumptions used in the formulation of current RBS plans; the methodologies used in the current cost estimates for RBS; the modeling methodology used to frame the business case for an RBS capability, including the data used in the analysis, the robustness of the models if new data become available, and the impact of previously unavailable unclassified government data to be supplied by the USAF; and the technical maturity of key elements critical to RBS implementation and the ability of current technology development plans to meet technical readiness milestones.
Axioms of adaptivity
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions such as the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, and the use of equivalent error estimators. Four axioms alone guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is needed neither to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson (2007). Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
NASA Astrophysics Data System (ADS)
Steer, Ian
2017-01-01
Redshift-independent extragalactic distance estimates are used by researchers to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006, the NASA/IPAC Extragalactic Database (NED) began providing users with a comprehensive tabulation of the redshift-independent extragalactic distance estimates published in the astronomical literature since 1980. A decade later, this compendium of distances (NED-D) surpassed 100,000 estimates for 28,000 galaxies, as reported in our recent journal article (Steer et al. 2016). Here, we are pleased to report that NED-D has surpassed 166,000 distance estimates for 77,000 galaxies. Visualizations of the growth in data and of the statistical distributions of the most used distance indicators will be presented, along with an overview of the new data responsible for the most recent growth. We conclude with an outline of NED’s current plans to facilitate extragalactic research further by making greater use of redshift-independent distances. Additional information about other extensive updates to NED is presented at this meeting by Mazzarella et al. (2017). NED is operated by, and this research is funded by, the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Gas composition sensing using carbon nanotube arrays
NASA Technical Reports Server (NTRS)
Li, Jing (Inventor); Meyyappan, Meyya (Inventor)
2008-01-01
A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, ..., N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
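The final comparison step, matching an estimated breakdown voltage against known component thresholds, can be sketched as a nearest-threshold lookup. The threshold values below are invented placeholders, not measured data; real values depend on electrode geometry, gap distance, and pressure:

```python
# Hypothetical breakdown-threshold library (volts) -- illustrative values only
KNOWN_THRESHOLDS = {"NH3": 420.0, "NO2": 380.0, "CO2": 460.0, "air": 350.0}

def identify_component(v_breakdown, known=KNOWN_THRESHOLDS, tol=15.0):
    """Match an estimated pulse-discharge breakdown voltage to the nearest
    candidate gas; return None if no candidate lies within the tolerance."""
    best = min(known, key=lambda gas: abs(known[gas] - v_breakdown))
    return best if abs(known[best] - v_breakdown) <= tol else None
```

Repeating the measurement at higher pulse voltages and calling the same lookup on each newly found threshold mirrors the patent's procedure for identifying a second component.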
Veneman, Jolien B; Saetnan, Eli R; Clare, Amanda J; Newbold, Charles J
2016-12-01
The body of peer-reviewed papers on enteric methane mitigation strategies in ruminants is rapidly growing and allows for better estimation of the true effect of each strategy through the use of meta-analysis methods. Here we present the development of an online database of measured methane mitigation strategies called MitiGate, currently comprising 412 papers. The database is accessible through a user-friendly online interface that allows, on the one hand, data extraction at various levels of aggregation and, on the other, data upload for submission to the database, allowing future refinement and updating of mitigation estimates as well as easy access to relevant data for integration into modelling efforts or policy recommendations. To demonstrate and verify the usefulness of the MitiGate database, those studies in which methane emissions were expressed per unit of intake (293 papers, yielding 845 treatment comparisons) were used in a meta-analysis. The meta-analysis of the current database estimated the effect size of each of the mitigation strategies as well as the associated variance and a measure of heterogeneity. Currently, under-representation of certain strategies, geographic regions, and long-term studies is the main limitation in providing an accurate quantitative estimate of the mitigation potential of each strategy under varying animal production systems. We have therefore implemented a facility for researchers to upload meta-data of their peer-reviewed research through a simple input form, in the hope that MitiGate will grow into a fully inclusive resource for those wishing to model methane mitigation strategies in ruminants. Copyright © 2016 Elsevier B.V. All rights reserved.
Current and future avoidable cost of smoking--estimates for Sweden 2007.
Bolin, Kristian; Borgman, Benny; Gip, Christina; Wilson, Koo
2011-11-01
To estimate current and future avoidable smoking-attributable costs in Sweden for the year 2007. Disease-specific smoking-attributable proportions were calculated for Swedish smoking patterns and applied to estimate costs for smoking-related diseases based on data from public registers. Avoidable future effects of smoking were calculated employing a Markov simulation model. The estimated total cost in 2007 was USD 1.6 billion, or USD 181 per capita. Healthcare (direct) costs accounted for 30% of the total cost. The number of deaths was 97 per 100,000 inhabitants (79 in 2001); the number of years of potential life lost was 1,227 per 100,000 inhabitants (1,012 in 2001); and the number of years of potential productive life lost was 226 per 100,000 inhabitants (185 in 2001). Avoidable future lifetime costs per 100,000 inhabitants amounted to USD 19 million in healthcare costs and 14,000 years of potential life lost, corresponding to a present value of USD 158 million. The total avoidable cost of current smoking amounted to USD 16 billion. In spite of declining smoking-prevalence rates during the last 30 years, smoking-attributable deaths increased between 2001 and 2007. The number of life years lost per death decreased somewhat, indicating that the age distribution of those dying shifted further towards older age. Simulations indicate that smoking cessation among young smokers yields considerably more benefit each year than smoking cessation among older smokers. The health benefits that accrued in 2007 as a result of declining smoking prevalence since 1980 correspond to more than the total cost of smoking in that year. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Comparison of in-situ and optical current-meter estimates of rip-current circulation
NASA Astrophysics Data System (ADS)
Moulton, M.; Chickadel, C. C.; Elgar, S.; Raubenheimer, B.
2016-12-01
Rip currents are fast, narrow, seaward flows that transport material from the shoreline to the shelf. Spatially and temporally complex rip-current circulation patterns are difficult to resolve with in-situ instrument arrays. Here, high-spatial-resolution estimates of rip-current circulation from remotely sensed optical images of the sea surface are compared with in-situ estimates of currents in and near channels (1- to 2-m deep and 30-m wide) dredged across the surf zone. Alongshore flows are estimated using the optical current-meter method, and cross-shore flows are derived with the assumption of continuity. The observations span a range of wave conditions, tidal elevations, and flow patterns, including meandering alongshore currents near and in the channel, and 0.5 m/s alongshore flows converging at a 0.8 m/s rip jet in the channel. In addition, the remotely sensed velocities are used to investigate features of the spatially complex flow patterns not resolved by the spatially sparse in-situ sensors, including the spatial extent of feeder current zones and the width, alongshore position, and cross-shore extent of rip-current jets. Funded by ASD(R&E) and NSF.
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. The Confidence Level of the CH-47F APB cost estimate was approved on April... [Flattened SAR tables: PAUC and APUC changes from the initial SAR baseline to the current SAR baseline (TY $M), broken out by category (Econ, Qty, Sch, Eng, Est, Oth, Spt); recovered PAUC figures include 10.316, -0.491, 3.003, -0.164, 2.273, 7.378.]
Regression model estimation of early season crop proportions: North Dakota, some preliminary results
NASA Technical Reports Server (NTRS)
Lin, K. K. (Principal Investigator)
1982-01-01
To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both the county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published county-level planted acreage by 9.1 percent, and predicted the 1980 published crop-reporting-district-level planted acreage almost exactly, overpredicting it by just 0.92 percent.
Sequential estimation and satellite data assimilation in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Ghil, M.
1986-01-01
The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.
Gulf stream velocity structure through combined inversion of hydrographic and acoustic Doppler data
NASA Technical Reports Server (NTRS)
Pierce, S. D.
1986-01-01
Near-surface velocities from an acoustic Doppler instrument are used in conjunction with CTD/O2 data to produce estimates of the absolute flow field off Cape Hatteras. The data set consists of two transects across the Gulf Stream made by the R/V Endeavor cruise EN88 in August 1982. An inverse procedure is applied which makes use of both the acoustic Doppler data and property conservation constraints. Velocity sections at approximately 73° W and 71° W are presented with formal errors of 1-2 cm/s. The net Gulf Stream transports are estimated to be 116 ± 2 Sv across the south leg and 161 ± 4 Sv across the north. A Deep Western Boundary Current transport of 4 ± 1 Sv is also estimated. While these values do not necessarily represent the mean, they are accurate estimates of the synoptic flow field in the region.
Estimating index of refraction from polarimetric hyperspectral imaging measurements.
Martin, Jacob A; Gross, Kevin C
2016-08-08
Current material identification techniques rely on estimating reflectivity or emissivity, which vary with viewing angle. As off-nadir remote sensing platforms become increasingly prevalent, techniques robust to changing viewing geometries are desired. A technique leveraging polarimetric hyperspectral imaging (P-HSI) to estimate the complex index of refraction, N̂(ν̃), an inherent material property, is presented. The imaginary component of N̂(ν̃) is modeled using a small number of "knot" points and interpolation at in-between frequencies ν̃. The real component is derived via the Kramers-Kronig relationship. P-HSI measurements of blackbody radiation scattered off of a smooth quartz window show that N̂(ν̃) can be retrieved to within 0.08 RMS error for 875 cm⁻¹ ≤ ν̃ ≤ 1250 cm⁻¹. P-HSI emission measurements of a heated smooth Pyrex beaker also enable successful N̂(ν̃) estimates, which are invariant to object temperature.
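The Kramers-Kronig step can be sketched numerically. Here the "measured" imaginary part comes from a single Lorentz-oscillator susceptibility rather than P-HSI data, and the principal-value integral is evaluated with the alternating-point (Maclaurin) rule, which skips the pole by summing only grid points of opposite parity; the recovered real part can be checked against the analytic one:

```python
import numpy as np

# Analytic Lorentz-oscillator susceptibility (illustrative parameters)
w0, gamma, wp = 1.0, 0.1, 1.0
w = np.linspace(1e-3, 5.0, 5001)
D = (w0**2 - w**2) ** 2 + (gamma * w) ** 2
im = wp**2 * gamma * w / D                    # "measured" imaginary part
re_exact = wp**2 * (w0**2 - w**2) / D         # analytic real part, for checking

h = w[1] - w[0]

def kk_real(i):
    """Re χ(ω_i) = (2/π) PV ∫ ω' Im χ(ω') / (ω'² - ω_i²) dω',
    with the principal value handled by Maclaurin's alternating-point sum."""
    j = np.arange(len(w))
    mask = (j - i) % 2 == 1                   # opposite-parity points avoid the pole
    return (2 / np.pi) * 2 * h * np.sum(w[mask] * im[mask] / (w[mask] ** 2 - w[i] ** 2))

idx = np.arange(100, len(w) - 100, 50)        # interior frequencies, away from edges
re_kk = np.array([kk_real(i) for i in idx])
```

The remaining error comes from truncating the semi-infinite integral at the grid's upper edge; here the Im χ tail decays fast enough that the reconstruction agrees closely with the analytic real part.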
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
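A scalar two-model IMM conveys the idea: a low-process-noise "stable" model and a high-process-noise "changing" model run in parallel, are mixed through a Markov mode-transition matrix, and are reweighted by their measurement likelihoods. The drifting parameter and all noise values below are illustrative, not pharmacokinetic:

```python
import math
import random

random.seed(3)
# Truth: a parameter that is constant, drifts mid-record, then holds again
T, x, truth = 120, 10.0, []
for t in range(T):
    if 60 <= t < 90:
        x += 0.4                              # the "maneuver" (parameter change)
    truth.append(x)
zs = [v + random.gauss(0.0, 1.0) for v in truth]

Q = [1e-4, 0.25]                              # process noise: stable vs changing model
R = 1.0
P_tr = [[0.97, 0.03], [0.03, 0.97]]           # Markov mode-transition probabilities
mu = [0.5, 0.5]                               # mode probabilities
xs, Ps = [zs[0], zs[0]], [1.0, 1.0]
est = []
for z in zs:
    # 1) mix the mode-conditioned estimates
    c = [sum(P_tr[i][j] * mu[i] for i in range(2)) for j in range(2)]
    xm = [sum(P_tr[i][j] * mu[i] * xs[i] for i in range(2)) / c[j] for j in range(2)]
    Pm = [sum(P_tr[i][j] * mu[i] * (Ps[i] + (xs[i] - xm[j]) ** 2) for i in range(2)) / c[j]
          for j in range(2)]
    # 2) mode-matched Kalman filters (random-walk state) + likelihoods
    lik = [0.0, 0.0]
    for j in range(2):
        Pp = Pm[j] + Q[j]                     # predict
        S = Pp + R                            # innovation variance
        K = Pp / S
        r = z - xm[j]
        xs[j] = xm[j] + K * r
        Ps[j] = (1 - K) * Pp
        lik[j] = math.exp(-0.5 * r * r / S) / math.sqrt(2 * math.pi * S)
    # 3) update mode probabilities and combine
    mu = [c[j] * lik[j] for j in range(2)]
    s = sum(mu)
    mu = [m / s for m in mu]
    est.append(sum(mu[j] * xs[j] for j in range(2)))
```

When the parameter drifts, the high-noise model's likelihood rises and the IMM shifts weight to it, which is what lets the filter track abrupt changes that a single tuned filter would smooth over.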
Respondent-Driven Sampling as Markov Chain Monte Carlo
Goel, Sharad; Salganik, Matthew J.
2013-01-01
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381
Respondent-driven sampling as Markov chain Monte Carlo.
Goel, Sharad; Salganik, Matthew J
2009-07-30
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
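The MCMC view of RDS can be sketched on a toy two-community network: a random walk (an idealized recruitment chain) visits nodes with probability proportional to degree, and inverse-degree importance weights undo that bias, in the spirit of the Volz-Heckathorn estimator. The network and prevalence are invented:

```python
import random

random.seed(7)
n = 200
adj = {i: set() for i in range(n)}

def link(a, b):
    adj[a].add(b)
    adj[b].add(a)

for i in range(n):                            # a ring keeps the graph connected
    link(i, (i + 1) % n)
for _ in range(1200):                         # two dense communities...
    link(*random.sample(range(100), 2))
    link(*random.sample(range(100, 200), 2))
for _ in range(10):                           # ...joined by a sparse bottleneck
    link(random.randrange(100), random.randrange(100, 200))

infected = set(range(60)) | set(range(100, 120))   # true prevalence = 0.40

# Random walk visits nodes with stationary probability ∝ degree;
# weight each visit by 1/degree to correct that sampling bias.
v, num, den = 0, 0.0, 0.0
for _ in range(20000):
    v = random.choice(sorted(adj[v]))
    num += (v in infected) / len(adj[v])
    den += 1.0 / len(adj[v])
rds_estimate = num / den
```

Shrinking the number of bridge edges slows mixing across the bottleneck and inflates the variance of the estimate, which is the paper's central point about community structure.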
NASA Astrophysics Data System (ADS)
Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross
2014-05-01
This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one-kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much finer than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product; specifically, we explore how these uncertainties vary spatially, temporally, and with climatic regime. Preliminary results indicate scope for improving hydrological model performance through the use of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.
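A minimal mass-conserving disaggregation conveys the core idea: scale the daily gridded total by the sub-daily pattern of a co-located gauge (or radar). The numbers below are invented, and the study's actual error modelling is far richer:

```python
import numpy as np

# Daily gridded total for one cell (mm) and a co-located gauge's hourly
# totals for the same day -- both made up for illustration
daily_total = 12.0
hourly_gauge = np.array([0, 0, 0, 0.2, 0.8, 1.4, 2.0, 1.2, 0.6, 0.2, 0, 0,
                         0, 0, 0.4, 1.0, 0.8, 0.4, 0, 0, 0, 0, 0, 0])

weights = hourly_gauge / hourly_gauge.sum()   # sub-daily temporal pattern
hourly_est = daily_total * weights            # mass-conserving hourly estimates
```

Because the weights sum to one, the hourly estimates always re-aggregate exactly to the daily gridded total, so the disaggregation introduces timing error but no mass error.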
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the shortcomings of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority weaknesses within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. 
To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.
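One reason single-spacecraft parametric models misprice constellations is batch production. A sketch of one standard ingredient, a cost-improvement ("learning") curve, follows; the function name and rates are illustrative, not drawn from any specific cost model:

```python
import math

def recurring_cost(t1, n_units, learning=0.9):
    """Total recurring cost of n identical satellites under a cost-improvement
    ("learning") curve: unit i costs t1 * i**b, where b = log2(learning),
    so each doubling of quantity multiplies unit cost by the learning rate."""
    b = math.log2(learning)
    return sum(t1 * i**b for i in range(1, n_units + 1))
```

With learning = 1.0 this reduces to n times the first-unit cost; realistic DSM estimates must also capture the design-iteration, integration-and-test, and mission-operations effects the survey identifies as weak spots in current parametric tools.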
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications.
To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.
Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. 
R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Weller, J.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.
Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-09-20
In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.
NASA Astrophysics Data System (ADS)
Sassi, M. G.; Hoitink, A. J. F.; Vermeulen, B.; Hidayat
2011-06-01
Horizontal acoustic Doppler current profilers (H-ADCPs) can be employed to estimate river discharge based on water level measurements and flow velocity array data across a river transect. A new method is presented that accounts for the dip in velocity near the water surface, which is caused by sidewall effects that decrease with the width to depth ratio of a channel. A boundary layer model is introduced to convert single-depth velocity data from the H-ADCP to specific discharge. The parameters of the model include the local roughness length and a dip correction factor, which accounts for the sidewall effects. A regression model is employed to translate specific discharge to total discharge. The method was tested in the River Mahakam, representing a large river of complex bathymetry, where part of the flow is intrinsically three-dimensional and discharge rates exceed 8000 m³ s⁻¹. Results from five moving boat ADCP campaigns covering separate semidiurnal tidal cycles are presented, three of which are used for calibration purposes, whereas the remaining two served for validation of the method. The dip correction factor showed a significant correlation with distance to the wall and bears a strong relation to secondary currents. The sidewall effects appeared to remain relatively constant throughout the tidal cycles under study. Bed roughness length is estimated at periods of maximum velocity, showing more variation at subtidal than at intratidal time scales. Intratidal variations were particularly obvious during bidirectional flow conditions, which occurred only during conditions of low river discharge. The new method was shown to outperform the widely used index velocity method by systematically reducing the relative error in the discharge estimates.
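The core conversion from a single-depth H-ADCP velocity to specific discharge can be illustrated with a law-of-the-wall profile. The sketch below is a minimal version of that idea only: it omits the paper's dip correction for sidewall effects, and the roughness length z0 and the sample numbers are assumptions, not values from the study.

```python
import math

def depth_avg_from_point(u_z, z, h, z0):
    """Depth-averaged velocity from a single-depth measurement u(z),
    assuming a logarithmic (law-of-the-wall) profile
        u(z) = (u*/kappa) * ln(z / z0).
    For z0 << h, the depth average of ln(z/z0) over the water column
    is approximately ln(h/z0) - 1."""
    kappa = 0.41                      # von Karman constant
    u_star = kappa * u_z / math.log(z / z0)
    return (u_star / kappa) * (math.log(h / z0) - 1.0)

# e.g. H-ADCP samples u = 1.2 m/s at z = 2 m above the bed in h = 10 m of water
u_bar = depth_avg_from_point(u_z=1.2, z=2.0, h=10.0, z0=0.001)
q = u_bar * 10.0                      # specific discharge, m^2/s per unit width
```

Integrating such per-cell values of q across the transect, via the regression model, would then give total discharge.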
Prasad, G V R
2009-11-01
This paper presents a brief review of recent advances in the classification of mammals at higher levels using fossils and molecular clocks. It also discusses latest fossil discoveries from the Cretaceous - Eocene (66-55 m.y.) rocks of India and their relevance to our current understanding of placental mammal origins and diversifications.
NASA Technical Reports Server (NTRS)
Hopkins, Randy
2008-01-01
This slide presentation reviews the mission concept for the proposed Xenia mission. The mission's ground rules and assumptions for the mission analysis, attitude and orbit control, propulsion, avionics, power, and the thermal controls are reviewed, partially to determine the appropriate launch vehicle that will be used. A current design plan for the mission is shown assuming 6 GRB detectors and estimates for structures are reviewed.
Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz; Mark W. Schwartz
2005-01-01
We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 x CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...
Jerome L. Clutter; William R. Harms; Graham H. Brister; John W. Reney
1984-01-01
Equations and tables are presented for estimating total and merchantable volumes and weights of loblolly pine planted on prepared sites in the Lower Atlantic Coastal Plain. The equation system can be used to predict current and projected yields in cubic feet and in green and dry weights.
Leptogenesis from heavy right-handed neutrinos in CPT violating backgrounds
NASA Astrophysics Data System (ADS)
Bossingham, Thomas; Mavromatos, Nick E.; Sarkar, Sarben
2018-02-01
We discuss leptogenesis in a model with heavy right-handed Majorana neutrinos propagating in a constant but otherwise generic CPT-violating axial time-like background (motivated by string theory). At temperatures much higher than the temperature of the electroweak phase transition, we solve approximately, but analytically (using Padé approximants), the corresponding Boltzmann equations, which describe the generation of lepton asymmetry from the tree-level decays of heavy neutrinos into Standard Model leptons. At such temperatures these leptons are effectively massless. The current work completes in a rigorous way a preliminary treatment of the same system, by some of the present authors. In this earlier work, lepton asymmetry was crudely estimated considering the decay of a right-handed neutrino at rest. Our present analysis includes thermal momentum modes for the heavy neutrino and this leads to a total lepton asymmetry which is bigger by a factor of two as compared to the previous estimate. Nevertheless, our current and preliminary results for the freezeout are found to be in agreement (within a ~12.5% uncertainty). Our analysis depends on a novel use of Padé approximants to solve the Boltzmann equations and may be more widely useful in cosmology.
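A Padé approximant replaces a truncated Taylor series with a rational function that is typically accurate over a much wider range, which is what makes the technique attractive for semi-analytic solutions of Boltzmann equations. A minimal sketch of constructing a [2/2] approximant from Taylor coefficients (the exp(x) example is purely illustrative, not the paper's system):

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator coeffs a (length L+1) and denominator coeffs b
    (length M+1, normalized so b[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=0..M} b_j c_{L+k-j} = 0 for k = 1..M, with b_0 = 1
    A = np.array([[c[L + k - j] for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator follows by matching the low-order Taylor coefficients
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

def evaluate(a, b, x):
    return np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

c = [1.0 / factorial(n) for n in range(5)]  # Taylor coefficients of exp(x)
a, b = pade(c, 2, 2)  # [2/2]: (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
```

The same construction applied to the Taylor expansion of an asymmetry integral yields a closed-form rational approximation valid well beyond the radius of the original series.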
Drying of Durum Wheat Pasta and Enriched Pasta: A Review of Modeling Approaches.
Mercier, Samuel; Mondor, Martin; Moresoli, Christine; Villeneuve, Sébastien; Marcos, Bernard
2016-05-18
Models on drying of durum wheat pasta and enriched pasta were reviewed to identify avenues for improvement according to consumer needs, product formulation and processing conditions. This review first summarized the fundamental phenomena of pasta drying: mass transfer, heat transfer, momentum, chemical changes, shrinkage and crack formation. The basic equations of the current models were then presented, along with methods for the estimation of pasta transport and thermodynamic properties. The experimental validation of these models was also presented and highlighted the need for further model validation for drying at high temperatures (>100 °C) and for more accurate estimation of the pasta diffusion and mass transfer coefficients. This review indicates the need for the development of mechanistic models to improve our understanding of the mass and heat transfer mechanisms involved in pasta drying, and to consider the local changes in pasta transport properties and relaxation time for a more accurate description of the moisture transport near glass transition conditions. The ability of current models to describe dried pasta quality according to consumers' expectations, or to predict the impact of incorporating ingredients high in nutritional value on the drying of these enriched pastas, was also discussed.
Water vapour tomography using GPS phase observations: Results from the ESCOMPTE experiment
NASA Astrophysics Data System (ADS)
Nilsson, T.; Gradinarsky, L.; Elgered, G.
2007-10-01
Global Positioning System (GPS) tomography is a technique for estimating the 3-D structure of atmospheric water vapour using data from a dense local network of GPS receivers. Several current methods utilize estimates of slant wet delays between the GPS satellites and the receivers on the ground, which are difficult to obtain with millimetre accuracy from the GPS observations. We present results of applying a new tomographic method to GPS data from the Expérience sur site pour contraindre les modèles de pollution atmosphérique et de transport d'emissions (ESCOMPTE) experiment in southern France. This method does not rely on any slant wet delay estimates; instead, it uses the GPS phase observations directly. We show that the wet refractivity profiles estimated by this method are at the same accuracy level as, or better than, those obtained with other tomographic methods. The results are in agreement with earlier simulations; for example, the profile information is limited above 4 km.
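The inversion at the heart of any GPS tomography scheme is recovering voxel refractivity from line integrals along satellite-receiver rays. A toy least-squares version with three vertical layers (geometry and numbers invented for illustration; the paper's method additionally works on raw phase observations rather than pre-computed slant delays):

```python
import numpy as np

# Toy tomography: recover wet refractivity in 3 vertical layers from
# integrated delays along rays with known path lengths in each layer.
A = np.array([          # path length (km) of each ray in each layer
    [1.0, 1.0, 1.0],    # zenith ray
    [2.0, 2.0, 2.0],    # low-elevation ray
    [1.5, 1.2, 0.0],    # ray leaving the domain below the top layer
    [3.0, 1.0, 0.5],
])
n_true = np.array([60.0, 30.0, 10.0])   # refractivity (mm of delay per km)
d = A @ n_true                          # observed slant wet delays (mm)

# Least-squares inversion of the (overdetermined) ray geometry
n_est, *_ = np.linalg.lstsq(A, d, rcond=None)
```

Real networks add noise, poor ray coverage aloft (hence the limited profile information above 4 km), and regularization, but the linear-inverse-problem structure is the same.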
Entropy-based adaptive attitude estimation
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.
2018-03-01
Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.
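The innovation-based adaptation that such schemes build on estimates the measurement-noise covariance from a window of filter innovations; the eigenvalue floor below is one crude way to enforce the positive definiteness that the entropy-based techniques are designed to guarantee more rigorously. A sketch with invented window contents and dimensions:

```python
import numpy as np

def adapt_R(innovations, H, P):
    """Innovation-based measurement-noise adaptation:
        R_hat = C_nu - H P H^T,
    where C_nu is the sample covariance of the innovation window.
    Eigenvalues are floored at a small positive value so R_hat stays
    positive definite (the failure mode adaptive schemes must avoid)."""
    nu = np.atleast_2d(innovations)
    C = nu.T @ nu / nu.shape[0]            # sample covariance of innovations
    R = C - H @ P @ H.T
    w, V = np.linalg.eigh((R + R.T) / 2.0) # symmetrize, then floor
    return V @ np.diag(np.maximum(w, 1e-9)) @ V.T

rng = np.random.default_rng(0)
innov = rng.standard_normal((500, 2))      # innovations with covariance ~ I
R_hat = adapt_R(innov, H=np.eye(2), P=0.1 * np.eye(2))   # expect roughly 0.9*I
```

Choosing the window size is the other open question this adaptation raises; the relative-entropy criterion in the paper addresses exactly that.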
Remote sensing in Iowa agriculture. [cropland inventory, soils, forestland, and crop diseases
NASA Technical Reports Server (NTRS)
Mahlstede, J. P. (Principal Investigator); Carlson, R. E.
1973-01-01
The author has identified the following significant results. Results include the estimation of forested and crop vegetation acreages using ERTS-1 imagery. The methods used to achieve these estimates still require refinement, but the results appear promising. Practical applications would be directed toward achieving current land use inventories of these natural resources. These data are presently collected by sampling-type surveys. If ERTS-1 observations can yield accurate area estimates, a step forward will have been achieved, provided the cost-benefit relationship proves favorable. Problems still exist in these estimation techniques because of the diversity of the scene observed in the ERTS-1 imagery covering other parts of Iowa, owing to the influence of topography and soils on the adaptability of vegetation to specific areas of the state. The state mosaic produced from ERTS-1 imagery shows these patterns very well. Research directed at acreage estimates is continuing.
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
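The double-penalized estimator modifies the logistic score equation with both Firth's Jeffreys-prior term (via hat-matrix leverages) and a ridge term. A minimal Newton-iteration sketch under perfect separation, where ordinary maximum likelihood diverges; the ridge weight lam and the toy data are assumptions, not values from the paper:

```python
import numpy as np

def double_penalized_logistic(X, y, lam=0.1, tol=1e-8, max_iter=100):
    """Sketch of a double-penalized logistic estimator: Firth's
    bias-reducing penalty (via leverages h) plus a ridge term (lam).
    Modified score: X^T (y - p + h*(0.5 - p)) - 2*lam*beta."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        XtWX = X.T @ (w[:, None] * X)
        # leverages of the weighted hat matrix: h_i = w_i x_i^T (X^T W X)^-1 x_i
        h = w * np.einsum('ij,jk,ik->i', X, np.linalg.inv(XtWX), X)
        score = X.T @ (y - p + h * (0.5 - p)) - 2.0 * lam * beta
        step = np.linalg.solve(XtWX + 2.0 * lam * np.eye(k), score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Perfectly separated data: ordinary ML estimates diverge, this stays finite.
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta = double_penalized_logistic(X, y)
```

With lam = 0 this reduces to plain Firth regression; with the Firth term dropped it reduces to ridge logistic regression.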
Break-up of the Atlantic deep western boundary current into eddies at 8 degrees S.
Dengler, M; Schott, F A; Eden, C; Brandt, P; Fischer, J; Zantopp, R J
2004-12-23
The existence in the ocean of deep western boundary currents, which connect the high-latitude regions where deep water is formed with upwelling regions as part of the global ocean circulation, was postulated more than 40 years ago. These ocean currents have been found adjacent to the continental slopes of all ocean basins, and have core depths between 1,500 and 4,000 m. In the Atlantic Ocean, the deep western boundary current is estimated to carry (10-40) × 10⁶ m³ s⁻¹ of water, transporting North Atlantic Deep Water--from the overflow regions between Greenland and Scotland and from the Labrador Sea--into the South Atlantic and the Antarctic Circumpolar Current. Here we present direct velocity and water mass observations obtained in the period 2000 to 2003, as well as results from a numerical ocean circulation model, showing that the Atlantic deep western boundary current breaks up at 8 degrees S. Southward of this latitude, the transport of North Atlantic Deep Water into the South Atlantic Ocean is accomplished by migrating eddies, rather than by a continuous flow. Our model simulation indicates that the deep western boundary current breaks up into eddies at the present intensity of meridional overturning circulation. For weaker overturning, continuation as a stable, laminar boundary flow seems possible.
State-Dependent Pseudo-Linear Filter for Spacecraft Attitude and Rate Estimation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This paper presents the development and performance of a special algorithm for estimating the attitude and angular rate of a spacecraft. The algorithm is a pseudo-linear Kalman filter, which is an ordinary linear Kalman filter that operates on a linear model whose matrices are current state estimate dependent. The nonlinear rotational dynamics equation of the spacecraft is presented in the state space as a state-dependent linear system. Two types of measurements are considered. One type is a measurement of the quaternion of rotation, which is obtained from a newly introduced star tracker based apparatus. The other type of measurement is that of vectors, which permits the use of a variety of vector measuring sensors like sun sensors and magnetometers. While quaternion measurements are related linearly to the state vector, vector measurements constitute a nonlinear function of the state vector. Therefore, in this paper, a state-dependent linear measurement equation is developed for the vector measurement case. The state-dependent pseudo linear filter is applied to simulated spacecraft rotations and adequate estimates of the spacecraft attitude and rate are obtained for the case of quaternion measurements as well as of vector measurements.
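The pseudo-linear idea — an ordinary linear Kalman filter whose system matrix is re-evaluated at the current state estimate — can be shown in miniature on a scalar toy system. The dynamics x_dot = -x^3 and all noise values below are invented for illustration; the paper's filter operates on full attitude/rate state vectors:

```python
import numpy as np

# Nonlinear dynamics x_dot = -x^3 written in state-dependent linear form:
#   x_{k+1} = A(x) x_k,  with  A(x) = 1 - dt * x^2.
# A standard linear Kalman filter then runs with A re-evaluated at the
# current estimate at every step.
dt, Q, R = 0.01, 1e-5, 0.04
rng = np.random.default_rng(1)

x_true, x_est, P = 2.0, 0.0, 1.0
for _ in range(500):
    x_true = x_true * (1.0 - dt * x_true ** 2)     # truth propagation
    A = 1.0 - dt * x_est ** 2                      # state-dependent A
    x_est, P = A * x_est, A * P * A + Q            # predict
    z = x_true + rng.normal(scale=R ** 0.5)        # noisy measurement
    K = P / (P + R)                                # Kalman gain
    x_est, P = x_est + K * (z - x_est), (1.0 - K) * P
```

The quaternion-measurement case in the paper is the matrix analogue of this loop; the vector-measurement case additionally requires casting the nonlinear measurement function into the same state-dependent linear form.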
Oscillometric Blood Pressure Estimation: Past, Present, and Future.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2015-01-01
The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.
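The conventional maximum amplitude algorithm mentioned above can be sketched in a few lines: MAP is read at the peak of the oscillation envelope during cuff deflation, and SBP/DBP at fixed characteristic ratios on either side of the peak. The ratios (0.55, 0.75) and the synthetic Gaussian envelope are illustrative assumptions; deployed devices use proprietary, empirically tuned values:

```python
import numpy as np

def max_amplitude_bp(cuff_pressure, osc_amplitude, rs=0.55, rd=0.75):
    """Maximum amplitude algorithm: MAP at the largest oscillation;
    SBP/DBP where the envelope crosses the characteristic ratios rs/rd
    of the maximum, on the high- and low-pressure sides respectively."""
    i_max = int(np.argmax(osc_amplitude))
    map_ = cuff_pressure[i_max]
    a_max = osc_amplitude[i_max]
    # Cuff deflates, so pressures descend: SBP side precedes the peak
    sbp_side = np.where(osc_amplitude[:i_max] >= rs * a_max)[0]
    dbp_side = np.where(osc_amplitude[i_max:] >= rd * a_max)[0] + i_max
    return cuff_pressure[sbp_side[0]], map_, cuff_pressure[dbp_side[-1]]

# Synthetic deflation from 180 to 40 mmHg with a Gaussian envelope
p = np.linspace(180.0, 40.0, 141)
amp = 3.0 * np.exp(-((p - 95.0) / 25.0) ** 2)   # envelope peaks near 95 mmHg
sbp, map_, dbp = max_amplitude_bp(p, amp)
```

The learning-based and model-based algorithms surveyed in the review replace these fixed ratios with envelope models or trained estimators.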
How Will Copper Contamination Constrain Future Global Steel Recycling?
Daehn, Katrin E; Cabrera Serrenho, André; Allwood, Julian M
2017-06-06
Copper in steel causes metallurgical problems, but is pervasive in end-of-life scrap and cannot currently be removed commercially once in the melt. Contamination can be managed to an extent by globally trading scrap for use in tolerant applications and dilution with primary iron sources. However, the viability of long-term strategies can only be evaluated with a complete characterization of copper in the global steel system and this is presented in this paper. The copper concentration of flows along the 2008 steel supply chain is estimated from a survey of literature data and compared with estimates of the maximum concentration that can be tolerated in steel products. Estimates of final steel demand and scrap supply by sector are taken from a global stock-saturation model to determine when the amount of copper in the steel cycle will exceed that which can be tolerated. Best estimates show that quantities of copper arising from conventional scrap preparation can be managed in the global steel system until 2050 assuming perfectly coordinated trade and extensive dilution, but this strategy will become increasingly impractical. Technical and policy interventions along the supply chain are presented to close product loops before this global constraint.
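The dilution strategy described above reduces to a simple mass balance, since copper is conserved in the melt. The sketch below uses an illustrative 0.4 wt% scrap stream and a 0.1 wt% product tolerance; both are placeholder numbers, not the paper's estimates:

```python
def blend_copper(scrap_mass, scrap_cu, primary_mass, primary_cu=0.0):
    """Copper concentration (wt%) after diluting scrap with primary iron.
    Pure mass balance: copper cannot be removed once in the melt."""
    total_cu = scrap_mass * scrap_cu + primary_mass * primary_cu
    return total_cu / (scrap_mass + primary_mass)

def primary_needed(scrap_mass, scrap_cu, limit):
    """Primary iron required to bring the melt down to a tolerance limit."""
    return scrap_mass * (scrap_cu / limit - 1.0)

# e.g. 1 t of scrap at 0.4 wt% Cu, product tolerance 0.1 wt%
m = primary_needed(1.0, 0.4, 0.1)   # tonnes of primary iron required
```

As scrap supply grows relative to final demand, the primary-iron denominator shrinks, which is why the paper finds dilution becoming impractical after mid-century.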
The versatility of a truss mounted mobile transporter for in-space construction
NASA Technical Reports Server (NTRS)
Bush, Harold G.; Lake, Mark S.; Watson, Judith J.; Heard, Walter L., Jr.
1988-01-01
The Mobile Transporter (MT) evolution from early erectable structures assembly activities is detailed. The MT operational features which are required to support astronauts performing on-orbit structure construction or spacecraft assembly functions are presented and discussed. Use of the MT to perform a variety of assembly functions is presented. Estimated EVA assembly times for a precision segmented reflector approximately 20 m in diameter are presented. The EVA/MT technique under study for construction of the reflector (and the entire spacecraft) is illustrated. Finally, the current status of development activities and test results involving the MT and Space Station structural assembly are presented.
NASA Astrophysics Data System (ADS)
Gajek, Andrzej
2016-09-01
The article presents a diagnostic monitor for assessing brake efficiency under various road conditions in cars equipped with a brake-pressure sensor in the ESP system. At present, vehicle brake efficiency is checked periodically under test-stand conditions, on the basis of brake-force measurements, or on the road, on the basis of braking deceleration. The presented method allows these periodic stand tests to be complemented by continuous monitoring of the brakes through the on-board diagnostics (OBD) system. The first part of the article presents the theoretical dependence of vehicle deceleration on brake pressure. The influence of vehicle mass, initial braking speed, brake temperature, aerodynamic drag, rolling resistance, engine braking resistance, road-surface condition, and road slope on the deceleration is analysed, along with the manner of determining these parameters. Results of the initial investigation are presented. The article concludes with the strategy for estimating and signalling irregular deceleration values.
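The dependence of deceleration on brake pressure and the resistances listed above can be written as a simple additive model. A sketch with placeholder coefficients; the pressure gain k_brake, drag area, and rolling-resistance values are invented for illustration, not the article's fitted parameters:

```python
def predicted_deceleration(p_brake, k_brake, mass, v, slope=0.0):
    """Hypothetical additive brake model: deceleration (m/s^2) from
    brake-line pressure plus the main driving resistances.
    All coefficients are illustrative placeholders."""
    g, rho, cd_a, f_roll = 9.81, 1.2, 0.7, 0.012
    a_brake = k_brake * p_brake / mass          # pressure-proportional braking
    a_aero = 0.5 * rho * cd_a * v ** 2 / mass   # aerodynamic drag
    a_roll = f_roll * g                         # rolling resistance
    a_slope = g * slope                         # road gradient (sin ~ slope)
    return a_brake + a_aero + a_roll + a_slope

# 50 bar line pressure, 1500 kg car at 20 m/s on level road
a = predicted_deceleration(p_brake=50e5, k_brake=0.001, mass=1500.0, v=20.0)
```

An on-board monitor of this kind would compare the measured deceleration against such a prediction and flag braking events that fall outside the expected band.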
Steiner, Silvan
2018-01-01
The importance of various information sources in decision-making in interactive team sports is debated. While some highlight the role of the perceptual information provided by the current game context, others point to the role of knowledge-based information that athletes have regarding their team environment. Recently, an integrative perspective considering the simultaneous involvement of both of these information sources in decision-making in interactive team sports has been presented. In a theoretical example concerning passing decisions, the simultaneous involvement of perceptual and knowledge-based information has been illustrated. However, no precast method of determining the contribution of these two information sources empirically has been provided. The aim of this article is to bridge this gap and present a statistical approach to estimating the effects of perceptual information and associative knowledge on passing decisions. To this end, a sample dataset of scenario-based passing decisions is analyzed. This article shows how the effects of perceivable team positionings and athletes' knowledge about their fellow team members on passing decisions can be estimated. Ways of transferring this approach to real-world situations and implications for future research using more representative designs are presented.
NASA Astrophysics Data System (ADS)
Gottschalk, P.; Churkina, G.; Wattenbach, M.; Cubasch, U.
2010-12-01
The impact of urban systems on current and future global carbon emissions has been the focus of several studies, and many mitigation options in terms of increasing energy efficiency are discussed. However, apart from their technical mitigation potential, urban systems also have a considerable biogenic potential to mitigate carbon through optimized management of the organic carbon pools of vegetation and soil. Almost 50% of the Berlin city area is covered, or largely covered, with vegetation, which potentially offers various areas for carbon mitigation actions. To assess these mitigation potentials, our first objective is to estimate how large the current vegetation and soil carbon stocks of Berlin are. We use publicly available forest and soil inventories to calculate the soil organic carbon of non-pervious areas and forest standing biomass carbon. This research highlights data gaps and assigns uncertainty ranges to the estimated carbon resources. The second objective is to assess the carbon mitigation potential of Berlin's vegetation and soils using a biogeochemical simulation model. BIOME-BGC simulates the carbon, nitrogen and water fluxes of ecosystems mechanistically. First, its applicability to Berlin forests is tested at selected sites. A spatial application gives an estimate of current net carbon fluxes. The application of such a model allows the sensitivity of key ecosystem processes (e.g. carbon gains through photosynthesis, carbon losses through decomposition) to external drivers to be determined. This information can then be used to optimise forest management in terms of carbon mitigation. Initial results on Berlin's current carbon stocks and their spatial distribution, together with preliminary simulation results, will be presented.
Wells, Ruth; Swaminathan, Vaidy; Sundram, Suresh; Weinberg, Danielle; Bruggemann, Jason; Jacomb, Isabella; Cropley, Vanessa; Lenroot, Rhoshel; Pereira, Avril M; Zalesky, Andrew; Bousman, Chad; Pantelis, Christos; Weickert, Cynthia Shannon; Weickert, Thomas W
2015-01-01
Background: Cognitive heterogeneity among people with schizophrenia has been defined on the basis of premorbid and current intelligence quotient (IQ) estimates. In a relatively large, community cohort, we aimed to independently replicate and extend cognitive subtyping work by determining the extent of symptom severity and functional deficits in each group. Methods: A total of 635 healthy controls and 534 patients with a diagnosis of schizophrenia or schizoaffective disorder were recruited through the Australian Schizophrenia Research Bank. Patients were classified into preserved, deteriorated, and compromised cognitive subgroups on the basis of the Wechsler Test of Adult Reading (a premorbid IQ estimate) and current overall cognitive abilities, using both clinical and empirical (k-means clustering) methods. Additional cognitive, functional, and symptom outcomes were compared among the resulting groups. Results: A total of 157 patients (29%) classified as ‘preserved’ performed within one s.d. of control means in all cognitive domains. Patients classified as ‘deteriorated’ (n=239, 44%) performed more than one s.d. below control means in all cognitive domains except estimated premorbid IQ and current visuospatial abilities. A separate 138 patients (26%), classified as ‘compromised,’ performed more than one s.d. below control means in all cognitive domains and displayed greater impairment than the other groups on symptom and functional measures. Conclusions: In the present study, we independently replicated our previous cognitive classifications of people with schizophrenia. In addition, we extended previous work by demonstrating worse functional outcomes and symptom severity in the compromised group. PMID:27336046
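The empirical (k-means) classification can be sketched as follows. The IQ z-scores and the three well-separated clusters below are simulated for illustration, not ASRB data, and the minimal k-means routine stands in for whatever clustering implementation the study used:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = centroids.copy()
        for j in range(k):
            if np.any(labels == j):        # keep the old centroid if a cluster empties
                new[j] = X[labels == j].mean(axis=0)
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Simulated patients: columns are (premorbid IQ z-score, current IQ z-score)
rng = np.random.default_rng(1)
preserved    = rng.normal([ 0.0,  0.0], 0.3, size=(50, 2))  # both near control mean
deteriorated = rng.normal([ 0.0, -1.5], 0.3, size=(50, 2))  # premorbid intact, current low
compromised  = rng.normal([-1.5, -1.5], 0.3, size=(50, 2))  # both low
X = np.vstack([preserved, deteriorated, compromised])
centroids, labels = kmeans(X, k=3)
```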
Jet noise suppressor nozzle development for augmentor wing jet STOL research aircraft (C-8A Buffalo)
NASA Technical Reports Server (NTRS)
Harkonen, D. L.; Marks, C. C.; Okeefe, J. V.
1974-01-01
Noise and performance test results are presented for a full-scale advanced design rectangular array lobe jet suppressor nozzle (plain wall and corrugated). Flight design and installation considerations are also discussed. Noise data are presented in terms of peak PNLT (perceived noise level, tone corrected) suppression relative to the existing airplane and one-third octave-band spectra. Nozzle performance is presented in terms of velocity coefficient. Estimates of the hot thrust available during emergency (engine out) with the suppressor nozzle installed are compared with the current thrust levels produced by the round convergent nozzles.
Westenbroek, Stephen M.
2006-01-01
Turbulent shear stress in the boundary layer of a natural river system largely controls the deposition and resuspension of sediment, as well as the longevity and effectiveness of granular-material caps used to cover and isolate contaminated sediments. This report documents measurements and calculations made in order to estimate shear stress and shear velocity on the Lower Fox River, Wisconsin. Velocity profiles were generated using an acoustic Doppler current profiler (ADCP) mounted on a moored vessel. This method of data collection yielded 158 velocity profiles on the Lower Fox River between June 2003 and November 2004. Of these profiles, 109 were classified as valid and were used to estimate the bottom shear stress and velocity using log-profile and turbulent kinetic energy methods. Estimated shear stress ranged from 0.09 to 10.8 dynes per square centimeter. Estimated coefficients of friction ranged from 0.001 to 0.025. This report describes both the field and data-analysis methods used to estimate shear-stress parameters for the Lower Fox River. Summaries of the estimated values for bottom shear stress, shear velocity, and coefficient of friction are presented. Confidence intervals about the shear-stress estimates are provided.
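The log-profile method referenced above fits each ADCP velocity profile to the law of the wall, u(z) = (u*/κ) ln(z/z0): the shear velocity u* falls out of the regression slope, and bed shear stress follows as τ = ρ u*². A minimal sketch with synthetic data (the profile values are illustrative, not Fox River measurements):

```python
import numpy as np

KAPPA = 0.41     # von Karman constant
RHO = 1000.0     # water density, kg/m^3

def shear_from_log_profile(z, u):
    """Fit u(z) = (u*/kappa) * ln(z/z0) by linear regression on ln(z);
    return (u_star in m/s, bed shear stress in Pa)."""
    slope, _ = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope
    tau = RHO * u_star**2        # Pa; 1 Pa = 10 dynes per square centimeter
    return u_star, tau

# Synthetic profile with u* = 0.02 m/s and roughness length z0 = 1 mm
z = np.array([0.1, 0.2, 0.4, 0.8, 1.6])    # heights above the bed, m
u = (0.02 / KAPPA) * np.log(z / 0.001)     # ideal log-law velocities
u_star, tau = shear_from_log_profile(z, u)
# u_star ≈ 0.02 m/s, tau ≈ 0.4 Pa = 4 dynes per square centimeter
```

The recovered 4 dynes per square centimeter sits inside the 0.09-10.8 range reported above.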
Westra, Tjalke A; Parouty, Mehraj; Brouwer, Werner B; Beutels, Philippe H; Rogoza, Raina M; Rozenbaum, Mark H; Daemen, Toos; Wilschut, Jan C; Boersma, Cornelis; Postma, Maarten J
2012-05-01
Discounting has long been a matter of controversy in the field of health economic evaluations. How to weigh future health effects has been the subject of ongoing discussion. These discussions are particularly relevant for health care interventions with current costs but future benefits. Different approaches to discounting health effects have been proposed. In this study, we estimated the impact of different approaches to discounting the health benefits of human papillomavirus (HPV) vaccination. An HPV model was used to estimate the impact of different discounting approaches on the present value of health effects. For the constant discounting approaches, we varied the discount rate for health effects from 0% to 4%. Next, the impact of relevant alternative discounting approaches was estimated, including hyperbolic, proportional, stepwise, and time-shifted discounting. The present value of health effects gained through HPV vaccination varied strongly across discount rates and approaches. Applying the current Dutch guidelines resulted in a present value of health effects eight times higher than under the proportional discounting approach and two times higher than under the internationally more common 4% discount rate for health effects. Obviously, such differences translate into large variations in the corresponding incremental cost-effectiveness ratios. The discount rate and approach chosen in an economic evaluation importantly impact the projected value of the health benefits of HPV vaccination. Investigating alternative discounting approaches in health-economic analysis is important, especially for vaccination programs yielding health effects far into the future. Our study underlines the relevance of ongoing discussions on how, and at what rates, to discount. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
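The sensitivity to the discounting approach is easy to see with a stylized benefit stream. This sketch compares constant-rate and a simple hyperbolic weighting; the rates and the 50-year stream of one QALY per year are illustrative assumptions, not the study's HPV model:

```python
def pv_constant(effects, r):
    """Present value of a yearly benefit stream at a constant discount rate r."""
    return sum(e / (1 + r) ** t for t, e in enumerate(effects))

def pv_hyperbolic(effects, h):
    """Present value with a simple hyperbolic weight 1 / (1 + h*t)."""
    return sum(e / (1 + h * t) for t, e in enumerate(effects))

# One QALY gained per year for 50 years (stylized long-run vaccination benefit)
effects = [1.0] * 50
undiscounted = pv_constant(effects, 0.0)   # 50.0
low_rate = pv_constant(effects, 0.015)     # a low constant rate for health effects
common = pv_constant(effects, 0.04)        # internationally common 4% rate
hyper = pv_hyperbolic(effects, 0.04)       # hyperbolic: discounts the far future less
```

Because (1+r)^t grows faster than 1 + r·t, hyperbolic discounting always values distant benefits more than constant discounting at the same nominal rate, which is exactly why the choice matters for interventions with effects far into the future.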
NASA Astrophysics Data System (ADS)
Weidner, E. F.; Weber, T. C.; Mayer, L. A.
2017-12-01
Quantifying methane flux originating from marine seep systems in climatically sensitive regions is of critical importance for current and future climate studies. Yet the methane contribution from these systems has been difficult to estimate, given the broad spatial scale of the ocean and the heterogeneity of seep activity. One such region is the Eastern Siberian Arctic Sea (ESAS), where bubble release into the shallow water column (<40 meters average depth) facilitates transport of methane to the atmosphere without oxidation. Quantifying the current seep methane flux from the ESAS is necessary not only to understand the total ocean methane budget, but also to provide baseline estimates against which future climate-induced changes can be measured. At the 2016 AGU fall meeting, we presented a new acoustic-based flux methodology using a calibrated broadband split-beam echosounder. The broad (14-24 kHz) bandwidth provides a vertical resolution of 10 cm, making possible the identification of single bubbles. After calibration using a 64 mm copper sphere of known backscatter, the acoustic backscatter of individual bubbles is measured and compared to analytical models to estimate bubble radius. Additionally, bubbles are precisely located and traced upward through the water column to estimate rise velocity. The combination of radius and rise velocity allows for gas flux estimation. Here, we follow up with the completed implementation of this methodology applied to the Herald Canyon region of the western ESAS. From the 68 recognized seeps, bubble radii and rise velocities were computed for more than 550 individual bubbles. The range of bubble radii, 1-6 mm, is comparable to that published by other investigators, while the radius-dependent rise velocities are consistent with published models. Methane flux for the Herald Canyon region was estimated by extrapolation from individual seep flux values.
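Once radii are in hand, a per-seep volumetric flux follows from summing bubble volumes over the observation window. This is a deliberately simplified sketch (each bubble counted once, made-up radii, and no pressure or dissolution corrections, which a real flux estimate would need):

```python
import math

def seep_gas_flux(radii_mm, duration_s):
    """Volumetric gas flux (mL/s at in-situ conditions) from the radii of
    bubbles observed over a time window, each bubble counted once."""
    # radius mm -> cm, so each volume is in cm^3 (== mL)
    total_ml = sum((4.0 / 3.0) * math.pi * (r / 10.0) ** 3 for r in radii_mm)
    return total_ml / duration_s

# Hypothetical seep: 100 bubbles with radii spanning the reported 1-6 mm range,
# released over a 60-second observation window
radii = [1 + 5 * (i % 11) / 10.0 for i in range(100)]
flux = seep_gas_flux(radii, 60.0)
```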
Cost of fetal alcohol spectrum disorder diagnosis in Canada.
Popova, Svetlana; Lange, Shannon; Burd, Larry; Chudley, Albert E; Clarren, Sterling K; Rehm, Jürgen
2013-01-01
Fetal Alcohol Spectrum Disorder (FASD) is underdiagnosed in Canada. The diagnosis of FASD is not simple, and the current recommendation is that a comprehensive, multidisciplinary assessment of the individual be performed. The purpose of this study was to estimate the annual cost of FASD diagnosis to Canadian society. The diagnostic process breakdown was based on recommendations from the Fetal Alcohol Spectrum Disorder Canadian Guidelines for Diagnosis. The per-person cost of diagnosis was calculated based on the number of hours (estimated from expert opinion) required by each specialist involved in the diagnostic process. The average rate per hour for each respective specialist was estimated based on hourly costs across Canada. Based on the existing clinical capacity of all FASD multidisciplinary clinics in Canada, obtained from the 2005 and 2011 surveys conducted by the Canada Northwest FASD Research Network, the number of FASD cases diagnosed per year in Canada was estimated. The per-person cost of FASD diagnosis was then applied to the number of cases diagnosed per year in Canada to calculate the overall annual cost. Using the most conservative approach, it was estimated that an FASD evaluation requires 32 to 47 hours for one individual to be screened, referred, admitted, and given an FASD diagnosis, resulting in a total cost of $3,110 to $4,570 per person. The total cost of FASD diagnostic services in Canada ranges from $3.6 to $5.2 million (lower estimate), up to $5.0 to $7.3 million (upper estimate) per year. As a result of using the most conservative approach, the cost of FASD diagnostic services presented in the current study is most likely underestimated. The reasons for this and the limitations of the study are discussed.
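The cost arithmetic is simple enough to reproduce in a few lines. The blended hourly rate and annual case count below are back-calculated illustrations roughly consistent with the abstract's figures, not values taken from the paper:

```python
def diagnosis_cost(hours_low, hours_high, rate_per_hour):
    """Per-person cost band for a multidisciplinary FASD assessment."""
    return hours_low * rate_per_hour, hours_high * rate_per_hour

def national_cost(per_person_low, per_person_high, cases_per_year):
    """Scale the per-person band by the annual diagnostic capacity."""
    return per_person_low * cases_per_year, per_person_high * cases_per_year

# 32-47 hours at an assumed blended rate of $97/hour
low, high = diagnosis_cost(32, 47, 97.0)            # = ($3,104, $4,559) per person
# With an assumed ~1,150 diagnoses per year, the total lands near the
# reported $3.6M-$5.2M lower estimate
total_low, total_high = national_cost(low, high, 1150)
```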
Wind turbines: current status, obstacles, trends and technologies
NASA Astrophysics Data System (ADS)
Konstantinidis, E. I.; Botsaris, P. N.
2016-11-01
Over the last decade, the installation of wind farms has spread rapidly around the world, and wind energy has become a significant factor in promoting sustainable development. The scope of the present study is to indicate the present status of global wind power expansion as well as the current state of the art in wind turbine technology. The RAM (reliability/availability/maintenance) aspects are also examined, and the levelized cost of energy for onshore/offshore electricity production is presented. Negative consequences that accompany the rapid expansion of wind power, such as accidents and environmental effects, are highlighted. In particular, visual impact on the landscape and noise pollution are factors that provoke social reactions. Moreover, the complicated and lengthy permitting process for a wind power plant, the high capital cost of the investment, and the grid instability due to the intermittent nature of wind are also significant obstacles to the development of wind energy production. The current trends in the research and development of onshore and offshore wind power production are analyzed. Finally, the study offers an estimate of where the wind industry is heading in the years to come.
Modeling Flare Hard X-ray Emission from Electrons in Contracting Magnetic Islands
NASA Astrophysics Data System (ADS)
Guidoni, Silvina E.; Allred, Joel C.; Alaoui, Meriem; Holman, Gordon D.; DeVore, C. Richard; Karpen, Judith T.
2016-05-01
The mechanism that accelerates particles to the energies required to produce the observed impulsive hard X-ray emission in solar flares is not well understood. It is generally accepted that this emission is produced by a non-thermal beam of electrons that collides with the ambient ions as the beam propagates from the top of a flare loop to its footpoints. Most current models that investigate this transport assume an injected beam with an initial energy spectrum inferred from observed hard X-ray spectra, usually a power law with a low-energy cutoff. In our previous work (Guidoni et al. 2016), we proposed an analytical method to estimate particle energy gain in contracting, large-scale, 2.5-dimensional magnetic islands, based on a kinetic model by Drake et al. (2010). We applied this method to sunward-moving islands formed high in the corona during fast reconnection in a simulated eruptive flare. The overarching purpose of the present work is to test this proposed acceleration model by estimating the hard X-ray flux resulting from its predicted accelerated-particle distribution functions. To do so, we have coupled our model to a unified computational framework that simulates the propagation of an injected beam as it deposits energy and momentum along its way (Allred et al. 2015). This framework includes the effects of radiative transfer and return currents, necessary to estimate flare emission that can be compared directly to observations. We will present preliminary results of the coupling between these models.
Inventory Data Package for Hanford Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kincaid, Charles T.; Eslinger, Paul W.; Aaberg, Rosanne L.
2006-06-01
This document presents the basis for a compilation of inventory for radioactive contaminants of interest by year for all potentially impactive waste sites on the Hanford Site for which inventory data exist in records or could be reasonably estimated. This document also includes discussions of the historical, current, and reasonably foreseeable (1944 to 2070) future radioactive waste and waste sites; the inventories of radionuclides that may have a potential for environmental impacts; a description of the method(s) for estimating inventories where records are inadequate; a description of the screening method(s) used to select those sites and contaminants that might make a substantial contribution to impacts; a listing of the remedial actions and their completion dates for waste sites; and tables showing the best estimate inventories available for Hanford assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stetzel, KD; Aldrich, LL; Trimboli, MS
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
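As an illustration of the filtering machinery only (not the paper's physics-based reduced-order model), here is an extended Kalman filter on a toy one-state coulomb-counting SOC model with a linearized voltage measurement. The OCV curve, capacity, resistance, and noise covariances are all assumptions:

```python
# Minimal EKF sketch: 1-state SOC model, scalar covariances.
Q_CAP = 3600.0 * 5.0   # cell capacity, A*s (5 Ah)
R0 = 0.01              # ohmic resistance, ohm

def ocv(soc):          # toy open-circuit-voltage curve (assumed linear)
    return 3.0 + 1.2 * soc

def docv(soc):         # its derivative with respect to SOC
    return 1.2

def ekf_step(x, P, i, y, dt, q=1e-7, r=1e-4):
    """One predict/update cycle of an extended Kalman filter."""
    # predict: coulomb counting (linear dynamics, so the state Jacobian is 1)
    x = x - i * dt / Q_CAP
    P = P + q
    # update: linearize the voltage measurement around the predicted state
    H = docv(x)
    y_hat = ocv(x) - R0 * i
    K = P * H / (H * P * H + r)
    x = x + K * (y - y_hat)
    P = (1 - K * H) * P
    return x, P

# Constant 5 A discharge; truth starts at 100% SOC, filter starts wrong at 80%
x_true, x, P = 1.0, 0.8, 0.1
for _ in range(600):
    x_true -= 5.0 * 1.0 / Q_CAP
    y = ocv(x_true) - R0 * 5.0        # noiseless terminal voltage
    x, P = ekf_step(x, P, 5.0, y, 1.0)
# the estimate x converges to x_true from the wrong initial guess
```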
A New Approach for Estimating Entrainment Rate in Cumulus Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu C.; Liu, Y.; Yum, S. S.
2012-02-16
A new approach is presented to estimate entrainment rate in cumulus clouds. The new approach is directly derived from the definition of fractional entrainment rate and relates it to the mixing fraction and the height above cloud base. The results derived from the new approach compare favorably with those obtained with a commonly used approach, and have smaller uncertainty. This new approach has several advantages: it eliminates the need for in-cloud measurements of temperature and water vapor content, which are often problematic in current aircraft observations; it has the potential for straightforwardly connecting the estimation of entrainment rate and the microphysical effects of entrainment-mixing processes; and it also has the potential for developing a remote sensing technique to infer entrainment rate.
Carey, Renee N; Hutchings, Sally J; Rushton, Lesley; Driscoll, Timothy R; Reid, Alison; Glass, Deborah C; Darcey, Ellie; Si, Si; Peters, Susan; Benke, Geza; Fritschi, Lin
2017-04-01
Studies in other countries have generally found approximately 4% of current cancers to be attributable to past occupational exposures. This study aimed to estimate the future burden of cancer resulting from current occupational exposures in Australia. The future excess fraction method was used to estimate the future burden of occupational cancer (2012-2094) among the proportion of the Australian working population exposed to occupational carcinogens in 2012. Calculations were conducted for 19 cancer types and 53 cancer-exposure pairings, assuming historical trends and current patterns continued to 2094. The cohort of 14.6 million Australians of working age in 2012 will develop an estimated 4.8 million cancers during their lifetime, of which 68,500 (1.4%) are attributable to occupational exposure in those exposed in 2012. The majority of these will be lung cancers (n=26,000), leukaemias (n=8000), and malignant mesotheliomas (n=7500). A significant proportion of future cancers will result from occupational exposures. This estimate is lower than previous estimates in the literature; however, it is not directly comparable to past estimates of the occupational cancer burden because they describe different quantities: future cancers in those currently exposed versus current cancers due to past exposures. The results of this study allow us to determine which current occupational exposures are most important, and where to target exposure prevention. Copyright © 2016 Elsevier Ltd. All rights reserved.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
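The minimum-detectable-difference idea can be sketched directly: given the standard error of the difference between two population estimates, the smallest change a two-sided test can reliably detect is (z_{1-α/2} + z_{power})·SE. The survey standard errors below are hypothetical:

```python
from statistics import NormalDist

def mdd(se_diff, alpha=0.05, power=0.8):
    """Minimum detectable difference between two population estimates,
    given the standard error of their difference (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = NormalDist().inv_cdf(power)           # ~0.84
    return (z_a + z_b) * se_diff

# Two surveys that each estimate abundance with SE = 120 animals
se_diff = (120**2 + 120**2) ** 0.5
detectable = mdd(se_diff)
# declines smaller than ~475 animals would be statistically invisible,
# which is why reporting variance alongside point estimates matters
```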
A new evaluation method of electron optical performance of high beam current probe forming systems.
Fujita, Shin; Shimoyama, Hiroshi
2005-10-01
A new numerical simulation method is presented for the electron optical property analysis of probe forming systems with point cathode guns such as cold field emitters and Schottky emitters. It has long been recognized that the gun aberrations are important parameters to be considered, since the intrinsically high brightness of the point cathode gun is reduced by its spherical aberration. The simulation method can evaluate the 'threshold beam current I_th' above which the apparent brightness starts to decrease from the intrinsic value. It is found that the threshold depends on the 'electron gun focal length' as well as on the spherical aberration of the gun. Formulas are presented to estimate the brightness reduction as a function of the beam current. The gun brightness reduction must be included when the probe property (the relation between the beam current I_b and the probe size on the sample, d) of the entire electron optical column is evaluated. Formulas that explicitly take the gun aberrations into account are presented. It is shown that the probe property curve consists of three segments in order of increasing beam current: (i) the constant probe size region; (ii) the brightness limited region, where the probe size increases as d ∝ I_b^(3/8); and (iii) the angular current intensity limited region, in which the beam size increases rapidly as d ∝ I_b^(3/2). Some strategies are suggested to increase the threshold beam current and to extend the effective beam current range of the point cathode gun into the microampere regime.
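The three-segment probe-property curve can be mimicked by adding the three contributions in quadrature, so that each regime dominates in turn as the beam current grows. The coefficients below are arbitrary illustrative values in arbitrary units, not the paper's formulas:

```python
def probe_diameter(i_b, d0=1.0, a=0.5, b=0.01):
    """Illustrative probe-size curve: quadrature sum of a constant source-limited
    term, a brightness-limited term ~ I_b^(3/8), and an angular-current-intensity
    limited term ~ I_b^(3/2). All coefficients are made up."""
    return (d0**2 + (a * i_b**0.375)**2 + (b * i_b**1.5)**2) ** 0.5

# At low current the constant term d0 dominates; at intermediate current the
# 3/8-power term takes over; at high current the 3/2-power term grows fastest.
sizes = [probe_diameter(i) for i in (1e-6, 1.0, 100.0)]
```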
NASA Astrophysics Data System (ADS)
Liu, Xiliang; Lu, Feng; Zhang, Hengcai; Qiu, Peiyuan
2013-06-01
It is a pressing task to estimate real-time travel time on road networks reliably in big cities, even though floating car data has been widely used to reflect real traffic. Currently, floating car data are mainly used to estimate real-time traffic conditions on road segments, and have done little for turn delay estimation. However, turn delays at road intersections contribute significantly to the overall travel time on road networks in modern cities. In this paper, we present a technical framework to calculate turn delays on road networks with floating car data. First, the original floating car data, collected with GPS-equipped taxis, were cleaned and matched to a street map with a distributed system based on Hadoop and MongoDB. Secondly, the refined trajectory data set was distributed among 96 time intervals (from 0:00 to 23:59). All of the intersections where the trajectories passed were connected with the trajectory segments and constituted one experiment sample, while the intersections on arterial streets were specially selected to form another experiment sample. Thirdly, a principal curve-based algorithm was presented to estimate the turn delays at the given intersections. The proposed algorithm not only fits the real traffic conditions statistically, but is also insensitive to the data sparseness and missing data problems that are currently almost inevitable with widely used floating car data collecting technology. We adopted the floating car data collected from March to June 2011 in Beijing, which contain more than 2.6 million trajectories generated by about 20,000 GPS-equipped taxicabs and account for about 600 GB in data volume. The results show that the principal curve-based algorithm takes precedence over traditional methods, such as mean- and median-based approaches, achieving higher estimation accuracy (about 10%-15% better in RMSE) as well as reflecting the changing trend of traffic congestion.
With the estimation results for travel delay at intersections, we analyzed the spatio-temporal distribution of turn delays in three time scenarios (0:00-0:15, 8:15-8:30 and 12:00-12:15). The analysis indicates that, on average, 60% of the travel time of a single trip on Beijing's road network is spent at intersections, and the situation is even worse in the daytime. Although the 400 main intersections account for only 2.7% of all intersections, they account for about 18% of travel time.
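The principal-curve estimator itself is not reproduced here, but the weakness of the mean/median baselines it is compared against is easy to demonstrate: a couple of long-idling outlier taxis drag the mean delay far from typical behavior, while the median stays robust. The free-flow time and samples below are invented:

```python
import random
import statistics

def turn_delay(samples, free_flow_s):
    """Baseline turn-delay estimates (seconds): traversal time minus the
    free-flow traversal time, summarized by mean and median."""
    delays = [t - free_flow_s for t in samples]
    return statistics.mean(delays), statistics.median(delays)

random.seed(7)
# Sparse, skewed traversal times: most vehicles wait ~30 s at the turn,
# but two taxis idle at the curb for several minutes
samples = [35 + random.gauss(0, 5) for _ in range(20)] + [300, 420]
mean_d, median_d = turn_delay(samples, free_flow_s=5.0)
# the mean is dragged up by the two outliers; the median stays near 30 s
```

This outlier sensitivity in sparse samples is one motivation for a robust, curve-based estimator.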
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schofield, J.T.
The field of environmental technology is poised for explosive growth throughout this decade and beyond. Currently, the worldwide market for environmental goods and services is estimated to be over $200 billion according to the Environmental Business Journal (EBJ). The EBJ estimates that US environmental firms have approximately $120 billion in revenues. Additionally, California's environmental industry accounts for 17% of US revenues and 7.5% of worldwide revenues. This amounts to a $20 to $25 billion industry in California. According to a July/August report by the Environmental Export Report, the worldwide environmental industry is estimated to be between $200 and $300 billion, growing at least 6% annually to between $300 and $400 billion by the year 2000. The EBJ also estimates that US exporters have the potential to capture $1 billion a year in Western Europe and about half of that in Japan. California's high environmental standards present unparalleled opportunities for economic growth and job creation. As environmental awareness and standards are raised throughout the world, California is widely recognized to have tremendous economic advantages in capturing these lucrative markets. This industry provides over 180,000 jobs in California. Current estimates suggest that by the year 2000 as much as 3% of the nation's gross national product will be devoted to protecting the environment, and that the industry will exceed $170 billion by 1996.
Semi-Supervised Novelty Detection with Adaptive Eigenbases, and Application to Radio Transients
NASA Technical Reports Server (NTRS)
Thompson, David R.; Majid, Walid A.; Reed, Colorado J.; Wagstaff, Kiri L.
2011-01-01
We present a semi-supervised online method for novelty detection and evaluate its performance for radio astronomy time series data. Our approach uses adaptive eigenbases to combine 1) prior knowledge about uninteresting signals with 2) online estimation of the current data properties to enable highly sensitive and precise detection of novel signals. We apply the method to the problem of detecting fast transient radio anomalies and compare it to current alternative algorithms. Tests based on observations from the Parkes Multibeam Survey show both effective detection of interesting rare events and robustness to known false alarm anomalies.
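The adaptive, online part of the method is beyond a short sketch, but the core eigenbasis idea (score a signal by how poorly the subspace of known, uninteresting signals reconstructs it) looks like this in batch form; the data are synthetic:

```python
import numpy as np

def fit_eigenbasis(X, k):
    """Top-k eigenbasis of mean-centered data (rows = known 'boring' signals)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def novelty_score(x, mu, basis):
    """Reconstruction error: large when x is poorly explained by the basis."""
    coeffs = basis @ (x - mu)
    recon = mu + basis.T @ coeffs
    return float(np.linalg.norm(x - recon))

rng = np.random.default_rng(0)
# Known signals live (approximately) in a 2-D subspace of R^10
W = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ W + 0.01 * rng.normal(size=(200, 10))
mu, basis = fit_eigenbasis(X, k=2)

ordinary = X[0]                                 # well explained by the basis
novel = ordinary + 5.0 * rng.normal(size=10)    # perturbed off the subspace
# the novel signal scores far higher than the ordinary one
```

In the online setting the basis would additionally be updated as new "uninteresting" signals arrive, which is what keeps the detector tracking current data properties.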
NASA Astrophysics Data System (ADS)
Maslovskaya, A. G.; Barabash, T. K.
2018-03-01
The paper presents the results of a fractal and multifractal analysis of polarization switching currents in ferroelectrics under electron irradiation, which allows statistical memory effects in the dynamics of the domain structure to be estimated. A mathematical model of the formation of the electron-beam-induced polarization current in ferroelectrics is suggested, taking into account the fractal nature of the domain structure dynamics. To implement the model, a computational scheme was constructed using a numerical approximation of the fractional differential equation. Features of the electron-beam-induced polarization switching process in ferroelectrics were characterized under variation of the control parameters of the model.
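The abstract does not specify the scheme, but a standard numerical approach for fractional differential equations of this kind is the Grünwald-Letnikov discretization, whose binomial weights encode the power-law memory of the process. A sketch for the fractional relaxation equation D^α y = -λ y with illustrative parameters (this stands in for, and is not, the authors' model):

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), built recursively."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def fractional_relaxation(alpha, lam, y0, h, steps):
    """Implicit GL scheme for D^alpha y = -lam * y: every step sums over the
    full history, which is exactly the 'memory' absent from integer-order ODEs."""
    w = gl_weights(alpha, steps)
    y = [y0]
    denom = 1.0 + lam * h ** alpha
    for n in range(1, steps + 1):
        hist = sum(w[j] * y[n - j] for j in range(1, n + 1))
        y.append(-hist / denom)
    return y

y = fractional_relaxation(alpha=0.8, lam=1.0, y0=1.0, h=0.01, steps=200)
# y decays from 1.0 with a long power-law tail, unlike the exponential alpha=1 case
```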
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, a lower word recognition error rate, and less spectral distortion. PMID:20428253
The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation
French, Michael T.; Fang, Hai
2010-01-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than ten years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. PMID:20071107
Regionalising MUSLE factors for application to a data-scarce catchment
NASA Astrophysics Data System (ADS)
Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani
2018-04-01
The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to (1) regionalise the MUSLE erodibility factors and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated that model outputs are most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha-1 yr-1 were within the range of estimates from previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation, as part of the ongoing development of a model that can estimate soil loss and sediment delivery at broad spatial and temporal scales.
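For reference, the MUSLE (Williams, 1975) replaces the USLE rainfall-erosivity factor with an event runoff term, so event sediment yield follows directly from runoff volume and peak flow. The factor values below are made up for illustration and are not the study's calibrated parameters:

```python
def musle_sediment_yield(runoff_m3, peak_flow_m3s, K, LS, C, P):
    """MUSLE (Williams, 1975): event sediment yield in metric tons.
    runoff_m3: event surface runoff volume (m^3); peak_flow_m3s: peak flow (m^3/s);
    K, LS, C, P: the usual erodibility, slope-length, cover, and practice factors."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

# Illustrative event on a semi-arid hillslope (hypothetical factor values)
y = musle_sediment_yield(runoff_m3=5000, peak_flow_m3s=2.0,
                         K=0.3, LS=1.2, C=0.25, P=1.0)
```

Regionalisation in this framing amounts to deriving K, LS, C, and P per subcatchment from GIS coverages rather than calibrating them pointwise.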
The cost of crime to society: new crime-specific estimates for policy and program evaluation.
McCollister, Kathryn E; French, Michael T; Fang, Hai
2010-04-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Pearlstine, Leonard; Higer, Aaron; Palaseanu, Monica; Fujisaki, Ikuko; Mazzotti, Frank
2007-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a 400-meter grid spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP). The target users are biologists and ecologists examining trophic level responses to hydrodynamic changes in the Everglades.
New perspectives on the Popigai impact structure
NASA Technical Reports Server (NTRS)
Garvin, J. B.; Deino, A. L.
1992-01-01
The record of large-scale cratering on Earth is scant, and the only currently 'proven' 100-km-class impact structure known to have formed within the Cenozoic is Popigai, located in the Siberian Arctic at 71.5 deg N, 111 deg E. Popigai is clearly a multiringed impact basin formed within the crystalline shield rocks (Anabar) and platform sediments of the Siberian taiga, and estimates of the volume of preserved impact melt typically exceed 1700 cu km, which is within a factor of 2-3 of what would be predicted using scaling relationships. We present the preliminary results of an analysis of the present-day topography of the Popigai structure, together with refined absolute age estimates, in order to reconstruct the pre-erosional morphology of the basin, as well as to quantify the erosion or sediment infill rates in the Popigai region.
NASA Astrophysics Data System (ADS)
Nanni, E. A.; Graves, W. S.; Moncton, D. E.
2018-01-01
We present a new method for generation of relativistic electron beams with current modulation on the nanometer scale and below. The current modulation is produced by diffracting relativistic electrons in single crystal Si, accelerating the diffracted beam and imaging the crystal structure, then transferring the image into the temporal dimension via emittance exchange. The modulation period can be tuned by adjusting electron optics after diffraction. This tunable longitudinal modulation can have a period as short as a few angstroms, enabling production of coherent hard x-rays from a source based on inverse Compton scattering with total accelerator length of approximately ten meters. Electron beam simulations from cathode emission through diffraction, acceleration, and image formation with variable magnification are presented along with estimates of the coherent x-ray output properties.
Economic impacts of current-use assessment of rural land in the east Texas pineywoods region
Clifford A. Hickman; Kevin D. Crowther
1991-01-01
Those provisions of Texas law that authorize optional current-use property tax assessment for forest and other rural land were studied to: (1) estimate the extent of adoption by qualifying property owners, (2) estimate the effects on assessments and taxes of enrolled land, (3) estimate the impacts on revenues received by local units of government, (4) estimate the...
NASA Astrophysics Data System (ADS)
Ganguly, S.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Milesi, C.; Votava, P.; Nemani, R. R.
2013-12-01
An unresolved issue with coarse-to-medium resolution satellite-based forest carbon mapping over regional to continental scales is the high level of uncertainty in above ground biomass (AGB) estimates caused by the absence of forest cover information at a high enough spatial resolution (current spatial resolution is limited to 30-m). To put confidence in existing satellite-derived AGB density estimates, it is imperative to create continuous fields of tree cover at a sufficiently high resolution (e.g. 1-m) such that large uncertainties in forested area are reduced. The proposed work will provide means to reduce uncertainty in present satellite-derived AGB maps and Forest Inventory and Analysis (FIA) based regional estimates. Our primary objective will be to create Very High Resolution (VHR) estimates of tree cover at a spatial resolution of 1-m for the Continental United States using all available National Agriculture Imaging Program (NAIP) color-infrared imagery from 2010 to 2012. We will leverage the existing capabilities of the NASA Earth Exchange (NEX) high performance computing and storage facilities. The proposed 1-m tree cover map can be further aggregated to provide percent tree cover at any medium-to-coarse resolution spatial grid, which will aid in reducing uncertainties in AGB density estimation at the respective grid and overcome current limitations imposed by medium-to-coarse resolution land cover maps. We have implemented a scalable and computationally-efficient parallelized framework for tree-cover delineation; the core components of the algorithm include a feature extraction process, a Statistical Region Merging image segmentation algorithm, and a classification algorithm based on a Deep Belief Network and a feedforward backpropagation neural network. An initial pilot exercise has been performed over the state of California (~11,000 scenes) to create a wall-to-wall 1-m tree cover map and the classification accuracy has been assessed.
Results show an improvement in accuracy of tree-cover delineation as compared to existing forest cover maps from NLCD, especially over fragmented, heterogeneous and urban landscapes. Estimates of VHR tree cover will complement and enhance the accuracy of present remote-sensing based AGB modeling approaches and forest inventory based estimates at both national and local scales. A requisite step will be to characterize the inherent uncertainties in tree cover estimates and propagate them to estimate AGB.
NASA Astrophysics Data System (ADS)
Zaitsev, V. V.; Stepanov, A. V.
2017-10-01
A mechanism of electron acceleration and storage of energetic particles in solar and stellar coronal magnetic loops, based on oscillations of the electric current, is considered. The magnetic loop is presented as an electric circuit with the electric current generated by convective motions in the photosphere. Eigenoscillations of the electric current in a loop induce an electric field directed along the loop axis. It is shown that the sudden reductions that occur in the course of type IV continuum and pulsating type III bursts observed in various frequency bands (25 - 180 MHz, 110 - 600 MHz, 0.7 - 3.0 GHz) in solar flares provide evidence for acceleration and storage of the energetic electrons in coronal magnetic loops. We estimate the energization rate and the energy of accelerated electrons and present examples of the storage of energetic electrons in loops in the course of flares on the Sun or on ultracool stars. We also discuss the efficiency of the suggested mechanism as compared with the electron acceleration during the five-minute photospheric oscillations and with the acceleration driven by the magnetic Rayleigh-Taylor instability.
Welding current and melting rate in GMAW of aluminium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandey, S.; Rao, U.R.K.; Aghakhani, M.
1996-12-31
Studies on GMAW of aluminium and its alloy 5083 revealed that the welding current and melting rate were affected by any change in wire feed rate, arc voltage, nozzle-to-plate distance, welding speed and torch angle. Empirical models have been presented to determine accurately the welding current and melting rate for any set of these parameters. These results can be utilized for determining accurately the heat input into the workpiece, from which reliable predictions can be made about the mechanical and metallurgical properties of a welded joint. The analysis of the model also helps in providing vital information about the static V-I characteristics of the welding power source. The models were developed using a two-level fractional factorial design. The adequacy of the models was tested by analysis of variance, and the significance of the coefficients was tested by Student's t-test. The estimated and observed values of the welding current and melting rate have been shown on a scatter diagram, and the interaction effects of the different parameters involved have been presented in graphical form.
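A two-level factorial fit of the kind described can be sketched as an ordinary least-squares regression on coded (-1/+1) factor levels; the design matrix and current readings below are illustrative placeholders, not the paper's experimental data:

```python
import numpy as np

# Coded two-level (-1/+1) settings for wire feed rate, arc voltage,
# nozzle-to-plate distance, welding speed and torch angle (illustrative runs).
X = np.array([
    [-1, -1, -1, -1, -1],
    [+1, -1, -1, +1, +1],
    [-1, +1, -1, +1, -1],
    [+1, +1, -1, -1, +1],
    [-1, -1, +1, +1, +1],
    [+1, -1, +1, -1, -1],
    [-1, +1, +1, -1, +1],
    [+1, +1, +1, +1, -1],
])
current_A = np.array([150., 210., 165., 205., 160., 195., 170., 220.])  # hypothetical

# Fit I = b0 + sum(b_i * x_i) by least squares, the linear part of a
# fractional factorial analysis; significance testing (ANOVA, t-test)
# would follow on the fitted coefficients.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, current_A, rcond=None)
predicted = A @ coeffs
```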
Mental disorders in Italian prisoners: results of the REDiMe study.
Macciò, Annalisa; Meloni, Francesca Romana; Sisti, Davide; Rocchi, Marco Bruno Luigi; Petretto, Donatella Rita; Masala, Carmelo; Preti, Antonio
2015-02-28
The goal of the study was to estimate the prevalence of current and lifetime mental disorders in a consecutive sample (n=300) of detainees and prison inmates held in an Italian prison and compare it with the prevalence observed in a community sample (n=300) randomized to match the prisoners' age range (18-55 years), sex ratio, and socio-economic status. Psychiatric disorders were identified with the Mini International Neuropsychiatric Interview (MINI). Current psychiatric disorders were present in 58.7% of prisoners and 8.7% of the comparison group. Lifetime psychiatric disorders were present in 88.7% of prisoners and 15.7% of the comparison group. Current anxiety disorders and current stress-related disorders were related to prisoners serving their first-ever prison sentence. A variable fraction of prisoners with an ongoing psychopathology is not diagnosed or does not receive proper treatment. The provision of effective treatment to prisoners with psychiatric disorders might have potentially substantial public health benefits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Bloodgood, Dale V.
1998-01-01
This report will summarize some results from a multi-year research effort at NASA Langley Research Center aimed at the development of an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects can be amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data particularly related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center, and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the low skin depths. Problems associated with specification of material properties and areas for future research are discussed.
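The difficulty with eddy currents in highly permeable material noted above can be illustrated with the classical skin-depth formula, δ = √(2ρ/ωμ); the material constants below are textbook values, and the comparison shows why high permeability makes the skin depth small:

```python
import math

def skin_depth(resistivity_ohm_m, rel_permeability, freq_hz):
    """Classical skin depth: delta = sqrt(2*rho / (omega * mu))."""
    mu = 4e-7 * math.pi * rel_permeability   # absolute permeability (H/m)
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu))

# Copper at 50 Hz: skin depth around 9 mm.
d_cu = skin_depth(1.7e-8, 1.0, 50.0)
# A highly permeable steel (mu_r ~ 1000) at the same frequency: well under 1 mm,
# so eddy currents are confined to a thin surface layer and are harder to model.
d_fe = skin_depth(1.0e-7, 1000.0, 50.0)
```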
NASA Astrophysics Data System (ADS)
Xu, Jinghai; An, Jiwen; Nie, Gaozong
2016-04-01
Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
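The two-phase lookup described above can be sketched as follows, with a hypothetical 4 × 4 grid standing in for the 30'' × 30'' exposure data and made-up per-cell losses; the co-earthquake phase only masks and sums pre-computed tables:

```python
import numpy as np

# Pre-earthquake phase (illustrative): per-cell losses pre-computed for each
# seismic intensity over the grid; the loss values here are hypothetical.
intensities = [6, 7, 8, 9]
precomputed_loss = {i: np.full((4, 4), 10.0 * (i - 5) ** 2) for i in intensities}

# Co-earthquake phase: a theoretical isoseismal map assigns an intensity to
# each grid cell; losses are then extracted from the pre-computed tables.
intensity_field = np.array([[6] * 4, [7] * 4, [8] * 4, [9] * 4])

total_loss = sum(
    precomputed_loss[i][intensity_field == i].sum() for i in intensities
)
```

Because all heavy computation happens before the event, the co-earthquake phase reduces to masking and summation, which is what makes the estimate fast.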
DOE Office of Scientific and Technical Information (OSTI.GOV)
Araujo, Marcelo Guimaraes, E-mail: marcel_g@uol.com.br; Magrini, Alessandra; Mahler, Claudio Fernando
2012-02-15
Highlights: Literature on WEEE generation in developing countries is reviewed. We analyse existing estimates of WEEE generation for Brazil. We present a model for WEEE generation estimation. WEEE generation of 3.77 kg/capita/year for 2008 is estimated. Use of a constant lifetime should be avoided for non-mature market products. - Abstract: Sales of electrical and electronic equipment are increasing dramatically in developing countries. Usually, there are no reliable data about quantities of the waste generated. A new law for solid waste management was enacted in Brazil in 2010, and the infrastructure to treat this waste must be planned, considering the volumes of the different types of electrical and electronic equipment generated. This paper reviews the literature regarding estimation of waste electrical and electronic equipment (WEEE), focusing on developing countries, particularly in Latin America. It briefly describes the current WEEE system in Brazil and presents an updated estimate of generation of WEEE. Considering the limited available data in Brazil, a model for WEEE generation estimation is proposed in which different methods are used for mature and non-mature market products. The results showed that the most important variable is the equipment lifetime, which requires a thorough understanding of consumer behavior to estimate. Since Brazil is a rapidly expanding market, the 'boom' in waste generation is still to come. In the near future, better data will provide more reliable estimation of waste generation and a clearer interpretation of the lifetime variable throughout the years.
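A lifetime-distribution model of the kind proposed for mature-market products can be sketched as past sales weighted by a discrete lifetime distribution; all sales figures and lifetime probabilities below are hypothetical:

```python
# Waste arising in year t is the sum over ages of sales(t - age) * P(lifetime = age).
sales_by_year = {2000: 1.0, 2001: 1.2, 2002: 1.5, 2003: 1.8, 2004: 2.2}  # Mt sold (hypothetical)
lifetime_pmf = {4: 0.2, 5: 0.3, 6: 0.3, 7: 0.2}  # P(discarded after `age` years)

def weee_generated(year):
    """WEEE arising in a given year (same units as sales)."""
    return sum(
        sales_by_year.get(year - age, 0.0) * p for age, p in lifetime_pmf.items()
    )

w_2008 = weee_generated(2008)
```

For non-mature market products, the abstract's point is precisely that a single constant lifetime is a poor substitute for such a distribution, since the distribution itself shifts as the market matures.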
Defining Tsunami Magnitude as Measure of Potential Impact
NASA Astrophysics Data System (ADS)
Titov, V. V.; Tang, L.
2016-12-01
The goal of tsunami forecast, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, the difficulty of estimating tsunami energy from available tsunami measurements at coastal sea-level stations has carried significant uncertainties, and such estimation has been virtually impossible in real time, before a tsunami impacts coastlines. The slow process of tsunami magnitude estimation, including the collection of vast amounts of available coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. Uncertainties of the estimates made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since real-time seismic data are available for real-time processing and an ample amount of seismic data is available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful for tsunami warning as a quick estimate of tsunami impact and for post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present application of the magnitude analysis to several historical events for inter-comparison with existing methods.
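One standard ingredient of energy-based tsunami scales is the potential energy of the initial sea-surface displacement, E = (ρg/2)∫η²dA; the sketch below uses a hypothetical uniform source and a bare logarithm of energy as a stand-in scale, for illustration only, not the authors' magnitude definition:

```python
import numpy as np

RHO = 1025.0  # sea-water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def tsunami_potential_energy(eta, cell_area_m2):
    """Potential energy (J) of a sea-surface displacement field eta (m):
    E = (rho * g / 2) * sum(eta^2) * dA."""
    return 0.5 * RHO * G * np.sum(np.asarray(eta) ** 2) * cell_area_m2

# Hypothetical 0.5 m uniform uplift over a 100 km x 100 km source area,
# sampled on 1 km grid cells.
eta = np.full((100, 100), 0.5)
E = tsunami_potential_energy(eta, 1000.0 ** 2)
magnitude = np.log10(E)  # simple logarithmic scale, illustration only
```

With real-time modeling, η can come from an inverted source constrained by deep-ocean pressure measurements rather than coastal gauges, which is what makes a quick energy estimate feasible.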
Approaches to Refining Estimates of Global Burden and Economics of Dengue
Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.
2014-01-01
Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. 
As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions. PMID:25412506
Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region
NASA Astrophysics Data System (ADS)
Choi, P. H.; Bang, E.; Lee, J.
2016-12-01
Many applications which utilize radio waves (e.g. navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) which show ionospheric Total Electron Content (TEC) has advanced through the processing of GNSS data. However, the GIMs have limited spatial resolution (e.g. 2.5° in latitude and 5° in longitude), because they are generated using globally distributed, and thus relatively sparse, GNSS reference station networks. This study presents a near real-time and high spatial resolution TEC model over East Asia by using ionospheric observables from both International GNSS Service (IGS) and local GNSS networks and the expanded kriging method. New signals from multi-constellation GNSS (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve the TEC estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived by using the first-order Gauss-Markov process, which characterizes temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grids, the method generates confidence bounds on the estimates using the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions, and compares these results to those provided by the GIM of the NASA Jet Propulsion Laboratory.
In addition, the estimation results based on the expanded kriging method are compared to the results from the universal kriging method for both nominal and disturbed ionospheric conditions.
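The first-order Gauss-Markov propagation described above can be sketched as a scalar Kalman predict/update pair; the TEC value, correlation time, and variances below are illustrative assumptions, not the paper's tuned parameters:

```python
import math

def gauss_markov_predict(x, P, dt, tau, sigma):
    """One Kalman prediction step under a first-order Gauss-Markov model:
    x_k = phi * x_{k-1} + w, with phi = exp(-dt/tau) and process noise
    variance Q = sigma^2 * (1 - phi^2)."""
    phi = math.exp(-dt / tau)
    q = sigma ** 2 * (1.0 - phi ** 2)
    return phi * x, phi ** 2 * P + q

def kalman_update(x, P, z, r):
    """Scalar measurement update with measurement variance r."""
    k = P / (P + r)
    return x + k * (z - x), (1.0 - k) * P

# Propagate a grid-point TEC estimate 5 minutes forward, then fuse a new
# slant-TEC-derived observation (values in TECU, all hypothetical).
x, P = 20.0, 4.0
x, P = gauss_markov_predict(x, P, dt=300.0, tau=3600.0, sigma=5.0)
x, P = kalman_update(x, P, z=22.0, r=1.0)
```

Carrying P along is what yields the confidence bounds the abstract mentions: the posterior variance after each update is exactly the quantity that would be classified into quality categories.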
Approaches to refining estimates of global burden and economics of dengue.
Shepard, Donald S; Undurraga, Eduardo A; Betancourt-Cravioto, Miguel; Guzmán, María G; Halstead, Scott B; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O; Tapia-Conyer, Roberto; Gubler, Duane J
2014-11-01
Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. 
As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions.
Modelling the effect on injuries and fatalities when changing mode of transport from car to bicycle.
Nilsson, Philip; Stigson, Helena; Ohlin, Maria; Strandroth, Johan
2017-03-01
Several studies have estimated the health effects of active commuting, where a transport mode shift from car to bicycle reduces risk of mortality and morbidity. Previous studies mainly assess the negative aspects of bicycling by referring to fatalities or police-reported injuries. However, most bicycle crashes are not reported by the police, and therefore hospital-reported data would cover a much higher rate of injuries from bicycle crashes. The aim of the present study was to estimate the effect on injuries and fatalities from traffic crashes when shifting mode of transport from car to bicycle by using hospital-reported data. This present study models the change in number of injuries and fatalities due to a transport mode change using a given flow change from car to bicycle and current injury and fatality risk per distance for bicyclists and car occupants. Results show that bicyclists have a much higher injury risk (29 times) and fatality risk (10 times) than car occupants. In a scenario where car occupants in Stockholm living close to their work place shift transport mode to bicycling, injuries, fatalities and health loss expressed in Disability-Adjusted Life Years (DALY) were estimated to increase. The vast majority of the estimated DALY increase was caused by severe injuries and fatalities, which tend to fluctuate, so the number of severe crashes may exceed the estimate by a large margin. Despite the estimated increase in traffic crashes and DALYs, a transport mode shift is seen as a way towards a more sustainable society. Thus, this present study highlights the need for strategic preventive measures in order to minimize the negative impacts from increased bicycling. Copyright © 2016 Elsevier Ltd. All rights reserved.
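The core of the mode-shift model, expected injuries as exposure times per-kilometre risk, can be sketched as follows; the shifted distance and the absolute car risk are hypothetical, with only the 29-fold injury-risk ratio taken from the study:

```python
# Expected change in annual injuries when person-km move from car to bicycle:
# delta = shifted_km * (risk_bike - risk_car), risks expressed per person-km.
shifted_km_per_year = 50e6                 # person-km shifted (hypothetical)
injury_risk_car = 1.0e-7                   # injuries per person-km (hypothetical)
injury_risk_bike = 29 * injury_risk_car    # study's ~29x higher injury risk

delta_injuries = shifted_km_per_year * (injury_risk_bike - injury_risk_car)
```

The same structure, with fatality risks and DALY weights per injury severity, yields the DALY change reported in the study.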
de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo
2014-09-05
This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors.
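A second-order (two-RC) thermal model of the kind described can be sketched as an explicit-Euler simulation driven by current-dependent conduction losses; all thermal and electrical parameter values below are hypothetical, not the paper's fitted model:

```python
# Two-stage Foster thermal network (hypothetical parameters).
R1, C1 = 0.010, 50.0      # K/W, J/K  (first thermal stage)
R2, C2 = 0.020, 400.0     # K/W, J/K  (second thermal stage)
V_T0, R_ON = 1.0, 0.0005  # threshold voltage (V), slope resistance (ohm)

def losses(i_field):
    """Thyristor conduction losses (W) as a function of field current."""
    return V_T0 * i_field + R_ON * i_field ** 2

def simulate(i_field, t_amb=40.0, dt=0.1, steps=6000):
    """Explicit-Euler integration of the two-RC network; each stage relaxes
    toward its steady-state rise P * R_i."""
    dT1 = dT2 = 0.0
    for _ in range(steps):
        p = losses(i_field)
        dT1 += dt * (p - dT1 / R1) / C1
        dT2 += dt * (p - dT2 / R2) / C2
    return t_amb + dT1 + dT2

T = simulate(i_field=1000.0)
```

Since the losses, and hence the temperature, are a function of the field current, a simulation of this shape is what allows temperature to be estimated from current alone, as the abstract concludes.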
de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo
2014-01-01
This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors. PMID:25198007
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size, and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians.
We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
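A minimum detectable difference of the kind recommended can be sketched with a two-sample z approximation; the standard deviation and sample sizes below are hypothetical survey values:

```python
import math

def minimum_detectable_difference(sd, n1, n2, z_alpha=1.96, z_beta=0.84):
    """Smallest true difference between two population-size estimates that a
    comparison would detect (two-sided alpha = 0.05, power = 0.80), assuming
    a common standard deviation sd across sampling units."""
    se = sd * math.sqrt(1.0 / n1 + 1.0 / n2)
    return (z_alpha + z_beta) * se

# Hypothetical survey: sd of plot counts = 120 animals, 10 plots per survey.
mdd = minimum_detectable_difference(sd=120.0, n1=10, n2=10)
```

A delisting decision could then require that the observed gain over the recovery criterion exceed the MDD, which is only computable when plans report variance alongside the point estimate.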
Global Positioning System III (GPS III)
2015-12-01
Thermal Vacuum (TVAC) testing began on October 12, 2015, and baseline TVAC testing was successfully completed on December 23, 2015, a major system-level event. The current APB cost estimate references the SCP dated July 02, 2015 and is established at the 60% confidence level; the estimate is built upon the February 2015 cost estimate.
Guareschi, Simone; Coccia, Cristina; Sánchez-Fernández, David; Carbonell, José Antonio; Velasco, Josefa; Boyero, Luz; Green, Andy J.; Millán, Andrés
2013-01-01
Invasions of alien species are considered among the least reversible human impacts, with diversified effects on aquatic ecosystems. Since prevention is the most cost-effective way to avoid biodiversity loss and ecosystem problems, one challenge in ecological research is to understand the limits of the fundamental niche of the species in order to estimate how far invasive species could spread. Trichocorixa verticalis verticalis (Tvv) is a corixid (Hemiptera) originally distributed in North America, but cited as an alien species in three continents. Its impact on native communities is under study, but it is already the dominant species in several saline wetlands and represents a rare example of an aquatic alien insect. This study aims: i) to estimate areas with suitable environmental conditions for Tvv at a global scale, thus identifying potential new zones of invasion; and ii) to test possible changes in this global potential distribution under a climate change scenario. Potential distributions were estimated by applying a multidimensional envelope procedure based on both climatic data, obtained from observed occurrences, and thermal physiological data. Our results suggest Tvv may expand well beyond its current range and find inhabitable conditions in temperate areas along a wide range of latitudes, with an emphasis on coastal areas of Europe, Northern Africa, Argentina, Uruguay, Australia, New Zealand, Myanmar, India, the western boundary between USA and Canada, and areas of the Arabian Peninsula. When considering a future climatic scenario, the suitability area of Tvv showed only limited changes compared with the current potential distribution. These results allow detection of potential contact zones among currently colonized areas and potential areas of invasion. We also identified zones with a high level of suitability that overlap with areas recognized as global hotspots of biodiversity. 
Finally, we present hypotheses about possible means of spread, focusing on different geographical scales. PMID:23555771
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids to compare the sampled-then-interpolated transport grids to those from the original SOSE data, in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: (1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? (2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
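The sample-then-interpolate strategy can be illustrated in one dimension. This is a minimal sketch with a synthetic "truth" field and plain linear interpolation standing in for SOSE and the optimal-interpolation step; all numbers and names here are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fine-grid "truth" (stand-in for SOSE transport along a section).
x_true = np.linspace(0.0, 10.0, 1001)
f_true = np.sin(x_true) + 0.3 * np.sin(3.0 * x_true)

# Sample the truth at sparse, irregular "float/track" locations ...
x_obs = np.sort(rng.uniform(0.0, 10.0, size=40))
f_obs = np.interp(x_obs, x_true, f_true)

# ... map back onto the fine grid (linear interpolation stands in for the
# optimal-interpolation algorithm) and quantify the mapping error against
# the withheld truth.
f_mapped = np.interp(x_true, x_obs, f_obs)
rms_mapping_error = np.sqrt(np.mean((f_mapped - f_true) ** 2))
```

The same error statistic, accumulated over a simulated record, is what bounds the minimum detectable transport trend in the study's framing.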
Integrating legal liabilities in nanomanufacturing risk management.
Mohan, Mayank; Trump, Benjamin D; Bates, Matthew E; Monica, John C; Linkov, Igor
2012-08-07
Among other things, the wide-scale development and use of nanomaterials is expected to produce costly regulatory and civil liabilities for nanomanufacturers due to lingering uncertainties, unanticipated effects, and potential toxicity. The life-cycle environmental, health, and safety (EHS) risks of nanomaterials are currently being studied, but the corresponding legal risks have not been systematically addressed. With the aid of a systematic approach that holistically evaluates and accounts for uncertainties about the inherent properties of nanomaterials, it is possible to provide an order of magnitude estimate of liability risks from regulatory and litigious sources based on current knowledge. In this work, we present a conceptual framework for integrating estimated legal liabilities with EHS risks across nanomaterial life-cycle stages using empirical knowledge in the field, scientific and legal judgment, probabilistic risk assessment, and multicriteria decision analysis. Such estimates will provide investors and operators with a basis to compare different technologies and practices and will also inform regulatory and legislative bodies in determining standards that balance risks with technical advancement. We illustrate the framework through the hypothetical case of a manufacturer of nanoscale titanium dioxide and use the resulting expected legal costs to evaluate alternative risk-management actions.
Physical installation of Pelletron and electron cooling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurh, P.
1997-09-01
Bremsstrahlung of 5 MeV electrons at a loss current of 50 microamps in the acceleration region is estimated to produce X-ray intensities of 7 Rad/sec. Radiation losses due to a mis-steer or sudden obstruction will of course be much higher still (estimated at 87,500 Rad/hr for a 0.5 mA beam current). It is estimated that 1.8 meters of concrete will be necessary to adequately shield the surrounding building areas at any possible Pelletron installation site. To satisfy our present electron cooling development plan, two Pelletron installations are required: the first at our development lab in the Lab B/NEF Enclosure area and the second at the operational Main Injector service building, MI-30, in the Main Injector ring. The same actual Pelletron and electron beam-line components will be used at both locations. The Lab B installation will allow experimentation with an actual high-energy electron beam to develop the optics necessary for the cooling straight while Main Injector/Recycler commissioning is taking place. The MI-30 installation is obviously the permanent home for the Pelletron when electron cooling becomes operational. Construction plans for both installations are discussed here.
Breast cancer risk from different mammography screening practices.
Bijwaard, Harmen; Brenner, Alina; Dekkers, Fieke; van Dillen, Teun; Land, Charles E; Boice, John D
2010-09-01
Mammography screening is an accepted procedure for early detection of breast tumors among asymptomatic women. Since this procedure involves the use of X rays, it is itself potentially carcinogenic. Although there is general consensus about the benefit of screening for older women, screening practices differ between countries. In this paper radiation risks for these different practices are estimated using a new approach. We model breast cancer induction by ionizing radiation in a cohort of patients exposed to frequent X-ray examinations. The biologically based, mechanistic model provides a better foundation for the extrapolation of risks to different mammography screening practices than empirical models do. The model predicts that the excess relative risk (ERR) doubles when screening starts at age 40 instead of 50 and that a continuation of screening at ages 75 and higher carries little extra risk. The number of induced fatal breast cancers is estimated to be considerably lower than derived from epidemiological studies and from internationally accepted radiation protection risks. The present findings, if used in a risk-benefit analysis for mammography screening, would be more favorable to screening than estimates currently recommended for radiation protection. This has implications for the screening ages that are currently being reconsidered in several countries.
NASA Astrophysics Data System (ADS)
Lepping, R. P.; Wu, C.-C.; Berdichevsky, D. B.; Szabo, A.
2018-04-01
We give the results of parameter fitting of the magnetic clouds (MCs) observed by the Wind spacecraft for the three-year period 2013 to the end of 2015 (called the "Present" period) using the MC model of Lepping, Jones, and Burlaga ( J. Geophys. Res. 95, 11957, 1990). The Present period is almost coincident with the solar maximum of the sunspot number, which has a broad peak starting in about 2012 and extending to almost 2015. There were 49 MCs identified in the Present period. The modeling gives MC quantities such as size, axial attitude, field handedness, axial magnetic-field strength, center time, and closest-approach vector. Derived quantities are also estimated, such as axial magnetic flux, axial current density, and total axial current. Quality estimates are assigned representing excellent, fair/good, and poor. We provide error estimates on the specific fit parameters for the individual MCs, where the poor cases are excluded. Model-fitting results that are based on the Present period are compared to the results of the full Wind mission from 1995 to the end of 2015 (Long-term period), and compared to the results of two other recent studies that encompassed the periods 2007 - 2009 and 2010 - 2012, inclusive. We see that during the Present period, the MCs are, on average, slightly slower, slightly weaker in axial magnetic field (by 8.7%), and larger in diameter (by 6.5%) than those in the Long-term period. However, in most respects, the MCs in the Present period are significantly closer in characteristics to those of the Long-term period than to those of the two recent three-year periods. However, the rate of occurrence of MCs for the Long-term period is 10.3 year^{-1}, whereas this rate for the Present period is 16.3 year^{-1}, similar to that of the period 2010 - 2012. Hence, the MC occurrence rate has increased appreciably in the last six years. MC Type (N-S, S-N, All N, All S, etc.) 
is assigned to each MC; there is an inordinately large percentage of All S, by about a factor of two compared to that of the Long-term period, indicating many strongly tipped MCs. In 2005, there was a distinct change in variability and average value (viewed at 1/2 year averages) of the duration, MC speed, axial magnetic field strength, axial magnetic flux, and total current to lower values. In the Present period, upstream shocks occur for 43% of the 49 cases; for comparison, the Long-term rate is 56%.
New insights on the seismic hazard in the Balkans inferred from GPS
NASA Astrophysics Data System (ADS)
D'Agostino, Nicola; Métois, Marianne; Avallone, Antonio; Chamot-Rooke, Nicolas
2014-05-01
The Balkans region sits at the transition between stable Eurasia and the highly straining continental Eastern Mediterranean, resulting in widespread seismicity and high seismic hazard. Because of intensive human and economic development over the last decades, vulnerability has increased in the region faster than progress in seismic hazard assessment. In contrast to the relatively good understanding of seismicity in plate-boundary contexts, the seismic hazard of regions of distributed continental deformation like the Balkans is poorly known and often underestimated (England and Jackson, 2011). Current seismic hazard assessments are based on historical and instrumental catalogues. However, the completeness interval of the historical databases may be shorter than the average recurrence interval of individual seismogenic structures. In addition, relatively sparse seismological networks in the region and limited cross-border seismic data exchange cast doubt on seismotectonic interpretations and challenge our understanding of seismic and geodynamic processes. This results in an inhomogeneous knowledge of the seismic hazard of the region to date. Geodetic measurements can contribute to seismic hazard assessment by mapping the field of current active deformation and translating it into estimates of the seismogenic potential. With simple assumptions, measurements of crustal deformation can be translated into estimates of the average frequency and magnitude of the largest events, and into assessments of the aseismic deformation. GPS networks in the Balkans have been growing during the last few years, mainly for civilian applications (e.g., cadastral surveying, telecommunications), but they open new opportunities to quantify present-day rates of crustal deformation. 
Here we present the initial results of GEOSAB (Geodetic Estimate of Strain Accumulation over Balkans), an AXA-Research-Fund-supported project devoted to the estimation of crustal deformation and the associated seismic hazard of the Balkan region. We processed all currently available data acquired on these new networks using the precise point positioning strategy of the Gipsy-Oasis software (Bertiger et al. 2010) and the daily ITRF2008 transformation parameters (x-files) from JPL. Daily coordinates are expressed in a Eurasia-fixed reference frame obtained using the strategy developed by Blewitt et al. (2012). Here we present this new velocity field combined with previously published data sets covering the Balkan Peninsula. This unusually dense picture of the current deformation, in particular in Slovenia and Serbia, enables us to derive a continuous map of the strain rate over the region using the approach of Haines and Holt (1993). We then derive the seismogenic potential of the region by combining the geodetic strain rate and the available regional CMT moment tensor solutions. These maps bring new insights into areas of significant strain accumulation over the Balkan Peninsula and are a first step toward better assessing seismic hazard there.
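Turning a gridded velocity field into strain rates can be sketched with plain finite differences. The Haines and Holt (1993) method is a more sophisticated continuous fit; the uniform-shear velocity field and grid spacing below are synthetic assumptions for illustration only.

```python
import numpy as np

# Hypothetical gridded horizontal velocities (mm/yr) on a unit grid:
# simple shear vx = a*y, vy = 0, whose only nonzero strain-rate
# component is eps_xy = a/2.
a = 2.0e-2                                  # velocity gradient per grid step
y, x = np.meshgrid(np.arange(50.0), np.arange(50.0), indexing="ij")
vx = a * y
vy = np.zeros_like(vx)

# 2D strain-rate tensor from finite differences.  np.gradient returns
# the derivative along axis 0 (here y) first, then axis 1 (here x).
dvx_dy, dvx_dx = np.gradient(vx)
dvy_dy, dvy_dx = np.gradient(vy)
eps_xx = dvx_dx
eps_yy = dvy_dy
eps_xy = 0.5 * (dvx_dy + dvy_dx)            # symmetric shear component
```

Summed over an assumed seismogenic thickness and rigidity, such strain rates yield the geodetic moment rate that is compared against the CMT catalogue.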
Kramers-Kronig relations in Laser Intensity Modulation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuncer, Enis
2006-01-01
In this short paper, the Kramers-Kronig relations for the Laser Intensity Modulation Method (LIMM) are presented to check the self-consistency of experimentally obtained complex current densities. The numerical procedure yields well-defined, precise estimates for the real and the imaginary parts of the LIMM current density calculated from its imaginary and real parts, respectively. The procedure also determines an accurate high-frequency real current value, which appears to be an intrinsic material parameter similar to the dielectric permittivity at optical frequencies. Note that the problem considered here couples two different material properties, thermal and electrical; consequently, the validity of the Kramers-Kronig relation indicates that the problem is invariant and linear.
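A minimal numerical illustration of such a self-consistency check, using a synthetic causal one-pole response rather than LIMM data. The discrete-time route below, via the even/odd parts of the impulse response, is one standard way to implement the Kramers-Kronig relations; note that the real part is recovered only up to a frequency-independent constant, analogous to the high-frequency real value mentioned in the abstract.

```python
import numpy as np

# Causal one-pole relaxation h(t) = exp(-t/tau)/tau as a stand-in for a
# LIMM-like response; its spectrum chi(w) plays the role of the measured
# complex current density.
tau, n, dt = 1.0, 4096, 0.01
t = np.arange(n) * dt
h = np.exp(-t / tau) / tau
chi = np.fft.fft(h) * dt                      # complex "measured" spectrum

# Kramers-Kronig check: i*Im(chi) inverse-transforms to the odd part
# h_o(t) of the impulse response; causality forces h_e(t) = sgn(t)*h_o(t),
# whose forward transform is Re(chi).  On the periodic FFT grid, indices
# n//2..n-1 play the role of negative times.
h_o = np.real(np.fft.ifft(1j * chi.imag) / dt)
sgn = np.ones(n)
sgn[n // 2:] = -1.0
sgn[0] = 0.0                                  # t = 0: information lost here
h_e = sgn * h_o
re_reconstructed = np.real(np.fft.fft(h_e) * dt)

# The lost t = 0 sample shows up as a constant offset across frequency:
# the instantaneous (high-frequency) part of the real response.
offset = h[0] * dt
```

Disagreement between `re_reconstructed + offset` and the measured real part beyond numerical error would flag inconsistent (non-causal or nonlinear) data.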
Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...
2010-12-08
We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D tokamak. In these experiments, we have validated scenarios for the ITER current ramp-up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution, together with comparisons against data from our similarity experiments.
Gumbricht, Thomas; Roman-Cuesta, Rosa Maria; Verchot, Louis; Herold, Martin; Wittmann, Florian; Householder, Ethan; Herold, Nadine; Murdiyarso, Daniel
2017-09-01
Wetlands are important providers of ecosystem services and key regulators of climate change. They positively contribute to global warming through their greenhouse gas emissions, and negatively through the accumulation of organic material in histosols, particularly in peatlands. Our understanding of wetlands' services is currently constrained by limited knowledge of their distribution, extent, volume, interannual flood variability and disturbance levels. We present an expert system approach to estimate wetland and peatland areas, depths and volumes, which relies on three biophysical indices related to wetland and peat formation: (1) long-term water supply exceeding atmospheric water demand; (2) annually or seasonally water-logged soils; and (3) a geomorphological position where water is supplied and retained. Tropical and subtropical wetland estimates reach 4.7 million km² (Mkm²). In line with current understanding, the American continent is the major contributor (45%), and Brazil, with its Amazonian interfluvial region, contains the largest tropical wetland area (800,720 km²). Our model suggests, however, unprecedented extents and volumes of peatland in the tropics (1.7 Mkm² and 7,268 (6,076-7,368) km³), more than three times current estimates. Contrary to current understanding, our estimates suggest that South America, and not Asia, contributes the most to tropical peatland area and volume (ca. 44% for both), partly related to some as-yet-unaccounted deep deposits but mainly to extended yet shallow peat in the Amazon Basin. Brazil leads the peatland area and volume contribution. Asia hosts 38% of both tropical peat area and volume, with Indonesia as the main regional contributor and still the holder of the deepest and most extended peat areas in the tropics. Africa hosts more peat than previously reported, but climatic and topographic contexts leave it as the least peat-forming continent. 
Our results suggest large biases in our current understanding of the distribution, area and volumes of tropical peat and their continental contributions. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Griffiths, Thomas L.; Tenenbaum, Joshua B.
2011-01-01
Predicting the future is a basic problem that people have to solve every day and a component of planning, decision making, memory, and causal reasoning. In this article, we present 5 experiments testing a Bayesian model of predicting the duration or extent of phenomena from their current state. This Bayesian model indicates how people should…
The October 1973 expendable launch vehicle traffic model, revision 2
NASA Technical Reports Server (NTRS)
1974-01-01
Traffic model data for current expendable launch vehicles (assuming no Space Shuttle) for calendar years 1980 through 1991 are presented, along with some supporting and summary data. This model was based on a payload program equivalent in scientific return to the October 1973 NASA Payload Model, the NASA-estimated non-NASA/non-DoD Payload Model, and the 1971 DoD Mission Model.
Sonja N. Oswalt; W. Brad Smith; Patrick D. Miles; Scott A. Pugh
2014-01-01
Forest resource statistics from the 2010 Resources Planning Act (RPA) Assessment were updated to provide current information on the Nation's forests as a baseline for the 2015 national assessment. Resource tables present estimates of forest area, volume, mortality, growth, removals, and timber products output in various ways, such as by ownership, region, or State...
Growth and Yield Predictions for Thinned Stands of Even-aged Natural Longleaf Pine
Robert M. Farrar
1979-01-01
This paper presents a system of equations and resulting tables that can predict stand volumes for thinned natural longleaf pine. The system can predict current and future total stand volume in cubic feet and merchantable stand volume in cubic feet, cords, and board feet. The system also provides for estimating dry-weight production of wood. The system uses input data...
NASA Technical Reports Server (NTRS)
Lummus, J. R.; Joyce, G. T.; Omalley, C. D.
1980-01-01
An evaluation of current prediction methodologies to estimate the aerodynamic uncertainties identified for the E205 configuration is presented. This evaluation was accomplished by comparing predicted and wind tunnel test data in three major categories: untrimmed longitudinal aerodynamics; trimmed longitudinal aerodynamics; and lateral-directional aerodynamic characteristics.
Carbon stocks on forestland of the United States, with emphasis on USDA Forest Service ownership
Linda S. Heath; James E. Smith; Christopher W. Woodall; David L. Azuma; Karen L. Waddell
2011-01-01
The U.S. Department of Agriculture Forest Service (USFS) manages one-fifth of the area of forestland in the United States. The Forest Service Roadmap for responding to climate change identified assessing and managing carbon stocks and change as a major element of its plan. This study presents methods and results of estimating current forest carbon stocks and change in...
NASA Technical Reports Server (NTRS)
1977-01-01
Multiple access techniques (FDMA, CDMA, TDMA) for the mobile user are discussed, and an attempt is made to identify the current best technique. Traffic loading is considered, as well as voice and data modulation and spacecraft and system design. Emphasis is placed on developing mobile terminal cost estimates for the selected design. In addition, design examples are presented for the alternative multiple access techniques in order to compare them with the selected technique.
Don Minore; Donald R. Gedney
1960-01-01
A large proportion of present-day timber cruising is done by measuring or estimating three tree dimensions: diameter at breast height, form class, and merchantable height. Tree volumes are then determined from tables which equate volume to the varying combinations of height, d.b.h., and form class. Assumptions concerning merchantable height were made in constructing...
Projecting the climatic effects of increasing carbon dioxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCracken, M C; Luther, F M
This report presents the current knowns, unknowns, and uncertainties regarding the projected climate changes that might occur as a result of an increasing atmospheric CO₂ concentration. Further, the volume describes what research is required to estimate the magnitude and rate of a CO₂-induced climate change with regional and seasonal resolution. Separate abstracts have been prepared for the individual papers. (ACR)
The Observational Determination of the Primordial Helium Abundance: a Y2K Status Report
NASA Astrophysics Data System (ADS)
Skillman, Evan D.
I review observational progress and assess the current state of the determination of the primordial helium abundance, Yp. At present there are two determinations with non-overlapping errors. My impression is that the errors have been under-estimated in both studies. I review recent work on errors assessment and give suggestions for decreasing systematic errors in future studies.
The role of harvest residue in rotation cycle carbon balance in loblolly pine plantations
Asko Noormets; Steve G. Mcnulty; Jean-Christophe Domec; Michael Gavazzi; Ge Sun; John S. King
2012-01-01
Timber harvests remove a significant portion of ecosystem carbon. While some of the wood products moved off-site may last past the harvest cycle of the particular forest crop, the effect of the episodic disturbances on long-term on-site carbon sequestration is unclear. The current study presents a 25 year carbon budget estimate for a typical commercial loblolly pine...
Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Fang, L.
2018-02-01
Currently there are few free-surface benchmark cases with analytical results for Smoothed Particle Hydrodynamics (SPH) simulation. In the present contribution we introduce a two-dimensional free-surface flow under gravity and obtain an analytical expression for the surface height difference and a theoretical estimate of the surface fractal dimension. These results are preliminarily validated and supported by SPH calculations.
Population estimates of Nearctic shorebirds
Morrison, R.I.G.; Gill, Robert E.; Harrington, B.A.; Skagen, S.K.; Page, G.W.; Gratto-Trevor, C. L.; Haig, S.M.
2000-01-01
Estimates are presented for the population sizes of 53 species of Nearctic shorebirds occurring regularly in North America, plus four species that breed occasionally. Shorebird population sizes were derived from data obtained by a variety of methods from breeding, migration and wintering areas, and formal assessments of accuracy of counts or estimates are rarely available. Accurate estimates exist only for a few species that have been the subject of detailed investigation, and the likely accuracy of most estimates is considered poor or low. Population estimates range from a few tens to several millions. Overall, population estimates most commonly fell in the range of hundreds of thousands, particularly the low hundreds of thousands; estimated population sizes for large shorebird species currently all fall below 500,000. Population size was inversely related to size (mass) of the species, with a statistically significant negative regression between log (population size) and log (mass). Two outlying groups were evident on the regression graph: one, with populations lower than predicted, included species considered either to be "at risk" or particularly hard to count, and a second, with populations higher than predicted, included two species that are hunted. Population estimates are an integral part of conservation plans being developed for shorebirds in the United States and Canada, and may be used to identify areas of key international and regional importance.
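The reported log-log regression can be sketched on synthetic numbers. The power-law exponent, scatter, and species masses below are illustrative assumptions, not the paper's data; the point is only the mechanics of fitting log(population size) against log(mass).

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (not the paper's) data: population ~ c * mass^b with b < 0,
# plus lognormal scatter, for 53 hypothetical species.
mass = rng.uniform(20.0, 1500.0, size=53)            # body mass, grams
true_b, true_c = -0.8, 1.0e7
population = true_c * mass ** true_b * rng.lognormal(0.0, 0.5, size=53)

# Ordinary least squares on log-log axes; a negative slope reproduces
# the inverse population-size/mass relationship described in the text.
slope, intercept = np.polyfit(np.log10(mass), np.log10(population), 1)
```

Species falling well below the fitted line would correspond to the "at risk or hard to count" outlier group; those well above it to the hunted species.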
Estimates of shorebird populations in North America
Morrison, R.I.G.; Gill, Robert E.; Harrington, B.A.; Skagen, S.K.; Page, G.W.; Gratto-Trevor, C. L.; Haig, S.M.
2001-01-01
Estimates are presented for the population sizes of 53 species of Nearctic shorebirds occurring regularly in North America, plus four species that breed occasionally. Population estimates range from a few tens to several millions. Overall, population estimates most commonly fall in the range of hundreds of thousands, particularly the low hundreds of thousands; estimated population sizes for large shorebird species currently all fall below 500 000. Population size is inversely related to size (mass) of the species, with a statistically significant negative regression between log(population size) and log(mass). Two outlying groups are evident on the regression graph: one, with populations lower than predicted, includes species considered to be either “at risk” or particularly hard to count, and a second, with populations higher than predicted, includes two species that are hunted. Shorebird population sizes were derived from data obtained by a variety of methods from breeding, migration, and wintering areas, and formal assessments of accuracy of counts or estimates are rarely available. Accurate estimates exist only for a few species that have been the subject of detailed investigation, and the likely accuracy of most estimates is considered poor or low. Population estimates are an integral part of conservation plans being developed for shorebirds in the United States and Canada and may be used to identify areas of key international and regional importance.
A new method for constructing networks from binary data
NASA Astrophysics Data System (ADS)
van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.
2014-08-01
Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
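The core idea, nodewise logistic regression on binary data, can be sketched as follows. The data-generating process is a toy assumption, the gradient-descent fitter is a plain stand-in, and the paper's Goodness-of-Fit model-selection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Toy binary data: x1 and x2 strongly coupled, x3 independent of both.
x1 = rng.integers(0, 2, size=n)
x2 = np.where(rng.random(n) < 0.9, x1, 1 - x1)   # agrees with x1 90% of time
x3 = rng.integers(0, 2, size=n)

def logistic_fit(X, y, lr=0.5, iters=3000):
    """Plain gradient-descent logistic regression (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta -= lr * Xb.T @ (p - y) / len(y)
    return beta

# Nodewise regression of x2 on the remaining variables: a large coefficient
# on x1 defines an edge x1--x2, while the coefficient on x3 stays near zero
# and would be pruned by the model-selection step.
beta = logistic_fit(np.column_stack([x1, x3]), x2)
edge_x1, edge_x3 = beta[1], beta[2]
```

Repeating the regression with each variable in turn as the response, and symmetrizing, yields the full network in the Ising-model framing.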
A Review of Global Precipitation Data Sets: Data Sources, Estimation, and Intercomparisons
NASA Astrophysics Data System (ADS)
Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin
2018-03-01
In this paper, we present a comprehensive review of the data sources and estimation methods of 30 currently available global precipitation data sets, including gauge-based, satellite-related, and reanalysis data sets. We analyzed the discrepancies between the data sets from daily to annual timescales and found large differences in both the magnitude and the variability of precipitation estimates. The magnitude of annual precipitation estimates over global land deviated by as much as 300 mm/yr among the products. Reanalysis data sets had a larger degree of variability than the other types of data sets. The degree of variability in precipitation estimates also varied by region. Large differences in annual and seasonal estimates were found in tropical oceans, complex mountain areas, northern Africa, and some high-latitude regions. Overall, the variability associated with extreme precipitation estimates was slightly greater at lower latitudes than at higher latitudes. The reliability of precipitation data sets is mainly limited by the number and spatial coverage of surface stations, the satellite algorithms, and the data assimilation models. The inconsistencies described limit the capability of the products for climate monitoring, attribution, and model validation.
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method which can be easily embedded in vehicles or nomad devices. The proposed method not only allows an accurate frequency-domain impedance estimation, but also tracks its temporal evolution, unlike classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing current-control electronics to perform an evolutionary frequency-domain estimation of the electrochemical impedance. The developed method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter that manages a trade-off between tracking and estimation performance. This normalized parameter allows the behavior of the proposed estimator to be correctly adapted to variations of the impedance. The advantage of the proposed method is twofold: it is easy to embed into a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results and estimation performance as accurate as the usual laboratory approaches.
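The recursive averaging of Fourier transforms can be sketched as follows. The purely resistive battery model, window sizes, and function names are assumptions for illustration; the authors' exact estimator is not reproduced, only the general idea of exponentially averaged cross- and auto-spectra under broadband excitation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated battery: purely resistive impedance for this sketch.
R_true = 0.05                       # ohms
n_win = 256                         # samples per analysis window

def impedance_update(state, i_win, v_win, alpha=0.1):
    """One recursive update of exponentially averaged spectra.
    alpha trades tracking speed against estimation variance."""
    I = np.fft.rfft(i_win)
    V = np.fft.rfft(v_win)
    num = V * np.conj(I)            # cross-spectrum, current -> voltage
    den = np.abs(I) ** 2            # current auto-spectrum
    if state is None:
        return num, den
    return ((1 - alpha) * state[0] + alpha * num,
            (1 - alpha) * state[1] + alpha * den)

state = None
for _ in range(50):                 # 50 windows of wideband excitation
    i_win = rng.standard_normal(n_win)   # broadband current signal
    v_win = R_true * i_win               # ideal resistive voltage response
    state = impedance_update(state, i_win, v_win)

Z_est = state[0] / state[1]         # impedance estimate per frequency bin
```

With a time-varying impedance, the same single parameter `alpha` lets the averaged spectra forget old windows and follow the drift, which is the tracking/variance trade-off described in the abstract.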
Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.
Manjón, José V; Tohka, Jussi; Robles, Montserrat
2010-11-01
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.