Sample records for estimated maximum potential

  1. Photovoltaic-Model-Based Solar Irradiance Estimators: Performance Comparison and Application to Maximum Power Forecasting

    NASA Astrophysics Data System (ADS)

    Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario

    2018-01-01

    Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly-estimated irradiance. Three estimators are compared in terms of i) ability to compute the PV plant maximum power, ii) bandwidth, and iii) robustness against measurement noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e., when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out by using a dedicated experimental setup on a 14.3 kWp rooftop PV installation. Results also show that the analyzed methods can outperform pyranometer-based estimations, with a less complex sensing system. We show how the obtained PV maximum power values can be applied to train time series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used for training, are not representative of the maximum PV potential.
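
    A minimal sketch of the general idea (not the paper's three estimators): infer irradiance from the measured DC current and cell temperature with an assumed linear photocurrent model, then scale the nameplate rating PVWatts-style. All array constants below are hypothetical.

```python
# Hedged sketch: infer irradiance from measured DC current, then scale
# nameplate power to estimate maximum potential output. The linear
# current-to-irradiance model and the coefficients are illustrative
# assumptions, not the estimators compared in the paper.

P_STC = 14300.0   # plant rating, W (14.3 kWp, from the abstract)
I_SC_STC = 45.0   # hypothetical array short-circuit current at STC, A
ALPHA_I = 0.0005  # current temperature coefficient, 1/degC (assumed)
GAMMA_P = -0.004  # power temperature coefficient, 1/degC (assumed)

def estimate_irradiance(i_dc, t_cell):
    """Invert the (assumed) linear photocurrent model; returns W/m2."""
    return 1000.0 * i_dc / (I_SC_STC * (1.0 + ALPHA_I * (t_cell - 25.0)))

def estimate_max_power(i_dc, t_cell):
    """PVWatts-style maximum-power estimate from the inferred irradiance."""
    g = estimate_irradiance(i_dc, t_cell)
    return P_STC * (g / 1000.0) * (1.0 + GAMMA_P * (t_cell - 25.0))

# During curtailment the measured power is low, but the estimate still
# tracks the array's potential: e.g. 30 A at 40 degC cell temperature.
print(round(estimate_max_power(30.0, 40.0), 1))
```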

  2. Carpooling: status and potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kendall, D.C.

    1975-06-01

    Studies were conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented together with analyses revealing characteristics of carpool trips, incentives, impacts of increased carpooling and issues related to carpool matching services. National survey results indicate the average auto occupancy for urban work trips is 1.2 persons per auto. This value, and the average carpool occupancy of 2.5, have been relatively stable over the last five years. An increase in work-trip occupancy from 1.2 to 1.8 would require a 100% increase in the number of carpoolers. A model was developed to predict the maximum potential level of carpooling in an urban area. Results from applying the model to the Boston region were extrapolated to estimate a maximum nationwide potential between 47 and 71% of peak-period auto commuters. Maximum benefits of increased carpooling include up to 10% savings in auto fuel consumption. A technique was developed for estimating the number of participants required in a carpool matching service to achieve a chosen level of matching among respondents, providing insight into tradeoffs between employer and regional or centralized matching services. Issues recommended for future study include incentive policies and their impacts on other modes, and the evaluation of new and ongoing carpool matching services. (11 references) (GRA)
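
    The matching-service sizing question lends itself to a toy calculation. The sketch below assumes each pair of respondents is compatible with some fixed probability p; this is an illustrative simplification, not the report's actual technique.

```python
# Toy model of matching-service sizing (not the report's model): if each
# pair of respondents is compatible with probability p, the chance that a
# given respondent finds at least one match among n-1 others is
# 1 - (1-p)**(n-1). Solving for n gives the participants needed to reach
# a target match level.
import math

def participants_needed(p_pair, target_match_rate):
    """Respondents required so P(at least one match) >= target."""
    return math.ceil(1 + math.log(1 - target_match_rate) / math.log(1 - p_pair))

# e.g. 1% pairwise compatibility, 80% desired match rate
print(participants_needed(0.01, 0.80))  # ~162 respondents
```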

  3. Estimating the maximum potential revenue for grid connected electricity storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the maximum potential revenue benchmark. We conclude with a sensitivity analysis with respect to key parameters.
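
    The arbitrage-only piece of such a formulation is compact enough to sketch. The linear program below, with illustrative prices and device parameters, maximizes arbitrage revenue subject to state-of-charge dynamics; the paper's full model additionally co-optimizes regulation offers.

```python
# Arbitrage-only sketch in the spirit of the paper's linear program, using
# scipy.optimize.linprog. Prices, efficiency, and device sizes are assumed.
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 15.0, 30.0, 55.0, 40.0, 25.0])  # $/MWh, assumed
T = len(prices)
eta = 0.85        # charging efficiency (assumed)
p_max = 1.0       # MW charge/discharge limit
e_max = 4.0       # MWh energy capacity

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
# Maximize sum(prices * (discharge - charge))  ->  minimize the negative.
c = np.concatenate([prices, -prices])

# State of charge after hour t: soc_t = sum(eta*charge - discharge)[:t+1]
# Enforce 0 <= soc_t <= e_max with cumulative-sum inequality rows.
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([
    np.hstack([ eta * L, -L]),   #  soc_t <= e_max
    np.hstack([-eta * L,  L]),   # -soc_t <= 0
])
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * 2 * T)
print("max arbitrage revenue: $%.2f" % -res.fun)
```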

  4. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  5. Comparing fishers' and scientific estimates of size at maturity and maximum body size as indicators for overfishing.

    PubMed

    Mclean, Elizabeth L; Forrester, Graham E

    2018-04-01

    We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.

  6. 40 CFR 60.46c - Emission monitoring for sulfur dioxide.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... potential SO2 emission rate of the fuel combusted, and the span value of the SO2 CEMS at the outlet from the SO2 control device shall be 50 percent of the maximum estimated hourly potential SO2 emission rate of... estimated hourly potential SO2 emission rate of the fuel combusted. (d) As an alternative to operating a...

  7. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  8. Carpooling : Status and Potential

    DOT National Transportation Integrated Search

    1975-06-01

    The report contains the findings of studies conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented...

  9. FY 17 Q1 Commercial integrated heat pump with thermal storage milestone report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Heiba, Ahmad; Baxter, Van D.; Shen, Bo

    2017-01-01

    The commercial integrated heat pump with thermal storage (AS-IHP) offers significant energy savings over a baseline heat pump with electric water heater. The savings potential is maximized when the AS-IHP serves coincident high water heating and high space cooling demands. A previous energy performance analysis showed that the AS-IHP provides the highest benefit in the hot-humid and hot-dry/mixed-dry climate regions. Analysis of technical potential energy savings for these climate zones based on the BTO Market Calculator indicated that the following commercial building market segments had the highest water heating loads relative to space cooling and heating loads: education, food service, health care, lodging, and mercantile/service. In this study, we focused on these building types to conservatively estimate the market potential of the AS-IHP. Our analysis estimates maximum annual shipments of ~522,000 units assuming 100% of the total market is captured. An early replacement market based on replacement of systems in target buildings between 15 and 35 years old was estimated at ~136,000 units. Technical potential energy savings are estimated at ~0.27 quad based on the maximum market estimate, equivalent to ~13.9 MM ton CO2 emissions reduction.

  10. The role of misclassification in estimating proportions and an estimator of misclassification probability

    Treesearch

    Patrick L. Zimmerman; Greg C. Liknes

    2010-01-01

    Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...

  11. Assessment of Potential Health Hazards During Emission of Hydrogen Sulphide from the Mine Exploiting Copper Ore Deposit - Case Study.

    PubMed

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Gromiec, Jan P; Konieczko, Katarzyna

    2015-06-01

    The aim of this study was to determine the hydrogen sulphide concentration emitted from the mine extracting copper ore and to evaluate potential adverse health effects on the population living in four selected villages surrounding the exhaust shaft. The maximum measured concentration of hydrogen sulphide in the emitter is 286 µg/m³. The maximum emission calculated from the results of determinations of concentrations in the emitter is 0.44 kg/h. In the selected villages, hydrogen sulphide at concentrations exceeding 4 µg/m³ was not detected in any of the 5-hour air samples. In all locations, the estimated maximum 1-hour concentrations of hydrogen sulphide were below 1 µg/m³, and the estimated mean annual concentrations were below 0.53 µg/m³. No risk to the health of people in the selected area is expected. However, as indicated by the available data on the odour threshold, the estimated concentrations of hydrogen sulphide may be sensed by humans. Copyright© by the National Institute of Public Health, Prague 2015.

  12. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on the maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and the received target image has each of its pixels disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative update of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy where accurate and stabilized optical pointing is essential.
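
    The paper's update uses a weighted correlation in the transform domain; a standard, closely related building block is FFT phase correlation for translation estimation, sketched below on synthetic images. Rotation handling and the iterative tracking loop are omitted.

```python
# Sketch of shift estimation via FFT phase correlation, a standard
# transform-domain correlation shown as a stand-in for the paper's
# weighted-correlation update (which also handles rotation and iterates
# inside a closed tracking loop).
import numpy as np

def phase_correlate(reference, received):
    """Estimate the (row, col) shift that maps `received` onto `reference`."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(received)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
moved = np.roll(img, (5, -3), axis=(0, 1)) + 0.1 * rng.normal(size=(64, 64))
print(phase_correlate(moved, img))  # approximately (5, -3)
```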

  13. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods overpredict maximum magnitude for this area or that long time delays are required for sufficient pore-pressure diffusion to occur to cause rupture along an entire fault segment. We note that earthquake clusters can initiate and grow rapidly over the course of 1 or 2 yr, thus making it difficult to predict maximum earthquake magnitudes far into the future. The abrupt onset of seismicity with injection indicates that pore-pressure increases near the well have been sufficient to trigger earthquakes under pre-existing tectonic stresses. However, we do not observe remote triggering from large teleseismic earthquakes, which suggests that the stress perturbations generated from those events are too small to trigger rupture, even with the increased pore pressures.
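
    Both lines of reasoning reduce to short formulas. The sketch below applies a McGarr-style volume bound and the penny-shaped-crack moment relation; the shear modulus, stress drop, and input values are generic assumptions, not PVU-specific parameters.

```python
# Back-of-envelope versions of the two bounds discussed above.
import math

def mcgarr_max_mw(injected_volume_m3, shear_modulus_pa=3.0e10):
    """McGarr (2014) ceiling M0 <= G * dV (N*m), converted to Mw."""
    m0 = shear_modulus_pa * injected_volume_m3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def penny_crack_mw(radius_m, stress_drop_pa=3.0e6):
    """Moment of a circular (penny-shaped) rupture: M0 = (16/7)*dSigma*r^3."""
    m0 = (16.0 / 7.0) * stress_drop_pa * radius_m**3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

print(round(mcgarr_max_mw(8.0e6), 2))    # ~8 million m3 injected (assumed)
print(round(penny_crack_mw(2000.0), 2))  # 2 km fault radius (assumed)
# Both give roughly Mw 5, consistent with the estimate quoted above.
```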

  14. Comparison of irrigation pumpage with change in ground-water storage in the High Plains aquifer in Chase, Dundy, and Perkins counties, Nebraska, 1975-83

    USGS Publications Warehouse

    Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.

    1987-01-01

    The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author's abstract)

  15. Tropical Africa: Land use, biomass, and carbon estimates for 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.; Gaston, G.; Daniels, R.C.

    1996-06-01

    This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980 and describes a methodology that may be used to extend this data set to 1990 and beyond based on population and land cover data. The biomass data and carbon estimates are for woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with the possible magnitude of historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 × 10⁶ km² of the earth's land surface and includes those countries that for the most part are located in Tropical Africa. Countries bordering the Mediterranean Sea and in southern Africa (i.e., Egypt, Libya, Tunisia, Algeria, Morocco, South Africa, Lesotho, Swaziland, and Western Sahara) have maximum potential biomass and land cover information but do not have biomass or carbon estimates. The database was developed using the GRID module in the ARC/INFO™ geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass-carbon values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.

  16. Actual and potential transpiration and carbon assimilation in an irrigated poplar plantation.

    PubMed

    Kim, Hyun-Seok; Oren, Ram; Hinckley, Thomas M

    2008-04-01

    We examined the tradeoffs between stand-level water use and carbon uptake that result when biomass production of trees in plantations is maximized by removing nutrient and water limitations. A Populus trichocarpa Torr. × P. deltoides Bartr. & Marsh. plantation was irrigated and received frequent additions of nutrients to optimize biomass production. Sap flux density was measured continuously over four of the six growing-season months, supplemented with periodic measurements of leaf gas exchange and water potential. Measurements of tree diameter and height were used to estimate leaf area and biomass production based on allometric relationships. Sap flux was converted to canopy conductance and analyzed with an empirical model to isolate the effects of water limitation. Actual and soil-water-unlimited potential CO₂ uptakes were estimated with a canopy conductance constrained carbon assimilation (4C-A) scheme, which couples actual or potential canopy conductance with vertical gradients of light distribution, leaf-level conductance, maximum Rubisco capacity and maximum electron transport. Net primary production (NPP) was about 43% of gross primary production (GPP); when estimated for individual trees, this ratio was independent of tree size. Based on the NPP/GPP ratio, we found that current irrigation reduced growth by about 18% compared with growth with no water limitation. To achieve maximum growth, however, would require 70% more water for transpiration, and would reduce water-use efficiency by 27%, from 1.57 to 1.15 g stem wood C kg⁻¹ water. Given the economic and social values of water, plantation managers appear to have optimized water use.

  17. Estimation of potential maximum biomass of trout in Wyoming streams to assist management decisions

    USGS Publications Warehouse

    Hubert, W.A.; Marwitz, T.D.; Gerow, K.G.; Binns, N.A.; Wiley, R.W.

    1996-01-01

    Fishery managers can benefit from knowledge of the potential maximum biomass (PMB) of trout in streams when making decisions on the allocation of resources to improve fisheries. Resources are most likely to be expended on streams with high PMB and with large differences between PMB and currently measured biomass. We developed and tested a model that uses four easily measured habitat variables to estimate PMB (upper 90th percentile of predicted mean biomass) of trout (Oncorhynchus spp., Salmo trutta, and Salvelinus fontinalis) in Wyoming streams. The habitat variables were proportion of cover, elevation, wetted width, and channel gradient. The PMB model was constructed from data on 166 stream reaches throughout Wyoming and validated on an independent data set of 50 stream reaches. Prediction of PMB in combination with estimation of current biomass and information on habitat quality can provide managers with insight into the extent to which management actions may enhance trout biomass.

  18. D-PSA-K: A Model for Estimating the Accumulated Potential Damage on Kiwifruit Canes Caused by Bacterial Canker during the Growing and Overwintering Seasons.

    PubMed

    Do, Ki Seok; Chung, Bong Nam; Joa, Jae Ho

    2016-12-01

    We developed a model, termed D-PSA-K, to estimate the accumulated potential damage on kiwifruit canes caused by bacterial canker during the growing and overwintering seasons. The model consisted of three parts including estimation of the amount of necrotic lesion in a non-frozen environment, the rate of necrosis increase in a freezing environment during the overwintering season, and the amount of necrotic lesion on kiwifruit canes caused by bacterial canker during the overwintering and growing seasons. We evaluated the model's accuracy by comparing the observed maximum disease incidence on kiwifruit canes against the damage estimated using weather and disease data collected at Wando during 1994-1997 and at Seogwipo during 2014-2015. For the Hayward cultivar, D-PSA-K estimated the accumulated damage as approximately nine times the observed maximum disease incidence. For the Hort16A cultivar, the accumulated damage estimated by D-PSA-K was high when the observed disease incidence was high. D-PSA-K could assist kiwifruit growers in selecting optimal sites for kiwifruit cultivation and establishing improved production plans by predicting the loss in kiwifruit production due to bacterial canker, using past weather or future climate change data.

  19. A computer program for estimating instream travel times and concentrations of a potential contaminant in the Yellowstone River, Montana

    USGS Publications Warehouse

    McCarthy, Peter M.

    2006-01-01

    The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies do not currently (2006) exist and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which correspond to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.

  20. What controls the maximum magnitude of injection-induced earthquakes?

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
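
    The sample-size view is easy to make concrete: for an unbounded Gutenberg-Richter law with slope b and N events above the completeness magnitude Mc, the expected largest magnitude is roughly Mc + log10(N)/b. The values below are illustrative, not tied to any specific data set.

```python
# Sketch of the statistical (sample-size) approach in toy form.
import math

def expected_max_magnitude(n_events, mc=1.0, b_value=1.0):
    """Expected sample maximum for a Gutenberg-Richter law with slope b."""
    return mc + math.log10(n_events) / b_value

for n in (100, 1000, 10000):
    print(n, round(expected_max_magnitude(n), 2))
# Tenfold more events raises the expected maximum by 1/b magnitude units,
# which is why the observed seismicity rate drives this estimate.
```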

  21. Changes in Cirrus Cloudiness and their Relationship to Contrails

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Ayers, J. Kirk; Palikonda, Rabindra; Doelling, David R.; Schumann, Ulrich; Gierens, Klaus

    2001-01-01

    Condensation trails, or contrails, formed in the wake of high-altitude aircraft have long been suspected of causing the formation of additional cirrus cloud cover. More cirrus is possible because 10-20% of the atmosphere at typical commercial flight altitudes is clear but ice-saturated. Since they can affect the radiation budget like natural cirrus clouds of equivalent optical depth and microphysical properties, contrail-generated cirrus clouds are another potential source of anthropogenic influence on climate. Initial estimates of contrail radiative forcing (CRF) were based on linear contrail coverage and optical depths derived from a limited number of satellite observations. Assuming that such estimates are accurate, they can be considered as the minimum possible CRF because contrails often develop into cirrus clouds unrecognizable as contrails. These anthropogenic cirrus are not likely to be identified as contrails from satellites and would, therefore, not contribute to estimates of contrail coverage. The mean lifetime and coverage of spreading contrails relative to linear contrails are needed to fully assess the climatic effect of contrails, but are difficult to measure directly. However, the maximum possible impact can be estimated using the relative trends in cirrus coverage over regions with and without air traffic. In this paper, the upper bound of CRF is derived by first computing the change in cirrus coverage over areas with heavy air traffic relative to that over the remainder of the globe assuming that the difference between the two trends is due solely to contrails. This difference is normalized to the corresponding linear contrail coverage for the same regions to obtain an average spreading factor. The maximum contrail-cirrus coverage, estimated as the product of the spreading factor and the linear contrail coverage, is then used in the radiative model to estimate the maximum potential CRF for current air traffic.

  22. A semi-empirical model for the estimation of maximum horizontal displacement due to liquefaction-induced lateral spreading

    USGS Publications Warehouse

    Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer

    2006-01-01

    During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the inherent uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resultant semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.

  23. Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.

    PubMed

    Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S

    1996-03-01

    An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGer. The same points of maximum spike and voltage distributions at the 29 electrodes were mapped by computerized spline interpolation and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with similar maxima and distributions to the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.
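
    A hedged sketch of the computerized approach: interpolate electrode voltages onto a 2-D grid and locate the interpolated maximum. The electrode positions and voltages below are fabricated, and FOCUS's own spline algorithm may differ from SciPy's griddata.

```python
# Spline-style voltage map: interpolate spike-peak voltages measured at
# discrete scalp electrodes onto a regular 2-D grid, then find the peak.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
electrode_xy = rng.uniform(-1, 1, size=(29, 2))      # flattened scalp coords
voltages = np.exp(-np.sum(electrode_xy**2, axis=1))  # synthetic spike field

xi = np.linspace(-1, 1, 101)
grid_x, grid_y = np.meshgrid(xi, xi)
vmap = griddata(electrode_xy, voltages, (grid_x, grid_y), method='cubic')

peak = np.nanargmax(vmap)  # NaNs occur outside the electrode convex hull
print("interpolated maximum:", np.round(np.nanmax(vmap), 3),
      "at grid index", np.unravel_index(peak, vmap.shape))
```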

  24. Estimation of organic carbon loss potential in north of Iran

    NASA Astrophysics Data System (ADS)

    Shahriari, A.; Khormali, F.; Kehl, M.; Welp, G.; Scholz, Ch.

    2009-04-01

    The development of sustainable agricultural systems requires techniques that accurately monitor changes in the amount, nature and breakdown rate of soil organic matter and can compare the rate of breakdown of different plant or animal residues under different management systems. In this research, the study area includes the southern alluvial and piedmont plains of the Gorgan River, extending from east to west in Golestan province, Iran. Samples from 10 soil series were collected from the cultivation depth (0-30 cm). Permanganate-oxidizable carbon (POC), an index of soil labile carbon, was used to indicate the potential loss of soil organic carbon; this index shows the maximum loss of OC in a given soil. The maximum loss of OC for each soil series was estimated from POC and bulk density (BD). The potential loss of OC was estimated at between 1,253,263 and 2,410,813 g/ha of carbon. Stable organic constituents in the soil include humic substances and other organic macromolecules that are intrinsically resistant against microbial attack, or that are physically protected by adsorption on mineral surfaces or entrapment within clay and mineral aggregates. However, the (Clay + Silt)/OC ratio had a significant negative (p < 0.001) correlation with POC content, confirming the preserving effect of fine particles.
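
    The scaling from a per-mass POC measurement to a per-hectare potential loss is a unit conversion, sketched below with invented POC and bulk-density values; only the 0-30 cm depth comes from the abstract.

```python
# Unit-conversion sketch: labile carbon per kg of soil, times the soil
# mass in the sampled layer per hectare, gives a potential loss in g/ha.
def potential_oc_loss_g_per_ha(poc_g_per_kg, bulk_density_mg_m3, depth_m=0.3):
    """POC (g C / kg soil) * soil mass per hectare -> g C / ha."""
    soil_mass_kg_per_ha = bulk_density_mg_m3 * 1000.0 * depth_m * 10000.0
    return poc_g_per_kg * soil_mass_kg_per_ha

# e.g. 0.35 g/kg labile C and BD 1.4 Mg/m3 (both assumed) for the 0-30 cm
# layer land in the abstract's 1.25-2.41 million g/ha range.
print(f"{potential_oc_loss_g_per_ha(0.35, 1.4):.0f} g C/ha")
```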

  25. Comparison of polar cap potential drops estimated from solar wind and ground magnetometer data - CDAW 6

    NASA Technical Reports Server (NTRS)

    Reiff, P. H.; Spiro, R. W.; Wolf, R. A.; Kamide, Y.; King, J. H.

    1985-01-01

    It is pointed out that the maximum electrostatic potential difference across the polar cap, Phi, is a fundamental measure of the coupling between the solar wind and the earth's magnetosphere/ionosphere system. During the Coordinated Data Analysis Workshop (CDAW) 6 intervals, no suitably instrumented spacecraft was in an appropriate orbit to determine the polar-cap potential drop directly. However, two recently developed independent techniques make it possible to estimate the polar-cap potential drop for times when direct spacecraft data are not available. The present investigation is concerned with a comparison of cross-polar-cap potential drop estimates calculated for the two CDAW 6 intervals on the basis of these two techniques. In the case of one interval, the agreement between the potential drops and Joule heating rates is relatively good. In the second interval, however, the agreement is not very good. Explanations for this discrepancy are discussed.

  26. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimate. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology, with precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
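
    The moisture-maximization step at the core of the methodology can be shown schematically: each large snowfall event is scaled by the ratio of maximum to observed precipitable water. The inputs and the ratio cap below are assumptions, not the paper's calibrated values.

```python
# Schematic moisture maximization: scale each large snowfall event by the
# ratio of the (monthly) maximum precipitable water to the precipitable
# water observed during the event.
def maximized_snowfall(event_swe_mm, event_pw_mm, max_pw_mm, ratio_cap=2.0):
    """Scale event snow water equivalent by the precipitable-water ratio.

    A cap on the ratio is common in PMP/PMSA practice; the value 2.0 here
    is an assumption, not the paper's choice.
    """
    ratio = min(max_pw_mm / event_pw_mm, ratio_cap)
    return event_swe_mm * ratio

print(maximized_snowfall(55.0, 12.0, 21.0))  # 55 mm event scaled by 1.75
```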

  27. Comparing different policy scenarios to reduce the consumption of ultra-processed foods in UK: impact on cardiovascular disease mortality using a modelling approach.

    PubMed

    Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin

    2015-01-01

    The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. Using the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods. Group 2 comprises processed culinary ingredients. Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed or ultra-processed) foods could result in approximately 22,055 fewer CVD related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Group 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, comprising almost 13% mortality reduction. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system. It highlights the crucial importance of implementing healthier UK food policies.
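
    The probabilistic sensitivity analysis can be sketched by propagating the reported range with Monte Carlo draws; treating the min/max estimates as a 95% interval of a normal distribution is an assumption made here purely for illustration.

```python
# Schematic Monte Carlo propagation of the halving-Group-3 scenario.
import numpy as np

rng = np.random.default_rng(42)
# Central estimate with min/max taken from the abstract.
central, lo, hi = 22055.0, 10705.0, 34625.0
sd = (hi - lo) / (2 * 1.96)          # treat min/max as a ~95% interval
draws = rng.normal(central, sd, size=100_000)
print("median deaths averted: %.0f" % np.median(draws))
print("95%% interval: %.0f - %.0f" % tuple(np.percentile(draws, [2.5, 97.5])))
```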

  28. Interaction between air pollution dispersion and residential heating demands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipfert, F.W.; Moskowitz, P.D.; Dungan, J.

    The effect of the short-term correlation between a specific emission (sulfur dioxide from residential space heating) and air pollution dispersion rates on the accuracy of seasonal or annual model estimates of urban air pollution is analyzed. Hourly climatological and residential emission estimates for six U.S. cities and a simplified area-source dispersion model based on a circular receptor grid are used. The effect on annual average concentration estimations is found to be slight (approximately ±12 percent), while the maximum hourly concentrations are shown to vary considerably more, since maximum heat demand and worst-case dispersion are not coincident. Accounting for the correlations between heating demand and dispersion makes possible a differentiation in air pollution potential between coastal and interior cities.

  29. Feasibility and potential effects of the proposed Amargosa Creek Recharge Project, Palmdale, California

    USGS Publications Warehouse

    Christensen, Allen H.; Siade, Adam J.; Martin, Peter; Langenheim, V.E.; Catchings, Rufus D.; Burgess, Matthew K.

    2015-09-17

    The hydraulic conductivities of faults were estimated on the basis of water-level data and an estimate of natural recharge along Amargosa Creek. With assumed horizontal hydraulic conductivities of 10 and 100 feet per day in the upper 150 feet, the simulated maximum artificial recharge rates to the regional flow system at the ACRP were 3,400 and 9,400 acre-feet per year, respectively. These maximum recharge rates were limited primarily by the horizontal hydraulic conductivity in the upper 150 feet and by the liquefaction constraint. Future monitoring of water-level and soil-water content changes during the proposed project would allow improved estimation of aquifer hydraulic properties, the effect of the faults on groundwater movement, and the overall recharge capacity of the ACRP.

  30. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  31. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
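
    A minimal version of the log-Pearson III fit described above, shown on a fabricated annual-maximum series; a real analysis would follow Bulletin 17-style procedures for skew weighting and outlier handling.

```python
# Fit log-Pearson III to an annual-maximum flow series and estimate the
# 100-year flow as the 99th percentile of the fitted distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
annual_max = rng.lognormal(mean=6.0, sigma=0.5, size=40)  # fabricated flows

logq = np.log10(annual_max)
skew = stats.skew(logq, bias=False)

# Pearson III fitted in log space; ppf(0.99) is the 100-yr quantile.
q100 = 10 ** stats.pearson3.ppf(0.99, skew,
                                loc=logq.mean(), scale=logq.std(ddof=1))
print("estimated 100-yr flow: %.0f" % q100)
# A short record makes `skew` and `scale` noisy, which is exactly the
# record-length sensitivity the abstract discusses.
```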

  32. Two methods for estimating limits to large-scale wind power generation

    PubMed Central

    Miller, Lee M.; Brunsell, Nathaniel A.; Mechem, David B.; Gans, Fabian; Monaghan, Andrew J.; Vautard, Robert; Keith, David W.; Kleidon, Axel

    2015-01-01

    Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10⁵ km² region in the central United States. The WRF simulations yield a maximum generation of 1.1 We·m⁻², whereas the VKE method predicts the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree better, to within 20%, because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We·m⁻², with VKE capturing this combination in a comparatively simple way. PMID:26305925
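
    The VKE bookkeeping itself is a one-line cap: extractable power is at most 26% of the preturbine downward kinetic energy flux. The flux values below are invented stand-ins for the climatology used in the paper.

```python
# The VKE limit in schematic form.
def vke_limit(downward_ke_flux_w_m2, fraction=0.26):
    """Maximum generation rate (We/m2) from the preturbine KE flux."""
    return fraction * downward_ke_flux_w_m2

for flux in (2.0, 4.0, 8.0):   # W/m2, assumed climatological values
    print(flux, "->", round(vke_limit(flux), 2), "We/m2")
```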

  33. Radiation Exposure and Attributable Cancer Risk in Patients With Esophageal Atresia.

    PubMed

    Yousef, Yasmine; Baird, Robert

    2018-02-01

    Cases of esophageal carcinoma have been documented in survivors of esophageal atresia (EA). Children with EA undergo considerable amounts of diagnostic imaging and consequent radiation exposure, potentially increasing their lifetime cancer mortality risk. This study evaluates the radiological procedures performed on patients with EA and estimates their cumulative radiation exposure and attributable lifetime cancer mortality risk. Medical records of patients with EA managed at a tertiary care center were reviewed for demographics, EA subtype, and number and type of radiological investigations. Existing normative data were used to estimate the cumulative radiation exposure and lifetime cancer risk per patient. The present study included 53 patients with a mean follow-up of 5.7 years. The overall median and maximum estimated effective radiation dose in the neonatal period was 5521.4 μSv/patient and 66638.6 μSv/patient, respectively. This corresponds to a median and maximum estimated cumulative lifetime cancer mortality risk of 1:1530 and 1:130, respectively. Hence, radiation exposure in the neonatal period increased the cumulative cancer mortality risk a median of 130-fold and a maximum of 1575-fold in EA survivors. Children with EA are exposed to significant amounts of radiation and an increased estimated cumulative cancer mortality risk. Efforts should be made to eliminate superfluous imaging.

  34. Predicting variability of aquatic concentrations of human pharmaceuticals

    EPA Science Inventory

    Potential exposure to active pharmaceutical ingredients (APIs) in the aquatic environment is a subject of ongoing concern. We recently estimated maximum likely potency-normalized exposure rates at the national level for several hundred commonly used human prescription pharmaceut...

  35. Chair rise transfer detection and analysis using a pendant sensor: an algorithm for fall risk assessment in older people.

    PubMed

    Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren

    2014-01-01

    Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and injuries due to falls. Simple field tests, such as repeated chair rise, are used in clinical assessment of fall risks in older people. Development of on-body sensors introduces potentially beneficial alternatives to traditional clinical methods. In this article, we present a pendant-sensor-based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and the precision of the transfer detection were 85% and 87% in the standard protocol, and 61% and 89% in daily life activities. Estimation errors of chair-rise performance indicators (duration, maximum acceleration, peak power, and maximum jerk) were tested in over 800 transfers. Median estimation error in transfer peak power ranged from 1.9% to 4.6% in various tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error of 0% and duration had the highest median estimation error of 24% over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.
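
    The named performance indicators can be sketched directly from an accelerometer trace. The signal, sampling rate, and body mass below are invented, and the published algorithm's filtering and segmentation steps are omitted.

```python
# Chair-rise indicators from a (synthetic) pendant accelerometer segment.
import numpy as np

fs = 50.0                      # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)  # a 2 s sit-to-stand segment
acc = 1.5 * np.exp(-((t - 0.8) ** 2) / 0.05)  # synthetic vertical accel, m/s2
mass = 75.0                    # kg, assumed body mass

duration = t[-1] - t[0]                     # s (here the fixed segment length)
max_acc = acc.max()                         # m/s2
velocity = np.cumsum(acc) / fs              # crude numerical integration, m/s
peak_power = (mass * acc * velocity).max()  # W, P = m * a * v
max_jerk = np.abs(np.gradient(acc, 1 / fs)).max()  # m/s3

print(f"duration {duration:.1f} s, max acc {max_acc:.2f} m/s2, "
      f"peak power {peak_power:.0f} W, max jerk {max_jerk:.1f} m/s3")
```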

  36. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulation and verification with natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
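
    For reference, the Terzaghi correction in its usual form weights each scanline-sampled fracture by 1/sin(alpha), where alpha is the angle between the scanline and the fracture plane, with a cap near the blind zone. The 10-degree cutoff below is a common convention, not necessarily the paper's value.

```python
# Terzaghi weighting for scanline-sampled fracture orientations.
import numpy as np

def terzaghi_weights(alpha_deg, blind_zone_deg=10.0):
    """Observation weights 1/sin(alpha), capped inside the blind zone."""
    alpha = np.radians(np.asarray(alpha_deg, dtype=float))
    w = 1.0 / np.sin(alpha)
    # Cap weights near-parallel to the scanline to avoid unbounded correction.
    w_cap = 1.0 / np.sin(np.radians(blind_zone_deg))
    return np.minimum(w, w_cap)

print(terzaghi_weights([90, 45, 20, 5]).round(2))  # [1.   1.41 2.92 5.76]
```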

  17. Ballistocardiogram as Proximal Timing Reference for Pulse Transit Time Measurement: Potential for Cuffless Blood Pressure Monitoring

    PubMed Central

    Kim, Chang-Sei; Carek, Andrew M.; Mukkamala, Ramakrishna; Inan, Omer T.; Hahn, Jin-Oh

    2015-01-01

    Goal: We tested the hypothesis that the ballistocardiogram (BCG) waveform could yield a viable proximal timing reference for measuring pulse transit time (PTT). Methods: From fifteen healthy volunteers, we measured PTT as the time interval between the BCG and a non-invasively measured finger blood pressure (BP) waveform. To evaluate the efficacy of BCG-based PTT in estimating BP, we likewise measured pulse arrival time (PAT) using the electrocardiogram (ECG) as the proximal timing reference and compared their correlations to BP. Results: BCG-based PTT was correlated with BP reasonably well: the mean correlation coefficient (r) was 0.62 for diastolic (DP), 0.65 for mean (MP), and 0.66 for systolic (SP) pressures when the intersecting tangent method was used as the distal timing reference. Comparing four distal timing references (intersecting tangent, maximum second derivative, diastolic minimum, and systolic maximum), PTT exhibited the best correlation with BP when the systolic maximum method was used (mean r value of 0.66 for DP, 0.67 for MP, and 0.70 for SP). PTT was more strongly correlated with DP than PAT regardless of the distal timing reference: mean r value was 0.62 versus 0.51 (p=0.07) for intersecting tangent, 0.54 versus 0.49 (p=0.17) for maximum second derivative, 0.58 versus 0.52 (p=0.37) for diastolic minimum, and 0.66 versus 0.60 (p=0.10) for systolic maximum. The difference between PTT and PAT in estimating DP was significant (p=0.01) when the r values associated with all the distal timing references were compared together. However, PAT appeared to outperform PTT in estimating SP (p=0.31 under the same comparison). Conclusion: We conclude that the BCG is an adequate proximal timing reference for deriving PTT, and that BCG-based PTT may be superior to ECG-based PAT in estimating DP. Significance: PTT with the BCG as proximal timing reference has the potential to enable convenient and ubiquitous cuffless BP monitoring. PMID:26054058
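    Of the four distal timing references, the intersecting tangent method is the most algorithmically involved: the tangent at the point of maximum upstroke slope is intersected with the horizontal line through the preceding diastolic minimum. A minimal per-beat sketch, assuming a clean single-beat pressure segment (array handling is illustrative):

    ```python
    import numpy as np

    def intersecting_tangent_foot(t, p):
        """Distal timing by the intersecting-tangent method for one beat.
        t : sample times (s); p : pressure samples for a single beat that
        starts before the systolic upstroke.
        """
        i_min = np.argmin(p)                    # diastolic minimum
        dp = np.gradient(p, t)
        i_up = i_min + np.argmax(dp[i_min:])    # max slope on the upstroke
        slope = dp[i_up]
        # tangent: p(t) = p[i_up] + slope*(t - t[i_up]); set p(t) = p[i_min]
        return t[i_up] + (p[i_min] - p[i_up]) / slope

    # PTT for the beat would then be foot_time minus the BCG fiducial time
    # (e.g., the J-wave peak) used as the proximal reference.
    ```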

  18. MAXIMIZE THE EFFICIENCY OF PUMP AND TREAT SYSTEMS

    EPA Science Inventory

    This paper focuses on methodology for determining the extent of hydraulic control and the remediation effectiveness of site-specific pump and treat systems. Maximum potential well yield is estimated on the basis of hydraulic characteristics described by the Cooper and Jacob equation. A ma...
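    The Cooper and Jacob equation approximates the Theis drawdown at late time; inverting it for the pumping rate that produces a given allowable drawdown is one way to bound maximum potential well yield. A sketch under that reading (the abstract is truncated above, so the paper's exact procedure may differ; parameter values are illustrative):

    ```python
    import numpy as np

    def cooper_jacob_drawdown(Q, T, S, r, t):
        """Cooper-Jacob approximation of the Theis drawdown (valid for small
        r^2 S / (4 T t)). Q [m^3/d], T [m^2/d], S [-], r [m], t [d]."""
        return (2.3 * Q / (4.0 * np.pi * T)) * np.log10(2.25 * T * t / (r**2 * S))

    def max_potential_yield(s_allow, T, S, r, t):
        """Pumping rate producing the allowable drawdown s_allow at the well
        radius r after time t -- one way to bound 'maximum potential well
        yield' from the aquifer's hydraulic characteristics."""
        return 4.0 * np.pi * T * s_allow / (2.3 * np.log10(2.25 * T * t / (r**2 * S)))

    # Hypothetical aquifer: T = 120 m^2/d, S = 2e-4, well radius 0.15 m, 30 d
    print(max_potential_yield(s_allow=5.0, T=120.0, S=2e-4, r=0.15, t=30.0))
    ```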

  19. Potential-scour assessments and estimates of maximum scour at selected bridges in Iowa

    USGS Publications Warehouse

    Fischer, E.E.

    1995-01-01

    Although the abutment-scour equation predicted deep scour holes at many of the sites, the only significant abutment scour that was measured was erosion of the embankment at the left abutment at one bridge after a flood.

  20. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
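    For a model that is linear in its parameters, the flavor of such a correction can be sketched as a sandwich covariance: the white-residual formula sigma^2 (X'X)^-1 is replaced by one built from the estimated autocovariance of the residuals. This illustrates the idea rather than the paper's exact expression:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def colored_residual_covariance(X, y, max_lag=50):
        """Parameter covariance for a least-squares fit when residuals are
        colored: Cov = (X'X)^-1 X' R X (X'X)^-1, with R a banded Toeplitz
        matrix built from the sample autocovariance of the residuals."""
        n, _ = X.shape
        theta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ theta
        lags = min(max_lag, n - 1)
        acov = np.array([resid[: n - k] @ resid[k:] / n for k in range(lags + 1)])
        col = np.zeros(n)
        col[: lags + 1] = acov
        R = toeplitz(col)                       # residual covariance estimate
        XtX_inv = np.linalg.inv(X.T @ X)
        return theta, XtX_inv @ X.T @ R @ X @ XtX_inv
    ```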

  1. Developability assessment of clinical drug products with maximum absorbable doses.

    PubMed

    Ding, Xuan; Rose, John P; Van Gelder, Jan

    2012-05-10

    Maximum absorbable dose refers to the maximum amount of an orally administered drug that can be absorbed in the gastrointestinal tract. Maximum absorbable dose, or D(abs), has proved to be an important parameter for quantifying the absorption potential of drug candidates. The purpose of this work is to validate the use of D(abs) in a developability assessment context, and to establish an appropriate protocol and interpretation criteria for this application. Three methods for calculating D(abs) were compared by assessing how well each predicted the absorption limit for a set of real clinical candidates. D(abs) was calculated for these clinical candidates by means of a simple equation and two computer simulation programs, GastroPlus and a program developed at Eli Lilly and Company. Results from single dose escalation studies in Phase I clinical trials were analyzed to identify the maximum absorbable doses for these compounds. Compared to the clinical results, the equation and both simulation programs provide conservative estimates of D(abs), but in general the D(abs) values from the computer simulations are more accurate, suggesting an advantage for the simulations in developability assessment. Computer simulations also revealed the complex behavior associated with absorption saturation and suggested that in most cases the D(abs) limit is not likely to be reached in a typical clinical dose range. On the basis of the validation findings, an approach is proposed for assessing absorption potential, and best practices are discussed for the use of D(abs) estimates to inform clinical formulation development strategies. Copyright © 2012 Elsevier B.V. All rights reserved.
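    The "simple equation" benchmarked here is commonly written as the product of solubility, absorption rate constant, small-intestinal water volume, and transit time. A sketch using the generic literature form and default physiological constants (the paper's exact equation and inputs may differ):

    ```python
    def maximum_absorbable_dose(S, ka, siwv_ml=250.0, sitt_min=270.0):
        """Commonly cited single-equation estimate of maximum absorbable
        dose: D_abs = S * ka * SIWV * SITT, with solubility S in mg/mL,
        absorption rate constant ka in 1/min, small-intestinal water
        volume SIWV in mL, and small-intestinal transit time SITT in min.
        """
        return S * ka * siwv_ml * sitt_min   # mg

    # e.g., S = 0.05 mg/mL, ka = 0.03 min^-1  ->  ~101 mg
    print(maximum_absorbable_dose(0.05, 0.03))
    ```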

  2. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India.

    PubMed

    Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R

    2012-11-01

    The 180 ship recycling yards located on the Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest ship dismantling cluster. About 350 ships are dismantled there each year (averaging 10,000 tons of steel per ship), involving about 60,000 workers. Cutting and scrapping of plates and scraping of painted metal surfaces are the most commonly performed operations during ship breaking. The pollutants released from a typical plate-cutting operation can affect workers directly by contaminating the breathing zone (air pollution) or can add to the pollution load of the intertidal zone and contaminate sediments when pollutants emitted in the secondary working zone are subjected to tidal forces. The mathematical modeling exercise performed in this study had a two-pronged purpose: first, to estimate the zone of influence up to which the effect of the plume would extend; second, to estimate the cumulative maximum concentration of heavy metals that can occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm(3) and 428 μg/Nm(3) (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could lie between 8 and 30 μg/Nm(3). These estimates are much higher than the Indian National Ambient Air Quality Standard (NAAQS) for Pb (0.5 μg/Nm(3)). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships and of model procedures and corresponding health, safety, and environment provisions. The insights obtained from this research are also being used to develop appropriate technologies for minimizing worker exposure and the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India. Copyright © 2012 Elsevier B.V. All rights reserved.
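    The yard-scale dispersion arithmetic behind such predictions can be sketched with a textbook Gaussian plume for a ground-level source, which also reproduces the qualitative wind-speed dependence quoted above (lower wind, higher concentration). The dispersion coefficients and source strength below are placeholders, not the study's values:

    ```python
    import numpy as np

    def plume_centerline(Q_g_per_s, u_m_per_s, x_m, a=0.22, b=0.20):
        """Ground-level centerline concentration downwind of a ground-level
        point source (Gaussian plume): C = Q / (pi * u * sigma_y * sigma_z).
        sigma_y and sigma_z are modeled as simple power laws of downwind
        distance; the coefficients are illustrative, not stability-class
        values from the study."""
        sigma_y = a * x_m**0.9
        sigma_z = b * x_m**0.85
        c = Q_g_per_s / (np.pi * u_m_per_s * sigma_y * sigma_z)  # g/m^3
        return c * 1e6                                           # ug/m^3

    # Lower wind speed -> higher near-ground concentration, as in the study
    for u in (1.0, 4.0):
        print(u, plume_centerline(Q_g_per_s=0.01, u_m_per_s=u, x_m=50.0))
    ```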

  3. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.
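    Ordinary kriging, the univariate baseline against which the cokriging results are compared, reduces to solving a small linear system in semivariogram values. A compact sketch (the spherical variogram and its parameters are placeholders, not the model fitted in the study):

    ```python
    import numpy as np

    def ordinary_kriging(xy, z, xy0, sill=1.0, rng=30.0, nugget=0.0):
        """Ordinary-kriging estimate and kriging variance at xy0 from
        observations (xy, z) using a spherical variogram."""
        def gamma(h):
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
            return np.where(h >= rng, sill, g)

        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[-1, -1] = 0.0
        b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
        w = np.linalg.solve(A, b)          # weights plus Lagrange multiplier
        return w[:n] @ z, w @ b            # estimate, kriging variance

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    z = np.array([140.0, 160.0, 150.0])    # hypothetical AAP values (mm)
    print(ordinary_kriging(xy, z, np.array([0.4, 0.4])))
    ```

    Cokriging extends the same system with cross-variograms between AAP and the secondary variable (here, elevation), which is what lets the estimate follow orographic structure.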

  4. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice, and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island, and the whole of New Zealand, and to estimate the average relative environmental suitability per farm at each of those scales. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability of T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island, and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas. Copyright © 2016 Elsevier B.V. All rights reserved.
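    Stripped of its feature classes and regularization, the Maxent machinery fits an exponential-family distribution over grid cells whose expected features match those of the presence sites. A bare-bones gradient-ascent sketch of that core (not the Maxent program itself):

    ```python
    import numpy as np

    def fit_maxent(F, presence_idx, lr=0.1, steps=2000):
        """Minimal maximum-entropy habitat model.
        F : (n_cells, n_features) standardized environmental covariates
        presence_idx : indices of cells with recorded presences
        Returns per-cell relative suitability p with p_i ∝ exp(lambda . f_i).
        """
        lam = np.zeros(F.shape[1])
        target = F[presence_idx].mean(axis=0)   # empirical feature means
        for _ in range(steps):
            logits = F @ lam
            p = np.exp(logits - logits.max())
            p /= p.sum()
            lam += lr * (target - p @ F)        # match model expectations
        return p
    ```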

  5. Estimating the potential refolding yield of recombinant proteins expressed as inclusion bodies.

    PubMed

    Ho, Jason G S; Middelberg, Anton P J

    2004-09-05

    Recombinant protein production in bacteria is efficient except that insoluble inclusion bodies form when some gene sequences are expressed. Such proteins must undergo renaturation, which is an inefficient process due to protein aggregation on dilution from concentrated denaturant. In this study, the protein-protein interactions of eight distinct inclusion-body proteins are quantified, in different solution conditions, by measurement of protein second virial coefficients (SVCs). Protein solubility is shown to decrease as the SVC is reduced (i.e., as protein interactions become more attractive). Plots of SVC versus denaturant concentration demonstrate two clear groupings of proteins: a more aggregative group and a group having higher SVC and better solubility. A correlation of the measured SVC with protein molecular weight and hydropathicity, that is able to predict which group each of the eight proteins falls into, is presented. The inclusion of additives known to inhibit aggregation during renaturation improves solubility and increases the SVC of both protein groups. Furthermore, an estimate of maximum refolding yield (or solubility) using high-performance liquid chromatography was obtained for each protein tested, under different environmental conditions, enabling a relationship between "yield" and SVC to be demonstrated. Combined, the results enable an approximate estimation of the maximum refolding yield that is attainable for each of the eight proteins examined, under a selected chemical environment. Although the correlations must be tested with a far larger set of protein sequences, this work represents a significant move beyond empirical approaches for optimizing renaturation conditions. The approach moves toward the ideal of predicting maximum refolding yield using simple bioinformatic metrics that can be estimated from the gene sequence. Such a capability could potentially "screen," in silico, those sequences suitable for expression in bacteria from those that must be expressed in more complex hosts.
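    The hydropathicity half of such a correlation can be computed directly from sequence; the grand average of hydropathicity (GRAVY) over the Kyte-Doolittle scale is one standard choice, used here purely for illustration (the paper's exact descriptor may differ):

    ```python
    # Kyte-Doolittle hydropathy values per residue
    KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
          'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
          'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
          'Y': -1.3, 'V': 4.2}

    def gravy(seq):
        """Grand average of hydropathicity: mean Kyte-Doolittle value over
        the sequence -- a simple metric that, together with molecular
        weight, could feed a correlation of the kind reported."""
        return sum(KD[aa] for aa in seq) / len(seq)

    print(gravy("MKWVTFISLLFLFSSAYS"))  # example signal-peptide sequence
    ```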

  6. Libya, Algeria and Egypt: crude oil potential from known deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietzman, W.D.; Rafidi, N.R.; Ross, T.A.

    1982-04-01

    An analysis is presented of the discovered crude oil resources, reserves, and estimated annual production from known fields of the Republics of Libya, Algeria, and Egypt. Proved reserves are defined as the remaining producible oil as of a specified date under operating practice in effect at that time and include estimated recoverable oil in undrilled portions of a given structure or structures. Also included in the proved reserve category are the estimated indicated additional volumes of recoverable oil from the entire oil reservoir where fluid injection programs have been started in a portion, or portions, of the reservoir. The indicated additional reserves (probable reserves) reported herein are the volumes of crude oil that might be obtained with the installation of secondary recovery or pressure maintenance operations in reservoirs where none have been previously installed. The sum of cumulative production, proved reserves, and probable reserves is defined as the ultimate oil recovery from known deposits; and resources are defined as the original oil in place (OOIP). An assessment was made of the availability of crude oil under three assumed sustained production rates for each country; an assessment was also made of each country's capability of sustaining production at, or near, the 1980 rates assuming different limiting reserve to production ratios. Also included is an estimate of the potential maximum producing capability from known deposits that might be obtained from known accumulations under certain assumptions, using a simple time series approach. The theoretical maximum oil production capability from known fields at any time is the maximum deliverability rate assuming there are no equipment, investment, market, or political constraints.

  7. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation attain the smallest asymptotic variance among competing statistical methods as the sample size increases. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.
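    As a concrete instance of the approach, a two-component Gaussian mixture can be fitted by maximum likelihood (via the EM algorithm) in a few lines; the data below are synthetic stand-ins for the price/exchange-rate series:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical paired observations: rubber price and exchange rate.
    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.multivariate_normal([2.5, 3.1], [[0.2, -0.1], [-0.1, 0.3]], 80),
        rng.multivariate_normal([1.4, 4.0], [[0.1, -0.05], [-0.05, 0.1]], 80),
    ])

    # Two-component mixture fitted by maximum likelihood (EM).
    gm = GaussianMixture(n_components=2, covariance_type="full",
                         random_state=0).fit(X)
    print(gm.means_)          # component means
    print(gm.covariances_)    # negative off-diagonals reflect an inverse
                              # price/exchange-rate relationship
    ```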

  9. Estimating the effects of potential climate and land use changes on hydrologic processes of a large agriculture dominated watershed

    NASA Astrophysics Data System (ADS)

    Neupane, Ram P.; Kumar, Sandeep

    2015-10-01

    Land use and climate are two major components that directly influence catchment hydrologic processes, and therefore a better understanding of their effects is crucial for future land use planning and water resources management. We applied the Soil and Water Assessment Tool (SWAT) to assess the effects of potential land use change and climate variability on hydrologic processes of the large, agriculture-dominated Big Sioux River (BSR) watershed in the North Central region of the USA. Future climate change scenarios were simulated using the average temperature and precipitation output derived from the Special Report on Emission Scenarios (SRES) scenarios B1, A1B, and A2 for the end of the 21st century. Land use change was modeled spatially based on the historic long-term pattern of agricultural transformation in the basin, and included the expansion of corn (Zea mays L.) cultivation by 2, 5, and 10%. We estimated higher surface runoff in all land use scenarios, with a maximum increase of 4% when corn cultivation expanded by 10%. Annual stream discharge was also estimated to be higher, with a maximum increase of 72% in SRES-B1, attributable to a 152% increase in groundwater contribution in the same scenario. Spring precipitation increased but summer precipitation decreased substantially in all climate change scenarios. In line with the decreased summer precipitation, discharge of the BSR also decreased, potentially affecting agricultural production through reduced water availability during the crop growing season. However, the combined effects of potential land use change and climate variability produced higher annual discharge of the BSR. These estimates can therefore inform future land use planning and water resources management in the basin.

  10. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
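    The simplest of the listed examples, estimating a temperature, can be sketched end-to-end when the system is small enough to enumerate, so the partition function in the posterior is exact rather than estimated. This toy grid posterior stands in for the article's sequential Monte Carlo machinery:

    ```python
    import numpy as np
    from itertools import product

    # A 2x2 periodic Ising system: 16 states, so Z(beta) is exact.
    def energy(s):
        g = np.array(s).reshape(2, 2)
        return -np.sum(g * np.roll(g, 1, 0)) - np.sum(g * np.roll(g, 1, 1))

    E = np.array([energy(s) for s in product([-1, 1], repeat=4)])

    betas = np.linspace(0.01, 2.0, 200)
    logZ = np.array([np.log(np.exp(-b * E).sum()) for b in betas])

    # Posterior over inverse temperature given n_obs samples with observed
    # mean energy E_obs (flat prior): log p ∝ -beta*n*E_obs - n*log Z(beta).
    E_obs, n_obs = -4.0, 10
    log_post = -betas * n_obs * E_obs - n_obs * logZ
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    print(betas[post.argmax()])   # MAP inverse temperature
    ```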

  11. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  12. Testing the Bouchet-Morton Complementary Hypothesis at Harvard Forest using Sap Flux Data

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.; Phillips, N. G.; Daley, M. J.

    2005-12-01

    The Bouchet-Morton Complementary Relationship (CR) states that at a given surface moisture availability (MA), changes in actual evapotranspiration (ETa) are reflected in changes in potential evapotranspiration (ETp) such that ETa + ETp = 2ET0, where ET0 is an assumed equilibrium evaporation condition at which ETa = ETp = ET0 at maximum MA. Whereas ETp conceptually includes a potential transpiration component, existing CR model estimates of ETp are based upon the Penman combination equation for open water evaporation (ETp,Pen). Recent CR investigations for a temperate grassland at FIFE suggest, however, that the convergence between ETa and ETp,Pen will only occur if a maximum canopy conductance is included in the estimation of ETp. The purpose of this study was to conduct a field investigation at Harvard Forest to test the hypothesis that a CR-type relationship should occur between red maple (Acer rubrum L.) actual transpiration and red maple potential transpiration, i.e., transpiration given unlimited root-zone MA via localized irrigation. Just as pan evaporation (ETp,Pen) is a physical gauge of ETp, we therefore question whether a well-irrigated maple is a potential transpirator. Daily averages of whole-tree transpiration for our co-occurring irrigated red maple network and reference network were calculated using high-frequency constant-heat sap flux sensor (i.e., Granier-type) measurements. Soil moisture, temperature and matric potential parameters were measured using Campbell Scientific sensors. Preliminary results suggest that the relationship between potential and actual transpiration differs significantly from that between ETa and ETp,Pen in the context of the CR, adding useful insight into both ETp estimation and the understanding of physiological response to MA variability.

  13. 40 CFR 63.1270 - Applicability and designation of affected source.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... value. For estimating maximum potential emissions from glycol dehydration units, the glycol circulation... existing glycol dehydration unit specified in paragraphs (b)(1) through (3) of this section. (1) Each large glycol dehydration unit; (2) Each small glycol dehydration unit for which construction commenced on or...

  14. 40 CFR 63.1270 - Applicability and designation of affected source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... value. For estimating maximum potential emissions from glycol dehydration units, the glycol circulation... existing glycol dehydration unit specified in paragraphs (b)(1) through (3) of this section. (1) Each large glycol dehydration unit; (2) Each small glycol dehydration unit for which construction commenced on or...

  15. Growth, productivity, and relative extinction risk of a data-sparse devil ray

    PubMed Central

    Pardo, Sebastián A.; Kindsvater, Holly K.; Cuevas-Zimbrón, Elizabeth; Sosa-Nishizaki, Oscar; Pérez-Jiménez, Juan Carlos; Dulvy, Nicholas K.

    2016-01-01

    Devil rays (Mobula spp.) face intensifying fishing pressure to meet the ongoing international demand for gill plates. The paucity of information on growth, mortality, and fishing effort for devil rays makes quantifying population growth rates and extinction risk challenging. Furthermore, unlike manta rays (Manta spp.), devil rays have not been listed on CITES. Here, we use a published size-at-age dataset for the Spinetail Devil Ray (Mobula japanica) to estimate somatic growth rates, age at maturity, maximum age, and natural and fishing mortality. We then estimate a plausible distribution of the maximum intrinsic population growth rate (rmax) and compare it to 95 other chondrichthyans. We find evidence that larger devil ray species have low somatic growth rates, low annual reproductive output, and low maximum population growth rates, suggesting they have low productivity. Fishing rates of a small-scale artisanal Mexican fishery were comparable to our estimate of rmax, and therefore probably unsustainable. Devil ray rmax is very similar to that of manta rays, indicating devil rays can potentially be driven to local extinction at low levels of fishing mortality and that a similar degree of protection for both groups is warranted. PMID:27658342
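    A common demographic route to rmax is to solve the Euler-Lotka equation numerically for r given survival and fecundity schedules. The sketch below uses knife-edge maturity and point values loosely in the devil-ray range; the authors instead propagate uncertainty through these inputs, so this is only the skeleton of the calculation:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def r_max(alpha, omega, M, b):
        """Solve the Euler-Lotka equation 1 = sum_x l_x * b * exp(-r x) for
        the intrinsic rate of increase, with maturity at age alpha, maximum
        age omega, natural mortality M (survival l_x = exp(-M x)), and
        annual female offspring b."""
        ages = np.arange(alpha, omega + 1)
        lx = np.exp(-M * ages)
        f = lambda r: np.sum(lx * b * np.exp(-r * ages)) - 1.0
        return brentq(f, -1.0, 5.0)

    # Illustrative life history, not the paper's fitted values
    print(r_max(alpha=6, omega=14, M=0.2, b=1.0))
    ```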

  16. 40 CFR 63.760 - Applicability and designation of affected source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... estimating maximum potential emissions from glycol dehydration units, the glycol circulation rate used in the...) Each glycol dehydration unit as specified in paragraphs (b)(1)(i)(A) through (C) of this section. (A) Each large glycol dehydration unit; (B) Each small glycol dehydration unit for which construction...

  17. 40 CFR 63.760 - Applicability and designation of affected source.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... estimating maximum potential emissions from glycol dehydration units, the glycol circulation rate used in the...) Each glycol dehydration unit as specified in paragraphs (b)(1)(i)(A) through (C) of this section. (A) Each large glycol dehydration unit; (B) Each small glycol dehydration unit for which construction...

  18. Estimated winter wheat yield from crop growth predicted by LANDSAT

    NASA Technical Reports Server (NTRS)

    Kanemasu, E. T.

    1977-01-01

    An evapotranspiration and growth model for winter wheat is reported. The inputs are daily solar radiation, maximum temperature, minimum temperature, precipitation/irrigation, and leaf area index (LAI). The meteorological data were obtained from the National Weather Service, while LAI was obtained from the LANDSAT multispectral scanner. The output provides daily estimates of potential evapotranspiration, transpiration, evaporation, soil moisture (50 cm depth), percentage depletion, net photosynthesis, and dry matter production. Winter wheat yields are correlated with transpiration and dry matter accumulation.

  19. The evaluation of maximum horizontal in-situ stress using the wellbore imagers data

    NASA Astrophysics Data System (ADS)

    Dubinya, N. V.; Ezhov, K. A.

    2016-12-01

    Well drilling provides a number of possibilities to improve knowledge of the stress state of the upper layers of the Earth's crust. The data obtained from drilling, well logging, core experiments, and special tests are used to evaluate the directions and magnitudes of the principal stresses. Although the vertical stress and the minimum horizontal stress can be estimated reasonably well, the maximum horizontal stress remains a major problem. In this study a new method to estimate this value is proposed. The suggested approach is based on the concept of hydraulically conductive and non-conductive fractures near a wellbore (Barton, Zoback and Moos, 1995). It was stated that all the fractures whose properties may be acquired from well logging data can be divided into two groups with respect to hydraulic conductivity. The fracture properties and the in-situ stress state are related via the Mohr diagram. This approach was later used by Ito and Zoback (2000) to estimate the magnitude of the maximum horizontal stress from temperature profiles. In the current study ultrasonic and resistivity borehole imaging are used to estimate the magnitude of the maximum horizontal stress in a fairly precise way. After proper interpretation one is able to obtain the orientation and hydraulic conductivity of each fracture appearing in the images. If proper profiles of the vertical and minimum horizontal stresses are known, all the fractures may be analyzed on the Mohr diagram. Altering the maximum horizontal stress profile grants an opportunity to adjust it so that the conductive fractures on the Mohr diagram fit the data from the imagers' interpretation. The precision of the suggested approach was evaluated for several oil production wells in Siberia with sound wellbore stability models. The difference between the maximum horizontal stress estimated with the suggested approach and the values obtained from drilling reports did not exceed 0.5 MPa. Thus the proposed approach may be used to evaluate the maximum horizontal stress using wellbore imager data. References: Barton, C.A., Zoback, M.D., Moos, D., Fluid flow along potentially active faults in crystalline rock, Geology, 1995. Ito, T., Zoback, M.D., Fracture permeability and in situ stress to 7 km depth in the KTB Scientific Drillhole, Geophysical Research Letters, 2000.
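    The conductive/non-conductive split rests on the critically stressed fracture criterion: a fracture is expected to be hydraulically conductive when the resolved shear stress exceeds the frictional resistance on the plane. A sketch of that test (stress values, pore pressure, and the 0.6 friction coefficient are illustrative):

    ```python
    import numpy as np

    def fracture_stress_state(normal, S, pore_pressure=0.0):
        """Resolve effective normal and shear stress on a fracture plane.
        S is the diagonal of the principal stress tensor [S1, S2, S3];
        'normal' is the fracture pole in the principal-stress frame."""
        n = np.asarray(normal, float); n /= np.linalg.norm(n)
        t = np.diag(S) @ n                            # traction on the plane
        sigma_n = n @ t                               # normal component
        tau = np.sqrt(max(t @ t - sigma_n**2, 0.0))   # shear component
        return sigma_n - pore_pressure, tau

    mu = 0.6                                          # Byerlee-type friction
    sn_eff, tau = fracture_stress_state([0.5, 0.0, 0.866],
                                        S=[60, 40, 25], pore_pressure=22)
    print(tau >= mu * sn_eff)                         # True -> likely conductive
    ```

    Sweeping the assumed maximum horizontal stress and repeating this test for every imaged fracture is the adjustment loop the abstract describes.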

  20. A Study on Grid-Square Statistics Based Estimation of Regional Electricity Demand and Regional Potential Capacity of Distributed Generators

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Sugimoto, Hiroyuki; Suzuoki, Yasuo

    We established a procedure for estimating regional electricity demand and the regional potential capacity of distributed generators (DGs) using a grid-square statistics data set. A photovoltaic power system (PV system) for residential use and a co-generation system (CGS) for both residential and commercial use were taken into account. As an example, results for Aichi prefecture are presented in this paper. Statistical data on the number of households by family type and the number of employees by business category for about 4000 grid squares of 1 km × 1 km were used to estimate the floor space and the electricity demand distribution. The rooftop area available for installing PV systems was also estimated with the grid-square statistics data set. Considering the relation between the capacity of an existing CGS and a scale index of the building where the CGS is installed, the potential capacity of CGS was estimated for three business categories: hotels, hospitals, and stores. In some regions, the potential capacity of PV systems was estimated to be about 10,000 kW/km2, which corresponds to the density of existing areas with intensive installation of PV systems. Finally, we discussed the ratio of the regional potential capacity of DGs to the regional maximum electricity demand for deducing the appropriate capacity of DGs in a model of a future electricity distribution system.

  1. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  2. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia, using a spatially explicit capture-recapture maximum likelihood framework. With a trap effort of 2496 nights, 17 individuals were identified, corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic, individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood framework. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  3. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.
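    The adaptive-filtering side of the proposal can be illustrated with the plain LMS update the dissertation evaluates; this sketch omits the radar-specific signal model and parameters:

    ```python
    import numpy as np

    def lms_filter(x, d, n_taps=8, mu=0.01):
        """Least-mean-square adaptive filter: predict the desired signal d
        from the reference x and return the error (clutter-suppressed)
        sequence along with the final tap weights."""
        w = np.zeros(n_taps)
        e = np.zeros(len(d))
        for n in range(n_taps, len(d)):
            u = x[n - n_taps:n][::-1]         # most recent samples first
            y = w @ u                         # filter output
            e[n] = d[n] - y                   # estimation error
            w += mu * e[n] * u                # stochastic-gradient update
        return e, w

    # Here d would be the radar channel containing clutter plus weather
    # return and x a clutter reference; e is the clutter-rejected output.
    ```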

  4. Vertical Jump Height Estimation Algorithm Based on Takeoff and Landing Identification Via Foot-Worn Inertial Sensing.

    PubMed

    Wang, Jianren; Xu, Junkai; Shull, Peter B

    2018-03-01

    Vertical jump height is widely used for assessing motor development, functional ability, and motor capacity. Traditional methods for estimating vertical jump height rely on force plates or optical marker-based motion capture systems limiting assessment to people with access to specialized laboratories. Current wearable designs need to be attached to the skin or strapped to an appendage which can potentially be uncomfortable and inconvenient to use. This paper presents a novel algorithm for estimating vertical jump height based on foot-worn inertial sensors. Twenty healthy subjects performed countermovement jumping trials and maximum jump height was determined via inertial sensors located above the toe and under the heel and was compared with the gold standard maximum jump height estimation via optical marker-based motion capture. Average vertical jump height estimation errors from inertial sensing at the toe and heel were -2.2±2.1 cm and -0.4±3.8 cm, respectively. Vertical jump height estimation with the presented algorithm via inertial sensing showed excellent reliability at the toe (ICC(2,1)=0.98) and heel (ICC(2,1)=0.97). There was no significant bias in the inertial sensing at the toe, but proportional bias (b = 1.22) and fixed bias (a = -10.23 cm) were detected in inertial sensing at the heel. These results indicate that the presented algorithm could be applied to foot-worn inertial sensors to estimate maximum jump height enabling assessment outside of traditional laboratory settings, and to avoid bias errors, the toe may be a more suitable location for inertial sensor placement than the heel.
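    Once takeoff and landing instants are identified from the foot-worn signals, one common kinematic step to height is the flight-time relation h = g t²/8 for symmetric ballistic flight. The identification stage is the paper's substantive contribution; this sketch shows only the relation such timings feed:

    ```python
    G = 9.81  # m/s^2

    def jump_height_from_flight_time(t_takeoff, t_landing):
        """Flight-time estimate of vertical jump height: under symmetric
        ballistic flight, h = g * t_flight^2 / 8. The takeoff/landing
        instants are assumed to come from the IMU identification step."""
        t_flight = t_landing - t_takeoff
        return G * t_flight**2 / 8.0

    print(jump_height_from_flight_time(0.00, 0.52))  # ~0.33 m
    ```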

  5. Acceleration of high resolution temperature based optimization for hyperthermia treatment planning using element grouping.

    PubMed

    Kok, H P; de Greef, M; Bel, A; Crezee, J

    2009-08-01

    In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in the literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution the temperature matrices are unknown, so temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for the high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set on the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent unacceptable violations of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. The maximum violation of a constraint in a hot spot was 0.24-0.34 degrees C; the average maximum violation over all five patients was 0.09-0.21 degrees C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8 without large deviations from the conventional method.

  6. Impacts of Potential Aircraft Observations on Forecasts of Tropical Cyclones Over the Western North Pacific

    DTIC Science & Technology

    2014-12-01

    anticyclone. Vertical wind shear was low, while a moderate level of upper level diffluence existed. The minimum sea level pressure (SLP) was estimated... pre-Sinlaku disturbance. At this time, JTWC estimated maximum surface level winds to be 15 to 20 kt, with a SLP near 1005 hPa... poleward side of the circulation. Surface winds had increased to near 23 kt as the SLP continued to fall to 1004 hPa. JTWC forecasters upgraded the

  7. Characterization of food waste-recycling wastewater as biogas feedstock.

    PubMed

    Shin, Seung Gu; Han, Gyuseong; Lee, Joonyeob; Cho, Kyungjin; Jeon, Eun-Jeong; Lee, Changsoo; Hwang, Seokhwan

    2015-11-01

    A set of experiments was carried out to characterize food waste-recycling wastewater (FRW) and to investigate annual and seasonal variations in its composition, which are related to process operation in different seasons. Year-round samplings (n=31) showed that FRW contained a high chemical oxygen demand (COD; 148.7±30.5 g/L) with carbohydrate (15.6%), protein (19.9%), lipid (41.6%), ethanol (14.0%), and volatile fatty acids (VFAs; 4.2%) as major constituents. The FRW was partly (62%) solubilized, possibly due to partial fermentation of organics including carbohydrate. Biodegradable portions of carbohydrate and protein were estimated from an acidogenesis test using first-order kinetics: 72.9±4.6% and 37.7±0.3%, respectively. A maximum of 50% of the initial organics were converted to three major VFAs: acetate, propionate, and butyrate. The methane potential was estimated as 0.562 L CH4/g VS(feed), accounting for 90.0% of the theoretical maximum estimated by elemental analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
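    The "theoretical maximum estimated by elemental analysis" is conventionally the Buswell-Boyle bound, which converts an elemental formula into a stoichiometric methane yield. A sketch of that calculation (the triolein example is illustrative, not the FRW composition):

    ```python
    def buswell_bmp(c, h, o, n=0.0):
        """Theoretical maximum methane potential (L CH4 per g VS) for a
        substrate CcHhOoNn via the Buswell-Boyle equation."""
        ch4_mol = c / 2 + h / 8 - o / 4 - 3 * n / 8       # mol CH4 / mol substrate
        molar_mass = 12.011 * c + 1.008 * h + 15.999 * o + 14.007 * n
        return 22.4 * ch4_mol / molar_mass                # L/g at STP

    # e.g., a lipid-like substrate, triolein C57H104O6
    print(buswell_bmp(57, 104, 6))   # ~1.0 L CH4/g VS
    ```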

  8. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model demonstrate its potential for field application.
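    The SCS-CN infiltration core the sediment model builds on is compact enough to state in full. A sketch of the rainfall-excess computation (the sediment-excess proportionality and the linear-reservoir routing described above are not reproduced here):

    ```python
    def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
        """SCS-CN rainfall excess: Q = (P - Ia)^2 / (P - Ia + S) for P > Ia,
        with potential maximum retention S = 25400/CN - 254 (mm) and
        initial abstraction Ia = 0.2 S."""
        S = 25400.0 / CN - 254.0
        Ia = ia_ratio * S
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    print(scs_cn_runoff(P_mm=80.0, CN=75))  # direct runoff, ~27 mm
    ```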

  9. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  10. Agricultural costs of the Chesapeake Bay total maximum daily load.

    PubMed

    Kaufman, Zach; Abler, David; Shortle, James; Harper, Jayson; Hamlett, James; Feather, Peter

    2014-12-16

    This study estimates costs to agricultural producers of the Watershed Implementation Plans (WIPs) developed by states in the Chesapeake Bay Watershed to comply with the Chesapeake Bay total maximum daily load (TMDL) and potential cost savings that could be realized by a more efficient selection of agricultural Best Management Practices (BMPs) and spatial targeting of BMP implementation. The cost of implementing the WIPs between 2011 and 2025 is estimated to be about $3.6 billion (in 2010 dollars). The annual cost associated with full implementation of all WIP BMPs from 2025 onward is about $900 million. Significant cost savings can be realized through careful and efficient BMP selection and spatial targeting. If retiring up to 25% of current agricultural land is included as an option, Bay-wide cost savings of about 60% could be realized compared to the WIPs.

  11. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error, resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. A full factorial analysis of variance design was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.
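    With the simulation results arranged as one row per electrode configuration, the full factorial ANOVA described here maps directly onto standard statistical tooling. A sketch assuming hypothetical column names and file (statsmodels provides ols and anova_lm):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical table: 'distances' (categorical: constant vs. linearly
    # increasing), 'diameter', 'rings', and the FEM response 'relative_error'.
    df = pd.read_csv("laplacian_fem_results.csv")   # hypothetical file

    # Full factorial model: main effects plus all interactions.
    model = smf.ols("relative_error ~ C(distances) * diameter * rings",
                    data=df).fit()
    print(anova_lm(model, typ=2))   # F tests for each effect and interaction
    ```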

  12. F-8C adaptive flight control laws

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.

    1977-01-01

    Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft applications. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effect on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.

  13. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA models with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses, and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax for the different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthén's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  14. A Simulation Based Analysis of Motor Unit Number Index (MUNIX) Technique Using Motoneuron Pool and Surface Electromyogram Models

    PubMed Central

    Li, Xiaoyan; Rymer, William Zev; Zhou, Ping

    2013-01-01

    Motor unit number index (MUNIX) measurement has recently received increasing attention as a tool to evaluate the progression of motoneuron diseases. In the current study, the sensitivity of the MUNIX technique to changes in motoneuron and muscle properties was explored by a simulation approach utilizing variations on published motoneuron pool and surface electromyogram (EMG) models. Our simulation results indicate that, when motoneuron pool and muscle parameters are kept unchanged and the input motor unit numbers to the model are varied, MUNIX estimates can appropriately characterize changes in motor unit numbers. Such MUNIX estimates are not sensitive to the different motor unit recruitment and rate coding strategies used in the model. Furthermore, alterations in motor unit control properties do not have a significant effect on the MUNIX estimates. Neither adjustment of the motor unit recruitment range nor reduction of the motor unit firing rates jeopardizes the MUNIX estimates. The MUNIX estimates closely correlate with the maximum M wave amplitude. However, if the amplitude of each motor unit action potential is reduced rather than simply reducing motor unit number, MUNIX estimates substantially underestimate the motor unit numbers in the muscle. These findings suggest that the current MUNIX definition is most suitable for motoneuron diseases that demonstrate secondary evidence of muscle fiber reinnervation. In this regard, when MUNIX is applied, it is important to examine in parallel the motor unit size index (MUSIX), defined as the ratio of the maximum M wave amplitude to the MUNIX. However, there are potential limitations in the application of MUNIX methods in atrophied muscle, where it is unclear whether the atrophy is accompanied by loss of motor units or loss of muscle fiber size. PMID:22514208

  15. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  16. Field estimates of floc dynamics and settling velocities in a tidal creek with significant along-channel gradients in velocity and SPM

    NASA Astrophysics Data System (ADS)

    Schwarz, C.; Cox, T.; van Engeland, T.; van Oevelen, D.; van Belzen, J.; van de Koppel, J.; Soetaert, K.; Bouma, T. J.; Meire, P.; Temmerman, S.

    2017-10-01

    A short-term intensive measurement campaign focused on flow, turbulence, suspended particle concentration, floc dynamics, and settling velocities was carried out in a brackish intertidal creek draining into the main channel of the Scheldt estuary. We compare in situ estimates of settling velocities between a laser diffraction (LISST) and an acoustic Doppler (ADV) technique at 20 and 40 cm above bottom (cmab). Temporal variations in estimated settling velocity were compared over one tidal cycle, with a maximum flood velocity of 0.46 m s-1, a maximum horizontal ebb velocity of 0.35 m s-1, and a maximum water depth at high water slack of 2.41 m. Results suggest that flocculation processes play an important role in controlling sediment transport in the measured intertidal creek. During high-water slack, particles flocculated to sizes up to 190 μm, whereas at maximum flood and maximum ebb floc sizes only reached 55 μm and 71 μm, respectively. These large differences indicate that flocculation is mainly governed by turbulence-induced shear rate. In this study, we specifically recognize the importance of along-channel gradients, which place constraints on the application of the acoustic Doppler technique due to conflicts with its underlying assumptions. Along-channel gradients were assessed by additional measurements at a second location and by scaling arguments, which can be used as an indication of whether the Reynolds-flux method is applicable. We further show the potential impact of along-channel advection of flocs out of equilibrium with local hydrodynamics on overall floc sizes.
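    The Reynolds-flux estimate of settling velocity referred to here divides the turbulent vertical concentration flux by the mean concentration, assuming a local balance between upward turbulent flux and downward settling. A minimal burst-average sketch (variable names are illustrative):

    ```python
    import numpy as np

    def settling_velocity_reynolds_flux(w, c):
        """Reynolds-flux settling velocity from co-located ADV vertical
        velocity w and concentration c series for one burst: assuming
        ws * <c> balances <w'c'>, ws = <w'c'> / <c>. Valid only where
        horizontal advection of sediment is negligible -- exactly the
        assumption the along-channel gradients in this study challenge."""
        w_fluc = w - w.mean()
        c_fluc = c - c.mean()
        return np.mean(w_fluc * c_fluc) / c.mean()
    ```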

  17. Tropical Africa: Land Use, Biomass, and Carbon Estimates for 1980 (and updated for the year 2000) (NDP-055)

    DOE Data Explorer

    Brown, Sandra [University of Illinois, Urbana, IL (USA); Winrock International, Arlington, Virginia (USA); Gaston, Greg [University of Illinois, Urbana, IL (USA); Oregon State University; Beaty, T. W. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, L. M. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA)

    2001-01-01

    This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 x 10^6 km^2 of the earth's land surface and is comprised of countries that are located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO™ geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.

  18. Mineral Carbonation Potential of CO2 from Natural and Industrial-based Alkalinity Sources

    NASA Astrophysics Data System (ADS)

    Wilcox, J.; Kirchofer, A.

    2014-12-01

    Mineral carbonation is a Carbon Capture and Storage (CCS) technology where gaseous CO2 is reacted with alkaline materials (such as silicate minerals and alkaline industrial wastes) and converted into stable and environmentally benign carbonate minerals (Metz et al., 2005). Here, we present a holistic, transparent life cycle assessment model of aqueous mineral carbonation built using a hybrid process model and economic input-output life cycle assessment approach. We compared the energy efficiency and the net CO2 storage potential of various mineral carbonation processes based on different feedstock materials and process schemes on a consistent basis by determining the energy and material balance of each implementation (Kirchofer et al., 2011). In particular, we evaluated the net CO2 storage potential of aqueous mineral carbonation for serpentine, olivine, cement kiln dust (CKD), fly ash (FA), and steel slag (SS) across a range of reaction conditions and process parameters. A preliminary systematic investigation of the tradeoffs inherent in mineral carbonation processes was conducted and guidelines for the optimization of the life-cycle energy efficiency are provided. The life-cycle assessment of aqueous mineral carbonation suggests that a variety of alkalinity sources and process configurations are capable of net CO2 reductions. The maximum carbonation efficiency, defined as the mass percent of CO2 mitigated per CO2 input, was 83% for CKD at ambient temperature and pressure conditions. In order of decreasing efficiency, the maximum carbonation efficiencies for the other alkalinity sources investigated were: olivine, 66%; SS, 64%; FA, 36%; and serpentine, 13%. For natural alkalinity sources, availability is estimated based on U.S. production rates of a) lime (18 Mt/yr) or b) sand and gravel (760 Mt/yr) (USGS, 2011). The low estimate assumes the maximum sequestration efficiency of the alkalinity source obtained in the current work and the high estimate assumes a sequestration efficiency of 85%. The total CO2 storage potential for the alkalinity sources considered in the U.S. ranges from 1.3% to 23.7% of U.S. CO2 emissions, depending on the assumed availability of natural alkalinity sources and the efficiency of the mineral carbonation processes.
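
    As a rough illustration of how a storage-potential range like the one above is assembled, the sketch below multiplies source availability by a per-tonne CO2 binding capacity and the net process efficiency. Only the efficiencies and availabilities echo the abstract; the capacity figures are hypothetical:

```python
# Illustrative sketch (not the paper's model) of net CO2 storage scaling:
# tonnes of alkalinity source x tonnes CO2 bound per tonne of source
# (hypothetical capacities) x the net process efficiency quoted above.

availability_mt = {"olivine": 760.0, "cement kiln dust": 18.0}   # Mt/yr, from abstract's high/low natural estimates
co2_capacity = {"olivine": 0.6, "cement kiln dust": 0.4}         # t CO2 / t source, hypothetical
efficiency = {"olivine": 0.66, "cement kiln dust": 0.83}         # carbonation efficiencies from the abstract

for src in availability_mt:
    net = availability_mt[src] * co2_capacity[src] * efficiency[src]
    print(f"{src}: ~{net:.0f} Mt CO2/yr net storage")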

  19. Monthly hydroclimatology of the continental United States

    NASA Astrophysics Data System (ADS)

    Petersen, Thomas; Devineni, Naresh; Sankarasubramanian, A.

    2018-04-01

    Physical/semi-empirical models that do not require any calibration are of paramount need for estimating hydrological fluxes at ungauged sites. We develop semi-empirical models for estimating the mean and variance of monthly streamflow based on a Taylor series approximation of a lumped, physically based water balance model. The proposed models require the mean and variance of monthly precipitation and potential evapotranspiration, the co-variability of precipitation and potential evapotranspiration, and regionally calibrated parameters for catchment retention sensitivity, atmospheric moisture uptake sensitivity, groundwater partitioning, and maximum soil moisture holding capacity. Estimates of the mean and variance of monthly streamflow from the semi-empirical equations are compared with observed estimates for 1373 catchments in the continental United States. Analyses show that the proposed models explain the spatial variability in monthly moments for basins at lower elevations. A regionalization of parameters for each water resources region shows good agreement between observed and model-estimated moments during January, February, March and April for the mean, and in all months except May and June for the variance. Thus, the proposed relationships could be employed for understanding and estimating the monthly hydroclimatology of ungauged basins using regional parameters.

  20. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  1. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
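
    A minimal sketch of the maximum entropy limit discussed here, constraining a gridded density to match the first two sample moments; the grid bounds, sample, and two-moment choice are illustrative assumptions, not the paper's software:

```python
# Maximum entropy density estimation subject to two moment constraints.
# With E[x] and E[x^2] constrained, the maxent solution has the form
# p(x) proportional to exp(-lam1*x - lam2*x^2); we solve for the Lagrange
# multipliers so the gridded moments match the sample moments.
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
data = rng.normal(1.0, 2.0, size=500)      # the finite sample (synthetic)
x = np.linspace(-12.0, 14.0, 2001)         # evaluation grid (assumed bounds)
dx = x[1] - x[0]
m1, m2 = data.mean(), (data ** 2).mean()   # sample moments to match

def density(lam):
    z = -lam[0] * x - lam[1] * x ** 2
    w = np.exp(z - z.max())                # overflow-safe unnormalized form
    return w / (w.sum() * dx)

def moment_gap(lam):
    p = density(lam)
    return [(p * x).sum() * dx - m1, (p * x ** 2).sum() * dx - m2]

lam = fsolve(moment_gap, x0=[0.0, 0.1])
p = density(lam)                           # the maximum entropy estimate
print(lam, (p * x).sum() * dx, (p * x ** 2).sum() * dx)
```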

  2. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  3. Strategic Matching of Teachers and Schools with (and without) Accountability Pressure

    ERIC Educational Resources Information Center

    Ahn, Tom

    2017-01-01

    Accountability systems are designed to introduce market pressures to increase efficiency in education. One potential channel by which this can occur is to match with effective teachers in the transfer market. I use a smooth maximum score estimator model, North Carolina data, and the state's bonus system to analyze how teachers and schools change…

  4. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-04-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of the recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.
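
    The b values mentioned above are conventionally estimated with Aki's maximum likelihood formula; a hedged sketch with an illustrative catalog and completeness magnitude, not the authors' Taiwan data:

```python
# Standard Aki (1965) maximum likelihood b-value estimate, with the
# usual half-bin correction for magnitudes binned at width dm.
import numpy as np

def aki_b_value(mags, mc, dm=0.1):
    """ML b-value for magnitudes >= completeness magnitude Mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Illustrative mini-catalog (real studies use thousands of events)
print(aki_b_value([4.1, 4.3, 4.0, 5.2, 4.6, 4.9, 4.0, 4.4], mc=4.0))
```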

  5. GABAergic excitation of spider mechanoreceptors increases information capacity by increasing entropy rather than decreasing jitter.

    PubMed

    Pfeiffer, Keram; French, Andrew S

    2009-09-02

    Neurotransmitter chemicals excite or inhibit a range of sensory afferents and sensory pathways. These changes in firing rate or static sensitivity can also be associated with changes in dynamic sensitivity or membrane noise and thus action potential timing. We measured action potential firing produced by random mechanical stimulation of spider mechanoreceptor neurons during long-duration excitation by the GABA-A agonist muscimol. Information capacity was estimated from the signal-to-noise ratio by averaging responses to repeated identical stimulation sequences. Information capacity was also estimated from the coherence function between input and output signals. Entropy rate was estimated by a data compression algorithm and maximum entropy rate from the firing rate. Action potential timing variability, or jitter, was measured as normalized interspike interval distance. Muscimol increased firing rate, information capacity, and entropy rate, but jitter was unchanged. We compared these data with the effects of increasing firing rate by current injection. Our results indicate that the major increase in information capacity by neurotransmitter action arose from the increased entropy rate produced by increased firing rate, not from reduction in membrane noise and action potential jitter.

  6. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in the area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thereby forming a homogeneous data set) to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
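
    A minimal sketch of the standard U.S. practice described above, fitting a log-Pearson Type III distribution to annual peaks by the method of moments in log space; the flow series is synthetic, and real studies add skew weighting and low-outlier handling per Bulletin 17B/17C:

```python
# Fit log-Pearson Type III to annual peak flows by method of moments
# in log10 space, then read off a quantile (here the 1% AEP flood).
import numpy as np
from scipy import stats

peaks = np.array([420., 610., 380., 900., 515., 760., 1250., 330.,
                  480., 700., 560., 890., 410., 1020., 640.])  # m^3/s, synthetic
logq = np.log10(peaks)

skew = stats.skew(logq, bias=False)                 # station skew of the logs
dist = stats.pearson3(skew, loc=logq.mean(), scale=logq.std(ddof=1))

aep = 0.01                                          # 1% annual exceedance
q100 = 10 ** dist.ppf(1 - aep)
print(f"100-year flood estimate: {q100:.0f} m^3/s")
```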

  7. Evaluation of potential hazard exposure resulting from DOE waste treatment and disposal at Rollins Environmental Services, Baton Rouge, LA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-04-01

    The equivalent dose rate to populations potentially exposed to wastes shipped to Rollins Environmental Services, Baton Rouge, LA from the Oak Ridge and Savannah River Operations of the Department of Energy was estimated. Where definitive information necessary for the estimation of a dose rate was unavailable, bounding assumptions were employed to ensure an overestimate of the actual dose rate experienced by the potentially exposed population. On this basis, it was estimated that a total of about 3.85 million pounds of waste was shipped from these DOE operations to Rollins, with a maximum combined total activity of about 0.048 Curies. Populations near the Rollins site could potentially be exposed to the radionuclides in the DOE wastes via the air pathway after incineration of the DOE wastes or by migration from the soil after landfill disposal. AIRDOS was used to estimate the dose rate after incineration. RESRAD was used to estimate the dose rate after landfill disposal. Calculations were conducted with the estimated radioactive species distribution in the wastes and, as a test of the sensitivity of the results to the estimated distribution, with the entire activity associated with individual radioactive species such as Cs-137, Ba-137, Sr-90, Co-60, U-234, U-235 and U-238. With a given total activity, the dose rates to nearby individuals were dominated by the uranium species.

  8. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Previous studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72. Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439. Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.

  9. Potential of wind power projects under the Clean Development Mechanism in India

    PubMed Central

    Purohit, Pallav; Michaelowa, Axel

    2007-01-01

    Background So far, the cumulative installed capacity of wind power projects in India is far below its gross potential (≤ 15%), despite more than 10 years of very high levels of policy support, tax benefits, long-term financing schemes, etc. One of the major barriers is the high cost of investment in these systems. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at lowest cost while also promoting sustainable development in the host country. Wind power projects could be of interest under the CDM because they directly displace greenhouse gas emissions while contributing to sustainable rural development, if developed correctly. Results Our estimates indicate that there is a vast theoretical potential of CO2 mitigation by the use of wind energy in India. The annual potential Certified Emission Reductions (CERs) of wind power projects in India could theoretically reach 86 million. Under more realistic assumptions about the diffusion of wind power projects based on past experience with the government-run programmes, annual CER volumes could reach 41 to 67 million by 2012 and 78 to 83 million by 2020. Conclusion The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. The CDM could help achieve the maximum utilization potential more rapidly than the current diffusion trend if supportive policies are introduced. PMID:17663772
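
    CERs from a wind project are conventionally computed as displaced generation times a grid emission factor (1 CER = 1 t CO2e). A back-of-envelope sketch with assumed, illustrative numbers, not the paper's estimates:

```python
# Illustrative CDM arithmetic for a single wind project; every value
# here is an assumption for demonstration, not a figure from the paper.

capacity_mw = 50.0          # installed wind capacity
capacity_factor = 0.22      # assumed plant load factor
grid_ef = 0.9               # t CO2 / MWh, assumed grid emission factor

mwh_per_year = capacity_mw * 8760 * capacity_factor
cers_per_year = mwh_per_year * grid_ef      # 1 CER per tonne CO2e displaced
print(f"~{cers_per_year:,.0f} CERs/yr")     # ~86,700 for these assumptions
```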

  10. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to the programs as they currently exist on the IBM 360/67 at LARS, Purdue, is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  11. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers with useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods for estimating the slope of...

  12. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  13. Location Modification Factors for Potential Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Barnett, J. Matthew

    2017-01-01

    A Department of Energy facility must comply with the National Emission Standard for Hazardous Air Pollutants for radioactive air emissions. The standard is an effective dose of less than 0.1 mSv yr^-1 to the maximally exposed public receptor. Additionally, a lower dose level may be assigned to a specific emission point in a state-issued permit. A method to efficiently estimate the expected dose for future emissions is described. This method is most appropriately applied to a research facility with several emission points that have generally low emission levels of numerous isotopes.

  14. Estimation of maximum transdermal flux of nonionized xenobiotics from basic physicochemical determinants

    PubMed Central

    Milewski, Mikolaj; Stinchcomb, Audra L.

    2012-01-01

    An ability to estimate the maximum flux of a xenobiotic across skin is desirable both from the perspective of drug delivery and toxicology. While there is an abundance of mathematical models describing the estimation of drug permeability coefficients, there are relatively few that focus on the maximum flux. This article reports and evaluates a simple and easy-to-use predictive model for the estimation of maximum transdermal flux of xenobiotics based on three common molecular descriptors: logarithm of octanol-water partition coefficient, molecular weight and melting point. The use of all three can be justified on the theoretical basis of their influence on the solute aqueous solubility and the partitioning into the stratum corneum lipid domain. The model explains 81% of the variability in the permeation dataset comprised of 208 entries and can be used to obtain a quick estimate of maximum transdermal flux when experimental data is not readily available. PMID:22702370
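
    The model form described above, a linear regression of log maximum flux on log P, molecular weight, and melting point, can be sketched as follows; the data rows, and hence the fitted coefficients, are synthetic placeholders rather than the paper's 208-entry dataset:

```python
# Ordinary least squares fit of log10(Jmax) on the three descriptors
# named in the abstract. All rows below are hypothetical illustrations.
import numpy as np

# columns: logP, MW (g/mol), MP (deg C); y: log10 Jmax (synthetic)
X_raw = np.array([[2.1, 220.0, 120.0],
                  [3.5, 310.0,  95.0],
                  [1.2, 180.0, 150.0],
                  [4.0, 410.0,  80.0],
                  [0.5, 150.0, 170.0]])
y = np.array([0.4, 0.1, -0.2, -0.5, -0.6])

X = np.column_stack([np.ones(len(y)), X_raw])    # add intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least squares coefficients

new = np.array([1.0, 2.8, 250.0, 110.0])         # hypothetical new solute
print("predicted log10 Jmax:", new @ beta)
```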

  15. The effect of prenatal care on birthweight: a full-information maximum likelihood approach.

    PubMed

    Rous, Jeffrey J; Jewell, R Todd; Brown, Robert W

    2004-03-01

    This paper uses a full-information maximum likelihood estimation procedure, the Discrete Factor Method, to estimate the relationship between birthweight and prenatal care. This technique controls for the potential biases surrounding both the sample selection of the pregnancy-resolution decision and the endogeneity of prenatal care. In addition, we use the actual number of prenatal care visits; other studies have normally measured prenatal care as the month care was initiated. We estimate a birthweight production function using 1993 data from the US state of Texas. The results underscore the importance of correcting for estimation problems. Specifically, a model that does not control for sample selection and endogeneity overestimates the benefit of an additional visit for women who have relatively few visits. This overestimation may indicate 'positive fetal selection,' i.e., women who did not abort may have healthier babies. Also, a model that does not control for self-selection and endogeneity predicts that past 17 visits, an additional visit leads to lower birthweight, while a model that corrects for these estimation problems predicts a positive effect for additional visits. This result shows the effect of mothers with less healthy fetuses making more prenatal care visits, known as 'adverse selection' in prenatal care. Copyright 2003 John Wiley & Sons, Ltd.

  16. Tropical Africa: Land Use, Biomass, and Carbon Estimates for 1980 (and updated for the year 2000) (NDP-055)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.

    This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 x 10^6 km^2 of the earth's land surface and is comprised of countries that are located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO™ geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.

  17. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  18. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    PubMed

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  19. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  20. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population, and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases the spatial extent of statistical inferences. Our results suggest that ignoring the mechanism by which locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information, in addition to the data themselves, must be available to analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.

  1. Mechanisms behind the estimation of photosynthesis traits from leaf reflectance observations

    NASA Astrophysics Data System (ADS)

    Dechant, Benjamin; Cuntz, Matthias; Doktor, Daniel; Vohland, Michael

    2016-04-01

    Many studies have investigated the reflectance-based estimation of leaf chlorophyll, water and dry matter contents of plants. Only a few studies have focused on photosynthesis traits, however. The maximum potential uptake of carbon dioxide under given environmental conditions is determined mainly by RuBisCO activity, which limits carboxylation, or by the speed of photosynthetic electron transport. These two main limitations are represented by the maximum carboxylation capacity, Vcmax,25, and the maximum electron transport rate, Jmax,25. These traits have been estimated from leaf reflectance before, but the mechanisms underlying the estimation remain rather speculative. The aim of this study was therefore to reveal the mechanisms behind reflectance-based estimation of Vcmax,25 and Jmax,25. Leaf reflectance, photosynthetic response curves as well as nitrogen content per area, Narea, and leaf mass per area, LMA, were measured on 37 deciduous tree species. Vcmax,25 and Jmax,25 were determined from the response curves. Partial Least Squares (PLS) regression models for the two photosynthesis traits Vcmax,25 and Jmax,25 as well as Narea and LMA were studied using a cross-validation approach. Analyses of linear regression models based on Narea and other leaf traits estimated via PROSPECT inversion, PLS regression coefficients and model residuals were conducted in order to reveal the mechanisms behind the reflectance-based estimation. We found that Vcmax,25 and Jmax,25 can be estimated from leaf reflectance with good to moderate accuracy for a large number of species and different light conditions. The dominant mechanism behind the estimations was the strong relationship between photosynthesis traits and leaf nitrogen content. This was concluded from very strong relationships between PLS regression coefficients, the model residuals as well as the prediction performance of Narea-based linear regression models compared to PLS regression models. While the PLS regression model for Vcmax,25 was fully based on the correlation to Narea, the PLS regression model for Jmax,25 was not entirely based on it. Analyses of the contributions of different parts of the reflectance spectrum revealed that the information contributing to the Jmax,25 PLS regression model in addition to the main source of information, Narea, was mainly located in the visible part of the spectrum (500-900 nm). Estimated chlorophyll content could be excluded as a potential source of this extra information. The PLS regression coefficients of the Jmax,25 model indicated possible contributions from chlorophyll fluorescence and cytochrome f content. In summary, we found that the main mechanism behind the estimation of Vcmax,25 and Jmax,25 from leaf reflectance observations is the correlation to Narea, but that there is additional information related to Jmax,25, mainly in the visible part of the spectrum.
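
    A minimal sketch of the cross-validated PLS workflow the study describes, using scikit-learn with random placeholder spectra and trait values; the component count is an assumption:

```python
# Cross-validated PLS regression of a leaf trait on reflectance spectra.
# Random placeholders stand in for measured reflectance (e.g. 400-2500 nm)
# and gas-exchange-derived Vcmax,25.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_leaves, n_bands = 37, 200
reflectance = rng.random((n_leaves, n_bands))      # placeholder spectra
vcmax25 = rng.normal(60.0, 15.0, n_leaves)         # placeholder trait values

pls = PLSRegression(n_components=5)                # component count assumed
pred = cross_val_predict(pls, reflectance, vcmax25, cv=5)
r = np.corrcoef(vcmax25, pred.ravel())[0, 1]
print(f"cross-validated r^2: {r**2:.2f}")          # near 0 for random data
```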

  2. The Educational Consequences of Teen Childbearing

    PubMed Central

    Kane, Jennifer B.; Morgan, S. Philip; Harris, Kathleen Mullan; Guilkey, David K.

    2013-01-01

    A huge literature shows that teen mothers face a variety of detriments across the life course, including truncated educational attainment. To what extent is this association causal? The estimated effects of teen motherhood on schooling vary widely, ranging from no discernible difference to 2.6 fewer years among teen mothers. The magnitude of educational consequences is therefore uncertain, despite voluminous policy and prevention efforts that rest on the assumption of a negative and presumably causal effect. This study adjudicates between two potential sources of inconsistency in the literature—methodological differences or cohort differences—by using a single, high-quality data source: namely, The National Longitudinal Study of Adolescent Health. We replicate analyses across four different statistical strategies: ordinary least squares regression; propensity score matching; and parametric and semiparametric maximum likelihood estimation. Results demonstrate educational consequences of teen childbearing, with estimated effects between 0.7 and 1.9 fewer years of schooling among teen mothers. We select our preferred estimate (0.7), derived from semiparametric maximum likelihood estimation, on the basis of weighing the strengths and limitations of each approach. Based on the range of estimated effects observed in our study, we speculate that variable statistical methods are the likely source of inconsistency in the past. We conclude by discussing implications for future research and policy, and recommend that future studies employ a similar multimethod approach to evaluate findings. PMID:24078155

  3. CFD Applications in Support of the Space Shuttle Risk Assessment

    NASA Technical Reports Server (NTRS)

    Baum, Joseph D.; Mestreau, Eric; Luo, Hong; Sharov, Dmitri; Fragola, Joseph; Loehner, Rainald; Cook, Steve (Technical Monitor)

    2000-01-01

    The paper describes a numerical study of a potential accident scenario of the space shuttle, operating at the same flight conditions as flight 51L, the Challenger accident. The interest in performing this simulation derives from evidence indicating that the event itself did not exert a large enough blast loading on the shuttle to break it apart; rather, the quasi-steady aerodynamic loading on the damaged, unbalanced vehicle caused the break-up. Despite the enormous explosive potential of the shuttle's total fuel load (both liquid and solid), the post-accident explosives working group estimated the maximum energy involvement to be equivalent to about five hundred pounds of TNT. This understanding motivated the simulation described here. To err on the conservative side, we modeled the event as an explosion and used the maximum energy estimate. We modeled the transient detonation of a 500 lb spherical charge of TNT, placed at the main engine, and the resulting blast wave propagation about the complete stack. Tracking of peak pressures and impulses at hundreds of locations on the vehicle surface indicates that the blast load was insufficient to break the vehicle, hence demonstrating likely crew survivability through such an event.

  4. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents the brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°. PMID:27322267
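
    The paper's joint-ML schemes are not reproduced here, but the classic Woody-style iterative alignment they improve upon conveys the idea: estimate each trial's delay by cross-correlation against the current average template, then re-average. A sketch on synthetic trials:

```python
# Woody-style iterative ERP alignment (a classic baseline, not the
# authors' joint-ML estimator). Trials are synthetic: a Gaussian "ERP"
# with random shifts plus noise.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samp = 40, 300
erp = np.exp(-0.5 * ((np.arange(n_samp) - 150) / 12.0) ** 2)   # template
delays = rng.integers(-20, 21, n_trials)
trials = np.array([np.roll(erp, d) for d in delays])
trials += rng.normal(0.0, 0.5, (n_trials, n_samp))

est = np.zeros(n_trials, dtype=int)
for _ in range(5):                                   # refinement passes
    aligned = np.array([np.roll(t, -e) for t, e in zip(trials, est)])
    template = aligned.mean(axis=0)                  # current average
    for i, t in enumerate(trials):
        xc = np.correlate(t, template, mode="same")  # match vs template
        est[i] = np.argmax(xc) - n_samp // 2         # lag of best match

print("mean abs delay error:", np.abs(est - delays).mean())
```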

  5. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
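
    Three-parameter Weibull maximum likelihood fitting is available directly in scipy; a hedged sketch on synthetic diameter data standing in for the tree-size observations such studies use:

```python
# Three-parameter Weibull (shape c, location, scale) fit by maximum
# likelihood. The "diameters" are synthetic stand-ins, not field data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
dbh = stats.weibull_min.rvs(c=2.2, loc=5.0, scale=12.0,
                            size=200, random_state=rng)   # synthetic, cm

c_hat, loc_hat, scale_hat = stats.weibull_min.fit(dbh)    # MLE of all three
print(f"shape={c_hat:.2f} location={loc_hat:.2f} scale={scale_hat:.2f}")
```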

  6. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  7. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and, generally, their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site-specific reviews demonstrated that both issues had the potential to lower the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.

  8. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
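
    Not the report's joint-ML derivation, but a common moment-based sketch of the same task: estimating SNR from BPSK matched-filter outputs y = ±A + n in additive white Gaussian noise:

```python
# Moment-based SNR estimation from matched-filter outputs of a BPSK
# signal in AWGN. Reasonably accurate at moderate-to-high SNR; the
# joint-ML estimator in the paper refines this idea.
import numpy as np

rng = np.random.default_rng(4)
A, sigma = 1.0, 0.7
bits = rng.choice([-1.0, 1.0], size=5000)
y = A * bits + rng.normal(0.0, sigma, size=bits.size)   # filter outputs

m1 = np.abs(y).mean()             # ~A when the SNR is not too low
m2 = (y ** 2).mean()              # equals A^2 + sigma^2 in expectation
snr_est = m1 ** 2 / max(m2 - m1 ** 2, 1e-12)
true_snr = A ** 2 / sigma ** 2
print(f"estimated SNR {snr_est:.2f} vs true {true_snr:.2f}")
```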

  9. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine, using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  10. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 Tc.

  11. Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.

    PubMed

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 Tc.

  12. New Estimates of Land Use Intensity of Potential Bioethanol Production in the U.S.A.

    NASA Astrophysics Data System (ADS)

    Kheshgi, H. S.; Song, Y.; Torkamani, S.; Jain, A. K.

    2016-12-01

    We estimate potential bioethanol land use intensity (the inverse of potential bioethanol yield per hectare) across the United States by modeling crop yields and conversion to bioethanol (via a fermentation pathway), based on crop field studies and conversion technology analyses. We apply the process-based land surface model, the Integrated Science Assessment model (ISAM), to estimate the potential yield of four crops - corn, Miscanthus, and two variants of switchgrass (Cave-in-Rock and Alamo) - across the U.S.A. landscape for the 14-year period from 1999 through 2012, for the case with fertilizer application but without irrigation. We estimate bioethanol yield based on recent experience for corn bioethanol production from corn kernel, and current cellulosic bioethanol process design specifications under the assumption of the maximum practical harvest fraction for the energy grasses (Miscanthus and switchgrasses) and a moderate (30%) harvest fraction of corn stover. We find that each of four crops included has regions where that crop is estimated to have the lowest land use intensity (highest potential bioethanol yield per hectare). We find that minimizing potential land use intensity by including both corn and the energy grasses only improves incrementally to that of corn (using both harvested kernel and stover for bioethanol). Bioethanol land use intensity is one fundamental factor influencing the desirability of biofuels, but is not the only one; others factors include economics, competition with food production and land use, water and climate, nitrogen runoff, life-cycle emissions, and the pace of crop and technology improvement into the future.
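
    Land use intensity as defined above is simply the reciprocal of potential ethanol yield per hectare. An illustrative sketch with assumed round numbers rather than ISAM model outputs:

```python
# LUI = 1 / (potential ethanol yield per hectare). Yields and the
# conversion factors below are assumed round numbers for illustration.

conversion_l_per_t = {"corn grain": 400.0, "Miscanthus": 300.0}  # L/t, assumed
biomass_t_per_ha = {"corn grain": 10.0, "Miscanthus": 20.0}      # t/ha, assumed

for crop in conversion_l_per_t:
    yield_l_ha = biomass_t_per_ha[crop] * conversion_l_per_t[crop]
    lui = 1.0 / yield_l_ha                                       # ha per liter
    print(f"{crop}: {yield_l_ha:.0f} L/ha -> LUI {lui * 1e4:.2f} ha per 10^4 L")
```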

  13. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression toward the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  14. Gas dispersal potential of infant bedding of sudden death cases (II): Mathematical simulation of O2 deprivation around the face of infant mannequin model.

    PubMed

    Sakai, Jun; Takahashi, Shirushi; Funayama, Masato

    2009-04-01

    We assessed the O2 gas deprivation potential of bedding that had actually been used by 26 infants diagnosed with sudden unexpected infant death, using the FiCO2 time course of a baby mannequin model. All cases were the same ones as in our poster paper (I). Mathematically, the time-FiCO2 graphs were given as FiCO2(t) = C(1 - e^(Dt)). Here, "C" approximates the maximum FiCO2 value, while "D" is the rate at which that maximum is approached. FiO2 in a potential space around the mannequin's nares was estimated using the formula FiO2 = 0.21 - FiCO2/RQ, where RQ is the respiratory quotient; the normal human value is 0.8. The graph pattern of FiO2 is roughly the inverse of the FiCO2 time course. Four cases showed the bottom of the estimated FiO2 to be more than 15%, 15 cases were between 15% and 6%, and the other seven were 6% or less. Considering the minimal tissue stores of O2, changes in FiO2 may be affected by both CO2 production and gas movement around the infant's face. In particular, the latter seven cases may suggest a role not only for CO2 accumulation but also for the decrease of O2 around the face.
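
    The two formulas quoted in this abstract can be applied directly; the sketch below uses illustrative C and D values rather than case data:

```python
# FiCO2(t) = C(1 - e^(Dt)) with D < 0 so the curve saturates at C, and
# FiO2 = 0.21 - FiCO2/RQ as quoted above. C and D are illustrative.
import numpy as np

C, D, RQ = 0.12, -0.05, 0.8       # max FiCO2, rate constant (1/s), resp. quotient

t = np.linspace(0.0, 120.0, 13)   # seconds
fico2 = C * (1.0 - np.exp(D * t))
fio2 = 0.21 - fico2 / RQ          # saturates at 0.21 - C/RQ = 6% here

for ti, o2 in zip(t[::4], fio2[::4]):
    print(f"t={ti:5.1f} s  FiO2={o2 * 100:4.1f}%")
```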

  15. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  16. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
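
    Maximum likelihood fitting of the two-component normal mixture is conventionally done with the EM algorithm; a compact sketch on synthetic data standing in for the price series:

```python
# EM algorithm for a two-component normal mixture fitted by maximum
# likelihood; the data are synthetic stand-ins for the paper's series.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

# initial guesses
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
sd = np.array([1.0, 1.0])

for _ in range(200):                                  # EM iterations
    # E-step: posterior responsibility of each component for each point
    dens = np.vstack([wk * norm.pdf(x, mk, sk)
                      for wk, mk, sk in zip(w, mu, sd)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted maximum likelihood updates
    nk = resp.sum(axis=1)
    w = nk / x.size
    mu = (resp @ x) / nk
    sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2))
```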

  17. Radiative forcing associated with particulate carbon emissions resulting from the use of mercury control technology.

    PubMed

    Lin, Guangxing; Penner, Joyce E; Clack, Herek L

    2014-09-02

    Injection of powdered activated carbon (PAC) adsorbents into the flue gas of coal-fired power plants with electrostatic precipitators (ESPs) is the most mature technology for controlling mercury emissions from coal combustion. However, the PAC itself can penetrate ESPs and be emitted into the atmosphere. These emitted PAC particles have size and optical properties similar to submicron black carbon (BC) and could thus unintentionally increase BC radiative forcing. The present paper estimates, for the first time, the potential emission of PAC together with its climate forcing. The global average maximum potential emission of PAC is 98.4 Gg/yr for the year 2030, arising from the assumed adoption of the maximum potential PAC injection technology, the minimum collection efficiency, and the maximum PAC injection rate. These emissions cause a global warming of 2.10 mW m^-2 at the top of the atmosphere and a cooling of -2.96 mW m^-2 at the surface. This warming represents about 2% of the warming caused by BC from direct fossil fuel burning and 0.86% of the warming associated with CO2 emissions from coal burning in power plants. Its warming is 8 times more efficient than that of the emitted CO2, as measured by the 20-year-integrated radiative forcing per unit of carbon input (the 20-year Global Warming Potential).

  18. Mercury concentrations in lean fish from the Western Mediterranean Sea: Dietary exposure and risk assessment in the population of the Balearic Islands.

    PubMed

    Llull, Rosa Maria; Garí, Mercè; Canals, Miquel; Rey-Maquieira, Teresa; Grimalt, Joan O

    2017-10-01

    The present study reports total mercury (THg) and methylmercury (MeHg) concentrations in 32 different lean fish species from the Western Mediterranean Sea, with a special focus on the Balearic Islands. The concentrations of THg ranged between 0.05 mg/kg ww and 3.1 mg/kg ww (mean 0.41 mg/kg ww). A considerable number of the fish species most frequently consumed by the Spanish population exceed the maximum levels proposed by European legislation when they originate from the Mediterranean Sea, such as dusky grouper (100% of the examined specimens), common dentex (65%), conger (45%), common sole (38%), hake (26%) and angler (15%), among others. The estimated weekly intakes (EWI) of THg in children (7-12 years of age) and adults from the Spanish population (2.7 µg/kg bw and 2.1 µg/kg bw, respectively), for population consuming only Mediterranean fish, were below the provisional tolerable weekly intake (PTWI) established by EFSA in 2012, 4 µg/kg bw. However, the equivalent estimations for methylmercury, for which the PTWI is 1.3 µg/kg bw, were two times higher than this threshold in children and more than 50% above it in adults. For hake, sole, angler and dusky grouper, the most frequently consumed fish, the estimated weekly intakes in both children and adults were below the accepted maximum levels. These intakes correspond to maximum potential estimations because fish of non-Mediterranean origin is also often consumed by the Spanish population, including that of the Balearic Islands. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, sigma(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index sigma(sub 2) greater than sigma(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
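
    For the simple (unbroken) power law, the ML estimator has a closed form. The sketch below shows the continuous-case estimator and its asymptotic standard error; it is a textbook illustration, not the detector-response machinery of the paper.

```python
import numpy as np

def power_law_mle(x, xmin):
    """MLE of the index of f(E) ~ E^(-sigma) for E >= xmin (continuous case).

    sigma_hat = 1 + n / sum(log(x_i / xmin)); the asymptotic standard
    error (sigma_hat - 1) / sqrt(n) attains the Cramer-Rao bound.
    """
    x = np.asarray(x, float)
    x = x[x >= xmin]
    n = x.size
    sigma_hat = 1.0 + n / np.sum(np.log(x / xmin))
    return sigma_hat, (sigma_hat - 1.0) / np.sqrt(n)
```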

  20. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  1. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
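
    The Burg recursion itself is compact. The sketch below estimates AR coefficients by minimizing forward and backward prediction error power and converts them to a maximum entropy PSD; the model order, FFT grid, and normalization are assumptions, not details from the paper.

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum entropy) estimate of AR coefficients.

    Returns (a, E): a = [1, a1, ..., ap] and E = residual error power.
    """
    x = np.asarray(x, float)
    N = x.size
    a = np.array([1.0])
    E = np.dot(x, x) / N
    ef = x.copy()                      # forward prediction errors
    eb = x.copy()                      # backward prediction errors
    for m in range(order):
        f, b = ef[m + 1:], eb[m:N - 1]
        # reflection coefficient minimizing forward + backward error power
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson update of the AR polynomial
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        # update the prediction errors for the next stage
        ef[m + 1:], eb[m + 1:N] = f + k * b, b + k * f
        E *= 1.0 - k * k
    return a, E

def burg_psd(x, order, nfft=2 ** 14, fs=1.0):
    """Maximum entropy PSD from the Burg AR fit (normalization assumed)."""
    a, E = burg_ar(x, order)
    A = np.fft.rfft(a, nfft)
    return np.fft.rfftfreq(nfft, 1.0 / fs), E / (fs * np.abs(A) ** 2)
```

    For interferogram data, the model order and FFT length control the trade-off between resolution and stability of the MEM estimate.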

  2. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.

    2016-06-28

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the web application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted annual maximum peak discharge records by streamgage are provided at https://doi.org/10.3133/sir20165050 for download.

  3. Environmental consequences of postulated plutonium releases from General Electric Company Vallecitos Nuclear Center, Vallecitos, California, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, J.D.; Watson, E.C.

    1980-11-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated plutonium releases caused by severe natural phenomena at the General Electric Company Vallecitos Nuclear Center, Vallecitos, California. The severe natural phenomena considered are earthquakes, tornadoes, and high straight-line winds. Maximum plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely 50-year committed dose equivalents are given for the maximum-exposed individual and the population within a 50-mile radius of the plant. The maximum plutonium deposition values likely to occur offsite are also given. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays. The most likely maximum residual plutonium contamination estimated to be deposited offsite following the earthquakes and the 180-mph and 230-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 µCi/m². The deposition values following the 135-mph tornado are below the EPA proposed guidelines.

  4. Assessing the impact of climate and land use changes on extreme floods in a large tropical catchment

    NASA Astrophysics Data System (ADS)

    Jothityangkoon, Chatchai; Hirunteeyakul, Chow; Boonrawd, Kowit; Sivapalan, Murugesu

    2013-05-01

    In the wake of the recent catastrophic floods in Thailand, there is considerable concern about the safety of large dams designed and built some 50 years ago. In this paper, a distributed rainfall-runoff model appropriate for extreme flood conditions is used to generate revised estimates of the Probable Maximum Flood (PMF) for the Upper Ping River catchment (area 26,386 km2) in northern Thailand, upstream of the location of the large Bhumipol Dam. The model has two components: a continuous water balance model based on a configuration of parameters estimated from climate, soil and vegetation data, and a distributed flood routing model based on non-linear storage-discharge relationships of the river network under extreme flood conditions. The model is implemented under several alternative scenarios regarding the Probable Maximum Precipitation (PMP) estimates and is also used to estimate the potential effects of both climate change and land use and land cover changes on the extreme floods. These new estimates are compared against estimates using other hydrological models, including the application of the original prediction methods under current conditions. Model simulations and sensitivity analyses indicate that a reasonable Probable Maximum Flood (PMF) at the dam site is 6311 m3/s, which is only slightly higher than the original design flood of 6000 m3/s. As part of an uncertainty assessment, the estimated PMF is sensitive to the design method, input PMP, land use changes and the floodplain inundation effect. An increase in PMP depth of 5% causes a 7.5% increase in the PMF. Deforestation of 10%, 20%, and 30% results in PMF increases of 3.1%, 6.2%, and 9.2%, respectively. The modest increase of the estimated PMF (to just 6311 m3/s) in spite of these changes is due to the factoring of the hydraulic effects of trees and buildings on the floodplain as the flood situation changes from normal floods to extreme floods, when over-bank flows may be the dominant flooding process, leading to a substantial reduction in the PMF estimates.

  5. An investigation into the potential of low head hydro power in Northern Ireland for the production of electricity

    NASA Astrophysics Data System (ADS)

    Redpath, David; Ward, Michael J.

    2017-07-01

    The maximum exploitable potential of low head hydroelectric sites (gross head ≤ 10 m) in Northern Ireland (NI) was determined to be 12.07 MW, using a simple payback analysis of the 304 potential sites investigated to derive a classification scheme in terms of economic viability. A techno-economic analysis with numerical models validated in previous research estimated the capital investment required for the development of a hydroelectric plant, using the low head Michell-Banki cross flow turbine, at each of the 304 sites investigated. The number of potentially viable sites in NI for low head hydro ranged from 198 to 286, with an estimated installed capacity ranging from 11.95 to 12.05 MW. Sites with a limited installed capacity were not economically viable unless increased government support, in the form of longer term (25-50 year) low interest loans in addition to the current Renewables Obligation Certificates scheme, is provided and sustained.
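
    A simple payback screening of this kind reduces to the hydropower equation and a capital-to-revenue ratio. The sketch below uses illustrative figures (efficiency, capacity factor, tariff, capital cost); none are taken from the study.

```python
RHO_G = 9810.0  # rho * g for water, N per m^3

def annual_energy_kwh(q, head, efficiency=0.7, capacity_factor=0.5):
    """Yearly output of a site with flow q (m^3/s) and gross head (m)."""
    power_kw = RHO_G * q * head * efficiency / 1000.0
    return power_kw * capacity_factor * 8760.0

def simple_payback_years(capital_cost, q, head, tariff=0.08):
    """Capital cost divided by annual revenue at the assumed tariff."""
    return capital_cost / (annual_energy_kwh(q, head) * tariff)

# e.g. a 3 m head, 2 m^3/s site costing 250,000 (all figures illustrative)
print(simple_payback_years(250_000, q=2.0, head=3.0))   # ~17 years
```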

  6. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution, in order to determine which method is best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with an R program, and the results are displayed in tables to facilitate the comparisons.
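
    The MLE of the Rayleigh scale parameter is available in closed form, and the bias/MSE comparison can be reproduced by simulation. The paper used an R program; the following is an equivalent Python sketch with assumed simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_mle(x):
    """Closed-form MLE of the Rayleigh scale: sqrt(sum(x^2) / (2n))."""
    x = np.asarray(x, float)
    return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

# Monte Carlo estimate of bias and MSE (settings are assumptions)
sigma_true, n, reps = 2.0, 30, 10_000
est = np.array([rayleigh_mle(rng.rayleigh(sigma_true, n)) for _ in range(reps)])
print("bias:", est.mean() - sigma_true, "MSE:", np.mean((est - sigma_true) ** 2))
```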

  7. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  8. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed under different settings such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distributional models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force of mortality (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation is easier to manipulate than maximum likelihood estimation (MLE), but it does not use the complete mortality data; maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and an extension to double decrement is introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
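
    Under the constant-force assumption, the MLE has a simple closed form: with D deaths observed over T person-years of exposure, mu_hat = D / T, and the one-year death probability is q_hat = 1 - exp(-mu_hat). A minimal sketch with hypothetical figures:

```python
import numpy as np

def constant_force_mle(deaths, exposure_years):
    """MLE under constant force of mortality: mu_hat = D / T."""
    mu_hat = deaths / exposure_years
    q_hat = 1.0 - np.exp(-mu_hat)   # one-year death probability
    return mu_hat, q_hat

# e.g. 18 deaths over 2460 person-years at some age x (hypothetical data)
mu_hat, q_hat = constant_force_mle(18, 2460.0)
```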

  9. Individual stem value recovery of modified and conventional tree-length systems in the southeastern United States

    Treesearch

    Amanda H. Lang; Shawn A. Baker; W. Dale Greene; Glen E. Murphy

    2010-01-01

    We compared value recovery of a modified treelength (MTL) logging system that measures product diameter and length using a Waratah 626 harvester head to that of a treelength (TL) system that estimates dimensions. A field test compared the actual value cut to the maximum potential value suggested by the log bucking optimization program Assessment of Value by Individual...

  10. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  11. Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas

    NASA Astrophysics Data System (ADS)

    England, J. F.; Caldwell, R. J.; Sankovich, V.

    2011-12-01

    Extreme storm rainfall data are essential in the assessment of potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the past several decades, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the Southeast United States (e.g., Hurricane Floyd 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern if recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open source software has made this process time and resource efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for short-duration, 6-hour storms. Utilizing new methods and data, however, requires careful consideration of the potential limitations and caveats associated with the analysis and further evaluation of the newer storms within the context of historical storms from HMR51. Here, we provide a brief background on extreme rainfall in the Carolinas, along with an overview of the methods employed for converting MPR to depth-area relationships. Discussion of the issues and limitations, evaluation of the various techniques, and comparison to HMR51 storms and PMP values are also presented.

  12. Rib biomechanical properties exhibit diagnostic potential for accurate ageing in forensic investigations

    PubMed Central

    Bonicelli, Andrea; Xhemali, Bledar; Kranioti, Elena F.

    2017-01-01

    Age estimation remains one of the most challenging tasks in forensic practice when establishing a biological profile of unknown skeletonised remains. Morphological methods based on developmental markers of bones can provide accurate age estimates at a young age, but become highly unreliable for ages over 35 when all developmental markers disappear. This study explores the changes in the biomechanical properties of bone tissue and matrix, which continue to change with age even after skeletal maturity, and their potential value for age estimation. As a proof of concept we investigated the relationship of 28 variables at the macroscopic and microscopic level in rib autopsy samples from 24 individuals. Stepwise regression analysis produced a number of equations, one of which, with seven variables, showed R² = 0.949, a mean residual error of 2.13 yrs ± 0.4 (SD), and a maximum residual error of 2.88 yrs. For forensic purposes, using only bench-top machines in tests that can be carried out within 36 hrs, a set of just 3 variables produced an equation with R² = 0.902, a mean residual error of 3.38 yrs ± 2.6 (SD), and a maximum observed residual error of 9.26 yrs. This method outstrips all existing age-at-death methods based on ribs, thus providing a novel, accurate lab-based tool for the forensic investigation of human remains. The present application is optimised for fresh (uncompromised by taphonomic conditions) remains, but the potential of the principle and method is vast once the trends of the biomechanical variables are established for other environmental conditions and circumstances. PMID:28520764

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters for a mixture of normal distributions. In addition, local maxima of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  14. Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.

    2017-02-08

    Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexities, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov Chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Finally, of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin R̂c diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
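
    Of the metrics named above, the Gelman-Rubin diagnostic is easy to state concretely. The sketch below computes the potential scale reduction factor R̂ for m chains of length n; it is the generic MCMC diagnostic, not the SCALE implementation.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n."""
    chains = np.asarray(chains, float)            # shape (m, n)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()         # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)       # between-chain variance
    V = (n - 1) / n * W + B / n                   # pooled variance estimate
    return np.sqrt(V / W)                         # ~1 when chains have mixed
```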

  15. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    PubMed Central

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-01-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007. PMID:21776223

  16. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  17. On implementing maximum economic yield in commercial fisheries

    PubMed Central

    Dichmont, C. M.; Pascoe, S.; Kompas, T.; Punt, A. E.; Deng, R.

    2009-01-01

    Economists have long argued that a fishery that maximizes its economic potential usually will also satisfy its conservation objectives. Recently, maximum economic yield (MEY) has been identified as a primary management objective for Australian fisheries and is under consideration elsewhere. However, first attempts at estimating MEY as an actual management target for a real fishery (rather than a conceptual or theoretical exercise) have highlighted some substantial complexities generally unconsidered by fisheries economists. Here, we highlight some of the main issues encountered in our experience and their implications for estimating and transitioning to MEY. Using a bioeconomic model of an Australian fishery for which MEY is the management target, we note that unconstrained optimization may result in effort trajectories that would not be acceptable to industry or managers. Different assumptions regarding appropriate constraints result in different outcomes, each of which may be considered a valid MEY. Similarly, alternative treatments of prices and costs may result in differing estimates of MEY and their associated effort trajectories. To develop an implementable management strategy in an adaptive management framework, a set of assumptions must be agreed among scientists, economists, and industry and managers, indicating that operationalizing MEY is not simply a matter of estimating the numbers but requires strong industry commitment and involvement. PMID:20018676

  18. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    PubMed

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread chloride contamination of surface water is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  19. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    NASA Astrophysics Data System (ADS)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  20. Comparison of the A-Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light saturated electron transport rate and leaf dark respiration.

    PubMed

    Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei

    2009-02-01

    A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of the inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirement, we recommend combining grid search and non-linear techniques to fit Vcmax, Jmax, TPU, Rd and gm directly and simultaneously to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
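
    For the Rubisco-limited portion of the curve, the fit is a small non-linear least-squares problem. The sketch below fits Vcmax and Rd with the kinetic constants held fixed; the constants and data points are illustrative assumptions, and the paper's recommended approach additionally fits Jmax, TPU and gm simultaneously.

```python
import numpy as np
from scipy.optimize import curve_fit

# Kinetic constants held fixed (illustrative 25 degC values)
KC, KO, GAMMA_STAR, O2 = 272.4, 165.8, 37.4, 210.0

def a_c(cc, vcmax, rd):
    """Rubisco-limited net assimilation in the FvCB model."""
    return vcmax * (cc - GAMMA_STAR) / (cc + KC * (1.0 + O2 / KO)) - rd

# hypothetical points from the Rubisco-limited region of an A-Cc curve
cc = np.array([50.0, 80.0, 120.0, 160.0, 200.0])
a = np.array([0.4, 4.6, 9.7, 14.3, 18.4])
(vcmax_hat, rd_hat), _ = curve_fit(a_c, cc, a, p0=[60.0, 1.0])
print(vcmax_hat, rd_hat)   # ~100 and ~1.5 for these synthetic points
```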

  1. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

    The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
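
    The standard-departure method itself is a one-line computation once the station climatologies are known: standardize each neighbour's value, average the departures, and rescale with the target station's own mean and standard deviation. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def estimate_missing_extreme(neighbor_vals, neighbor_means, neighbor_stds,
                             target_mean, target_std):
    """Average the neighbours' standard departures, rescale to the target."""
    z = (np.asarray(neighbor_vals) - np.asarray(neighbor_means)) \
        / np.asarray(neighbor_stds)
    return target_mean + target_std * z.mean()

# hypothetical Tmax values at Mannar, Anuradhapura, Puttalam, Trincomalee
est = estimate_missing_extreme([33.1, 34.0, 33.5, 32.8],
                               [32.0, 33.2, 32.9, 32.1],
                               [1.1, 1.3, 1.2, 1.0],
                               target_mean=31.5, target_std=1.2)
```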

  2. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  3. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  4. Intracranial EEG potentials estimated from MEG sources: A new approach to correlate MEG and iEEG data in epilepsy.

    PubMed

    Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane

    2016-05-01

    Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm². We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising way to evaluate which MEG sources could be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016.
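
    The comparison step reduces to projecting source estimates through the forward model and correlating contact by contact. A minimal sketch with hypothetical matrix shapes (G: contacts × sources, S: sources × time):

```python
import numpy as np

def meg_estimated_ieeg(G, S):
    """Project MEG source time courses through the iEEG forward model."""
    return G @ S                      # (contacts x sources) @ (sources x time)

def contactwise_correlation(estimated, measured):
    """Pearson correlation between estimated and measured iEEG, per contact."""
    return np.array([np.corrcoef(e, m)[0, 1]
                     for e, m in zip(estimated, measured)])
```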

  5. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: to (1) assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions and (2) examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs with a dynamic rather than static climatological maximum based on the previous 100 years, for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
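
    A degree-heating-month style accumulation can be sketched as follows; the rolling-window length and the exact threshold conventions are assumptions for illustration, covering the two variants used above (+1 °C or +2σ above the climatological maximum):

```python
import numpy as np

def degree_heating_months(sst, clim_max, sigma, use_sigma=False, window=4):
    """Accumulated monthly thermal stress (degree-heating months).

    Threshold is the climatological maximum plus 1 degC (conventional)
    or plus 2*sigma (variability-based); window length is an assumption.
    """
    thresh = clim_max + (2.0 * sigma if use_sigma else 1.0)
    hotspot = np.clip(np.asarray(sst, float) - thresh, 0.0, None)
    # trailing-window accumulation of exceedances, in degC-months
    return np.convolve(hotspot, np.ones(window))[: len(hotspot)]
```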

  6. A digital simulation of the glacial-aquifer system in the northern three-fourths of Brown County, South Dakota

    USGS Publications Warehouse

    Emmons, P.J.

    1990-01-01

    A digital model was developed to simulate groundwater flow in a complex glacial-aquifer system that includes the Elm, Middle James, and Deep James aquifers in South Dakota. The average thickness of the aquifers ranges from 16 to 32 ft and the average hydraulic conductivity ranges from 240 to 300 ft/day. The maximum steady-state recharge to the aquifer system was estimated to be 7.0 in./yr, and the maximum potential steady-state evapotranspiration was estimated to be 35.4 in./yr. Maximum monthly recharge for 1985 ranged from zero in the winter to 2.5 in. in May. The potential monthly evapotranspiration for 1985 ranged from zero in the winter to 7.0 in. in July. The average difference between the simulated and observed water levels from steady-state conditions (pre-1983) was 0.78 ft and the average absolute difference was 4.59 ft for aquifer layer 1 (the Elm aquifer) from 22 observation wells and 3.49 ft and 5.10 ft, respectively, for aquifer layer 2 (the Middle James aquifer) from 13 observation wells. The average difference between the simulated and observed water levels from simulated monthly potentiometric heads for 1985 in aquifer layer 1 ranged from -2.54 ft in July to 0.59 ft in May and in aquifer layer 2 ranged from -1.22 ft in April to 4.98 ft in November. Sensitivity analysis of the steady-state model indicates that it is most sensitive to changes in recharge and least sensitive to changes in hydraulic conductivity. (USGS)

  7. Soil water content and evaporation determined by thermal parameters obtained from ground-based and remote measurements

    NASA Technical Reports Server (NTRS)

    Reginato, R. J.; Idso, S. B.; Jackson, R. D.; Vedder, J. F.; Blanchard, M. B.; Goettelman, R.

    1976-01-01

    Soil water contents from both smooth and rough bare soil were estimated from remotely sensed surface soil and air temperatures. An inverse relationship between two thermal parameters and gravimetric soil water content was found for Avondale loam when its water content was between air-dry and field capacity. These parameters, daily maximum minus minimum surface soil temperature and daily maximum soil minus air temperature, appear to describe the relationship reasonably well. These two parameters also describe relative soil water evaporation (actual/potential). Surface soil temperatures showed good agreement among three measurement techniques: in situ thermocouples, a ground-based infrared radiation thermometer, and the thermal infrared band of an airborne multispectral scanner.

  8. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  9. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  10. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy on point estimation.
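
    For reference, the conventional estimator that MLE-BD is compared against computes the Luria-Delbrück likelihood through the recursive Ma-Sandri-Sarkar formula, which is what makes it slow for large mutant counts. A minimal sketch (the optimizer bounds are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, nmax):
    """Luria-Delbrueck mutant-count pmf via the Ma-Sandri-Sarkar recursion."""
    p = np.zeros(nmax + 1)
    p[0] = np.exp(-m)
    for n in range(1, nmax + 1):
        p[n] = (m / n) * sum(p[j] / (n - j + 1) for j in range(n))
    return p

def ld_mle(counts):
    """ML estimate of the expected number of mutations m per culture."""
    counts = np.asarray(counts, dtype=int)
    nmax = int(counts.max())
    def nll(m):
        p = ld_pmf(m, nmax)
        return -np.sum(np.log(p[counts] + 1e-300))
    return minimize_scalar(nll, bounds=(1e-6, 50.0), method="bounded").x
```

    The O(nmax²) cost of the recursion at every likelihood evaluation is the bottleneck the birth-death formulation avoids.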

  11. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated at the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influences of the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate are investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  12. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates; these are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i=1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  13. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to that of the direct map. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.

  14. Heavy metals in vegetables and respective soils irrigated by canal, municipal waste and tube well waters.

    PubMed

    Ismail, Amir; Riaz, Muhammad; Akhtar, Saeed; Ismail, Tariq; Amir, Mamoona; Zafar-ul-Hye, Muhammad

    2014-01-01

    Heavy metal contamination of the food chain is of serious concern due to the potential risks involved. The results of this study revealed the highest concentrations of heavy metals in canal water, followed by sewerage and tube well water. Similarly, the vegetables and respective soils irrigated with canal water were found to have the highest heavy metal contamination, followed by sewerage- and tube-well-watered samples. However, the heavy metal content of the vegetables under study was below the limits set by FAO/WHO, except for lead in canal-water-irrigated spinach (0.59 mg kg⁻¹), radish pods (0.44 mg kg⁻¹) and bitter gourd (0.33 mg kg⁻¹). Estimated daily intakes of heavy metals through the consumption of the selected vegetables were found to be well below the maximum limits. However, a complete estimation of daily intake requires the inclusion of other dietary and non-dietary sources of heavy metal exposure.

  15. Value for money: protecting endangered species on Danish heathland.

    PubMed

    Strange, Niels; Jacobsen, Jette B; Thorsen, Bo J; Tarp, Peter

    2007-11-01

    Biodiversity policies in the European Union (EU) are mainly implemented through the Birds and Habitats Directives as well as the establishment of Natura 2000, a network of protected areas throughout the EU. Considerable resources must be allocated for fulfilling the Directives and the question of optimal allocation is as important as it is difficult. In general, economic evaluations of conservation targets at most consider the costs and seldom the welfare economic benefits. In the present study, we use welfare economic benefit estimates concerning the willingness-to-pay for preserving endangered species and for the aggregate area of heathland preserved in Denmark. Similarly, we obtain estimates of the welfare economic cost of habitat restoration and maintenance. Combining these welfare economic measures with expected species coverage, we are able to estimate the potential welfare economic contribution of a conservation network. We compare three simple nonprobabilistic strategies likely to be used in day-to-day policy implementation: i) a maximum selected area strategy, ii) a hotspot selection strategy, and iii) a minimizing cost strategy, and two more advanced and informed probabilistic strategies: i) a maximum expected coverage strategy and ii) a strategy for maximum expected welfare economic gain. We show that the welfare economic performance of the strategies differ considerably. The comparison between the expected coverage and expected welfare shows that for the case considered, one may identify an optimal protection level above which additional coverage only comes at increasing welfare economic loss.

  16. Implications of Extreme Life Span in Clonal Organisms: Millenary Clones in Meadows of the Threatened Seagrass Posidonia oceanica

    PubMed Central

    Arnaud-Haond, Sophie; Duarte, Carlos M.; Diaz-Almela, Elena; Marbà, Núria; Sintes, Tomas; Serrão, Ester A.

    2012-01-01

    The maximum size and age that clonal organisms can reach remains poorly known, although we do know that the largest natural clones can extend over hundreds or thousands of metres and potentially live for centuries. We made a review of findings to date, which reveal that the maximum clone age and size estimates reported in the literature are typically limited by the scale of sampling, and may grossly underestimate the maximum age and size of clonal organisms. A case study presented here shows the occurrence of clones of slow-growing marine angiosperm Posidonia oceanica at spatial scales ranging from metres to hundreds of kilometres, using microsatellites on 1544 sampling units from a total of 40 locations across the Mediterranean Sea. This analysis revealed the presence, with a prevalence of 3.5 to 8.9%, of very large clones spreading over one to several (up to 15) kilometres at the different locations. Using estimates from field studies and models of the clonal growth of P. oceanica, we estimated these large clones to be hundreds to thousands of years old, suggesting the evolution of general purpose genotypes with large phenotypic plasticity in this species. These results, obtained combining genetics, demography and model-based calculations, question present knowledge and understanding of the spreading capacity and life span of plant clones. These findings call for further research on these life history traits associated with clonality, considering their possible ecological and evolutionary implications. PMID:22312426

  17. A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.

    ERIC Educational Resources Information Center

    Wood, Terry M.; Safrit, Margaret J.

    1987-01-01

    A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…

  18. Theoretical Calculations on the Feasibility of Microalgal Biofuels: Utilization of Marine Resources Could Help Realizing the Potential of Microalgae

    PubMed Central

    Park, Hanwool

    2016-01-01

    Microalgae have long been considered one of the most promising feedstocks for biofuel production, with better characteristics than conventional energy crops. There has been a wide range of estimates of the feasibility of microalgal biofuels, based on various productivity assumptions and data from different scales. The theoretical maximum algal biofuel productivity, however, can be calculated from the amount of solar irradiance and the photosynthetic efficiency (PE), assuming other conditions are within the optimal range. Using actual surface solar irradiance data from around the world and the PE of algal culture systems, maximum algal biomass and biofuel productivities were calculated, and the feasibility of algal biofuel was assessed with these estimates. The results revealed that biofuel production would not easily meet the economic break-even point and may not be sustainable at a large scale with current algal biotechnology. Substantial reductions in production cost, improvements in lipid productivity, recycling of resources, and utilization of non-conventional resources will be necessary for feasible mass production of algal biofuel. Among the emerging technologies, cultivation of microalgae in the ocean shows great potential to meet the resource requirements and economic feasibility of algal biofuel production by utilizing various marine resources. PMID:27782372
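
    The core calculation is a unit conversion from irradiance through PE to dry biomass. A back-of-the-envelope sketch with assumed figures (not the paper's):

```python
# All figures are illustrative assumptions, not values from the paper.
irradiance = 20e6       # J m^-2 day^-1, assumed mean surface solar irradiance
pe = 0.05               # photosynthetic efficiency of the culture system
energy_content = 20e3   # J per g of dry algal biomass

productivity = irradiance * pe / energy_content
print(f"{productivity:.0f} g dry biomass m^-2 day^-1")   # -> 50
```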

  19. A model for predicting Xanthomonas arboricola pv. pruni growth as a function of temperature

    PubMed Central

    Llorente, Isidre; Montesinos, Emilio; Moragrega, Concepció

    2017-01-01

    A two-step modeling approach was used for predicting the effect of temperature on the growth of Xanthomonas arboricola pv. pruni, the causal agent of bacterial spot disease of stone fruit. The in vitro growth of seven strains was monitored at temperatures from 5 to 35°C with a Bioscreen C system, and a calibrating equation was generated for converting optical densities to viable counts. In primary modeling, Baranyi, Buchanan, and modified Gompertz equations were fitted to viable count growth curves over the entire temperature range. The modified Gompertz model showed the best fit to the data, and it was selected to estimate the bacterial growth parameters at each temperature. Secondary modeling of maximum specific growth rate as a function of temperature was performed by using the Ratkowsky model and its variations. The modified Ratkowsky model showed the best goodness of fit to maximum specific growth rate estimates, and it was validated successfully for the seven strains at four additional temperatures. The model generated in this work will be used for predicting temperature-based Xanthomonas arboricola pv. pruni growth rate and the derived potential daily doublings, and will be included as the inoculum potential component of a bacterial spot of stone fruit disease forecaster. PMID:28493954
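
    Secondary-model fitting of this kind is a small non-linear least-squares problem. The sketch below fits the basic Ratkowsky square-root model, sqrt(mu_max) = b(T - Tmin); the data points are hypothetical, and the modified Ratkowsky variant used in the paper adds a high-temperature term.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratkowsky(T, b, Tmin):
    """Basic Ratkowsky model: sqrt(mu_max) = b * (T - Tmin)."""
    return (b * (T - Tmin)) ** 2

# hypothetical (temperature, mu_max) pairs for illustration
T = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
mu = np.array([0.02, 0.08, 0.18, 0.32, 0.50])
(b_hat, tmin_hat), _ = curve_fit(ratkowsky, T, mu, p0=[0.03, 4.0])
```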

  20. Theoretical Calculations on the Feasibility of Microalgal Biofuels: Utilization of Marine Resources Could Help Realizing the Potential of Microalgae.

    PubMed

    Park, Hanwool; Lee, Choul-Gyun

    2016-11-01

    Microalgae have long been considered one of the most promising feedstocks for biofuel production, with better characteristics than conventional energy crops. There have been a wide range of estimations of the feasibility of microalgal biofuels based on various productivity assumptions and data from different scales. The theoretical maximum algal biofuel productivity, however, can be calculated from the amount of solar irradiance and the photosynthetic efficiency (PE), assuming other conditions are within the optimal range. Using actual surface solar irradiance data from around the world and the PE of algal culture systems, maximum algal biomass and biofuel productivities were calculated, and the feasibility of algal biofuel was assessed with these estimates. The results revealed that biofuel production would not easily meet the economic break-even point and may not be sustainable at large scale with current algal biotechnology. Substantial reductions in production cost, improvements in lipid productivity, recycling of resources, and utilization of non-conventional resources will be necessary for feasible mass production of algal biofuel. Among the emerging technologies, cultivation of microalgae in the ocean shows great potential to meet the resource requirements and economic feasibility of algal biofuel production by utilizing various marine resources.
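
    The theoretical ceiling described here is simple arithmetic: productivity is bounded by irradiance times PE divided by the energy content of the biomass. A quick sketch with illustrative numbers (the paper uses site-specific irradiance data):

      # Theoretical maximum biomass productivity from irradiance and PE.
      irradiance = 200.0        # mean surface solar irradiance, W/m^2 (assumed)
      pe = 0.05                 # photosynthetic efficiency of the culture system
      energy_density = 20.0e6   # energy content of algal biomass, J/kg (assumed)

      j_per_day = irradiance * 86400.0              # incident energy, J/m^2/day
      productivity = j_per_day * pe / energy_density
      print(f"{productivity * 1000:.0f} g/m^2/day")  # ~43 g/m^2/day for these inputs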

  1. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.

  2. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak-signal processing. We find that this method also has potential for GNSS anti-spoofing, since a spoofing attack that misleads the positioning and timing result will distort the MLE cost function. Based on this method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection on the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings of real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can greatly improve the availability of GNSS service under spoofing attacks. PMID:28665318

  3. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak-signal processing. We find that this method also has potential for GNSS anti-spoofing, since a spoofing attack that misleads the positioning and timing result will distort the MLE cost function. Based on this method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection on the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings of real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can greatly improve the availability of GNSS service under spoofing attacks.
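
    The GLRT logic used here can be illustrated with a toy detection problem: maximize the likelihood under each hypothesis and compare twice the log-likelihood ratio to a chi-squared threshold. The sketch below uses a generic Gaussian mean-shift model, not the paper's GNSS signal model.

      import numpy as np
      from scipy.stats import chi2

      def glrt_detect(x, sigma=1.0, alpha=1e-3):
          # H0: x ~ N(0, sigma^2); H1: x ~ N(mu, sigma^2) with mu unknown.
          # The MLE of mu under H1 is the sample mean, so the GLRT statistic
          # 2*(logL1 - logL0) = n*mean(x)^2 / sigma^2 is chi-squared(1) under H0.
          n = len(x)
          stat = n * np.mean(x) ** 2 / sigma ** 2
          return stat > chi2.ppf(1.0 - alpha, df=1), stat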

  4. Uncertainty in flood damage estimates and its potential effect on investment decisions

    NASA Astrophysics Data System (ADS)

    Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.

    2016-01-01

    This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages are on the order of a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
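
    The Monte Carlo propagation described here amounts to repeatedly sampling a damage function and a maximum-damage value and re-evaluating the loss. A minimal sketch, with a hypothetical three-function library standing in for the 272 functions used in the paper:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical damage-function library: each maps depth (m) to a damage
      # fraction in [0, 1]; the paper samples from 272 functions of 7 models.
      library = [
          lambda d: np.clip(d / 5.0, 0, 1),            # linear up to 5 m
          lambda d: np.clip(np.sqrt(d) / 2.5, 0, 1),   # concave
          lambda d: 1.0 - np.exp(-0.5 * d),            # exponential saturation
      ]

      depths = np.array([0.3, 0.8, 1.5, 2.2])          # flooded buildings, depth in m
      n_draws = 10_000
      losses = np.empty(n_draws)
      for i in range(n_draws):
          f = library[rng.integers(len(library))]      # sample a damage function
          max_damage = rng.uniform(100_000, 300_000)   # sample max damage (EUR)
          losses[i] = (f(depths) * max_damage).sum()

      print(np.percentile(losses, [5, 50, 95]))        # uncertainty band on the loss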

  5. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package

    PubMed Central

    Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.

    2015-01-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multivariable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides a powerful yet easy-to-use tool, removing impediments to putting theory into practice. PMID:26046009

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. Pathogen reduction co-benefits of nutrient best management practices

    PubMed Central

    Wainger, Lisa A.; Barber, Mary C.

    2016-01-01

    Background: Many of the practices currently underway to reduce nitrogen, phosphorus, and sediment loads entering the Chesapeake Bay have also been observed to support reduction of disease-causing pathogen loadings. We quantify how implementation of these practices, proposed to meet the nutrient and sediment caps prescribed by the Total Maximum Daily Load (TMDL), could reduce pathogen loadings and provide public health co-benefits within the Chesapeake Bay system. Methods: We used published data on the pathogen reduction potential of management practices, and baseline fecal coliform loadings estimated as part of prior modeling, to estimate the reduction in pathogen loadings to the mainstem Potomac River and Chesapeake Bay attributable to practices implemented as part of the TMDL. We then compare these estimates with the baseline fecal coliform loadings to estimate the total pathogen reduction potential of the TMDL. Results: We estimate that the TMDL practices have the potential to decrease disease-causing pathogen loads from all point and non-point sources to the mainstem Potomac River and the entire Chesapeake Bay watershed by 19% and 27%, respectively. These numbers are likely to be underestimates due to data limitations that forced us to omit some practices from analysis. Discussion: Based on known impairments and disease incidence rates, we conclude that efforts to reduce nutrients may create substantial health co-benefits by improving the safety of water-contact recreation and seafood consumption. PMID:27904807

  8. Pathogen reduction co-benefits of nutrient best management practices.

    PubMed

    Richkus, Jennifer; Wainger, Lisa A; Barber, Mary C

    2016-01-01

    Many of the practices currently underway to reduce nitrogen, phosphorus, and sediment loads entering the Chesapeake Bay have also been observed to support reduction of disease-causing pathogen loadings. We quantify how implementation of these practices, proposed to meet the nutrient and sediment caps prescribed by the Total Maximum Daily Load (TMDL), could reduce pathogen loadings and provide public health co-benefits within the Chesapeake Bay system. We used published data on the pathogen reduction potential of management practices, and baseline fecal coliform loadings estimated as part of prior modeling, to estimate the reduction in pathogen loadings to the mainstem Potomac River and Chesapeake Bay attributable to practices implemented as part of the TMDL. We then compare these estimates with the baseline fecal coliform loadings to estimate the total pathogen reduction potential of the TMDL. We estimate that the TMDL practices have the potential to decrease disease-causing pathogen loads from all point and non-point sources to the mainstem Potomac River and the entire Chesapeake Bay watershed by 19% and 27%, respectively. These numbers are likely to be underestimates due to data limitations that forced us to omit some practices from analysis. Based on known impairments and disease incidence rates, we conclude that efforts to reduce nutrients may create substantial health co-benefits by improving the safety of water-contact recreation and seafood consumption.

  9. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  10. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  11. An estimate of the noise shielding on the fuselage resulting from installing a short duct around an advanced propeller

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.

    1988-01-01

    A simple barrier shielding model was used to estimate the amount of noise shielding on the fuselage that could result from installing a short duct around a wing-mounted advanced propeller. With the propeller located one-third of the duct length from the inlet, estimates for the maximum blade passing tone attenuation varied from 7 dB for a duct 0.25 propeller diameter long to 16.75 dB for a duct 1 diameter long. Attenuations for the higher harmonics would be even larger because of their shorter wavelengths relative to the duct length. These estimates show that the fuselage noise reduction potential of a ducted compared with an unducted propeller is significant. Even more reduction might occur if acoustic attenuation material were installed in the duct.
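
    As a point of reference, a classic simple-barrier estimate of this kind is the Maekawa correction, which maps the Fresnel number of the diffraction path to an attenuation. This is a generic barrier formula offered for illustration, not necessarily the exact model used in the report.

      import numpy as np

      def barrier_attenuation_db(path_difference_m, freq_hz, c=343.0):
          # Maekawa's chart approximation: about 10*log10(3 + 20*N) dB,
          # with Fresnel number N = 2*delta/lambda for path difference delta.
          N = 2.0 * path_difference_m * freq_hz / c
          return 10.0 * np.log10(3.0 + 20.0 * N)

      # Longer ducts give a larger path difference and more attenuation, and
      # higher harmonics (shorter wavelengths) are attenuated more, consistent
      # with the report's observations.
      print(barrier_attenuation_db(0.05, 1000.0))   # ~9.5 dB for delta = 5 cm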

  12. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high level of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many methods of parameter estimation exist; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model, implementing both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation, so that a conventional filtering method is provided alongside one that considers data uncertainty. A numerical model experiment was conducted in which the Bayesian Maximum Entropy Filter was combined with a hypothetical MODFLOW groundwater model architecture, and virtual observation wells were used to observe the simulated groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian Maximum Entropy Filter provides good real-time parameter estimates.
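
    For reference, the basic Kalman update that such filters build on is compact; the sketch below shows a scalar parameter update from one well observation. BME filtering generalizes this by admitting soft (interval or probabilistic) data; all values here are purely illustrative.

      def kalman_update(x_est, P, z, R, H=1.0):
          """One scalar Kalman update.
          x_est, P: prior parameter estimate and its variance
          z, R:     observed head at a well and its measurement variance
          H:        linearized observation sensitivity d(head)/d(parameter)
          """
          K = P * H / (H * P * H + R)         # Kalman gain
          x_new = x_est + K * (z - H * x_est)
          P_new = (1.0 - K * H) * P
          return x_new, P_new

      # e.g. a prior hydraulic-conductivity estimate of 10 m/d with variance 4:
      x, P = kalman_update(10.0, 4.0, z=8.5, R=1.0)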

  13. In Vivo potassium-39 NMR spectra by the burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

    The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
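
    The Burg method fits an autoregressive model by minimizing forward and backward prediction errors, and the maximum-entropy spectrum follows from the AR coefficients. A compact generic implementation (not the authors' code):

      import numpy as np

      def burg_ar(x, order):
          # Burg recursion: returns AR coefficients a (a[0] = 1) and the
          # final prediction-error power E.
          x = np.asarray(x, dtype=float)
          f = x.copy()                  # forward prediction errors
          b = x.copy()                  # backward prediction errors
          a = np.array([1.0])
          E = np.mean(x ** 2)
          for _ in range(order):
              ff, bb = f[1:], b[:-1]
              k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
              f, b = ff + k * bb, bb + k * ff
              a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
              E *= 1.0 - k ** 2
          return a, E

      def burg_psd(a, E, nfft=512, fs=1.0):
          # Maximum-entropy spectrum of the AR model, up to a sampling-interval
          # scale factor: S(f) = E / |A(f)|^2.
          w = np.fft.rfftfreq(nfft, d=1.0 / fs)
          A = np.fft.rfft(a, nfft)
          return w, E / np.abs(A) ** 2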

  14. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, producing more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a …

  15. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life and livelihoods in financial, environmental, and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution via Bayes' theorem, with the Metropolis-Hastings algorithm used to cope with the high-dimensional state spaces that defeat plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, and therefore gives better predictions of maximum river flow in Sabah.
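
    A minimal random-walk Metropolis sampler for the GEV parameters can be written with scipy's genextreme density. The priors, step size, and shape-parameter sign convention below are assumptions of this sketch, not the paper's choices.

      import numpy as np
      from scipy.stats import genextreme, norm

      def log_post(theta, data):
          mu, log_sigma, xi = theta
          # scipy's shape c = -xi under the climatological sign convention
          lp = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
          lp += norm.logpdf(theta, 0.0, 100.0).sum()   # vague priors (assumed)
          return lp

      def metropolis(data, n_iter=20_000, step=0.05):
          rng = np.random.default_rng(0)
          theta = np.array([data.mean(), np.log(data.std()), 0.1])
          lp = log_post(theta, data)
          samples = []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(3)
              lp_prop = log_post(prop, data)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                  theta, lp = prop, lp_prop
              samples.append(theta.copy())
          return np.array(samples)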

  16. Estimating Long-Term Survival Temperatures at the Assemblage Level in the Marine Environment: Towards Macrophysiology

    PubMed Central

    Richard, Joëlle; Morley, Simon Anthony; Thorne, Michael A. S.; Peck, Lloyd Samuel

    2012-01-01

    Defining ecologically relevant upper temperature limits of species is important in the context of environmental change. The approach used in the present paper estimates the relationship between rates of temperature change and upper temperature limits for survival in order to evaluate the maximum long-term survival temperature (Ts). This new approach integrates both the exposure time and the exposure temperature in the evaluation of temperature limits. Using data previously published for different temperate and Antarctic marine environments, we calculated Ts in each environment, which allowed us to calculate a new index: the Warming Allowance (WA). This index is defined as the maximum environmental temperature increase which an ectotherm in a given environment can tolerate, possibly with a decrease in performance but without endangering survival over seasonal or lifetime time-scales. It is calculated as the difference between maximum long-term survival temperature (Ts) and mean maximum habitat temperature. It provides a measure of how close a species, assemblage or fauna are living to their temperature limits for long-term survival and hence their vulnerability to environmental warming. In contrast to data for terrestrial environments showing that warming tolerance increases with latitude, results here for marine environments show a less clear pattern as the smallest WA value was for the Peru upwelling system. The method applied here, relating upper temperature limits to rate of experimental warming, has potential for wide application in the identification of faunas with little capacity to survive environmental warming. PMID:22509340

  17. Real-time tsunami inundation forecasting and damage mapping towards enhancing tsunami disaster resiliency

    NASA Astrophysics Data System (ADS)

    Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.

    2014-12-01

    With the use of modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation, and mapping to enhance society's resilience in the aftermath of a major tsunami disaster. The system consists of a fusion of real-time crustal deformation monitoring and fault model estimation (Ohta et al., 2012), high-performance real-time tsunami propagation/inundation modeling on NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near-field) earthquake is triggered, the first response of the system is to identify the tsunami source model by applying the RAPiD algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. As performed on the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system runs the tsunami propagation and inundation model, which was optimized for the vector supercomputer SX-ACE, to estimate tsunami time series at offshore/coastal tide gauges (for travel and arrival times), the extent of the inundation zone, and the maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by the finite difference method. The merged bathymetry and topography grids are prepared at 10 m resolution to better estimate tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed people and structures using census data, then estimates the numbers of potential deaths and damaged structures by applying tsunami fragility curves (Koshimura et al., 2013). Once the tsunami source model is determined, the system is designed to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized in their emergency response activities. In 2014, the system was verified through case studies of the 2011 Tohoku event and potential earthquake scenarios along the Nankai Trough with regard to its capability and robustness.
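
    The damage-estimation step applies a fragility curve to the modeled flow depth in each grid cell. A common functional form is a lognormal CDF of depth; the sketch below uses hypothetical coefficients, not the fitted parameters of Koshimura et al. (2013).

      import numpy as np
      from scipy.stats import norm

      def damage_probability(depth_m, median=2.0, beta=0.6):
          # Lognormal fragility curve: P(structural damage | flow depth).
          # median and beta are hypothetical placeholders.
          depth_m = np.maximum(depth_m, 1e-6)
          return norm.cdf((np.log(depth_m) - np.log(median)) / beta)

      # expected number of damaged structures over grid cells
      depths = np.array([0.5, 1.2, 3.4])        # modeled max flow depth, m
      structures = np.array([120, 80, 45])      # structures per cell (census)
      expected_damage = (damage_probability(depths) * structures).sum()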

  18. A mesic maximum in biological water use demarcates biome sensitivity to aridity shifts.

    PubMed

    Good, Stephen P; Moore, Georgianne W; Miralles, Diego G

    2017-12-01

    Biome function is largely governed by how efficiently available resources can be used, and yet for water, the ratio of direct biological resource use (transpiration, E_T) to total supply (annual precipitation, P) at ecosystem scales remains poorly characterized. Here, we synthesize field, remote sensing and ecohydrological modelling estimates to show that the biological water use fraction (E_T/P) reaches a maximum under mesic conditions; that is, when evaporative demand (potential evapotranspiration, E_P) slightly exceeds supplied precipitation. We estimate that this mesic maximum in E_T/P occurs at an aridity index (defined as E_P/P) between 1.3 and 1.9. The observed global average aridity of 1.8 falls within this range, suggesting that the biosphere is, on average, configured to transpire the largest possible fraction of global precipitation for the current climate. A unimodal E_T/P distribution indicates that both dry regions subjected to increasing aridity and humid regions subjected to decreasing aridity will suffer declines in the fraction of precipitation that plants transpire for growth and metabolism. Given the uncertainties in the prediction of future biogeography, this framework provides a clear and concise determination of ecosystems' sensitivity to climatic shifts, as well as expected patterns in the amount of precipitation that ecosystems can effectively use.

  19. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  20. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] is explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
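
    As a generic illustration of computing an NLSE (not the paper's matrix-calculus derivation), the Gauss-Newton iteration linearizes the residuals and solves a least-squares subproblem at each step:

      import numpy as np

      def gauss_newton(f, jac, beta0, y, tol=1e-8, max_iter=50):
          # Minimize ||y - f(beta)||^2 by iterating
          # beta <- beta + (J^T J)^{-1} J^T r, solved here via lstsq.
          beta = np.asarray(beta0, dtype=float)
          for _ in range(max_iter):
              r = y - f(beta)                  # residuals at current beta
              J = jac(beta)                    # Jacobian of f w.r.t. beta
              step = np.linalg.lstsq(J, r, rcond=None)[0]
              beta = beta + step
              if np.linalg.norm(step) < tol:
                  break
          return beta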

  1. Measurement and dynamics of the spatial distribution of an electron localized at a metal-dielectric interface

    NASA Astrophysics Data System (ADS)

    Bezel, Ilya; Gaffney, Kelly J.; Garrett-Roe, Sean; Liu, Simon H.; Miller, André D.; Szymanski, Paul; Harris, Charles B.

    2004-01-01

    The ability of time- and angle-resolved two-photon photoemission to estimate the size distribution of electron localization in the plane of a metal-adsorbate interface is discussed. It is shown that the width of angular distribution of the photoelectric current is inversely proportional to the electron localization size within the most common approximations in the description of image potential states. The localization of the n=1 image potential state for two monolayers of butyronitrile on Ag(111) is used as an example. For the delocalized n=1 state, the shape of the signal amplitude as a function of momentum parallel to the surface changes rapidly with time, indicating efficient intraband relaxation on a 100 fs time scale. For the localized state, little change was observed. The latter is related to the constant size distribution of electron localization, which is estimated to be a Gaussian with a 15±4 Å full width at half maximum in the plane of the interface. A simple model was used to study the effect of a weak localization potential on the overall width of the angular distribution of the photoemitted electrons, which exhibited little sensitivity to the details of the potential. This substantiates the validity of the localization size estimate.

  2. Pre-incubation with cyclosporine A potentiates its inhibitory effects on pitavastatin uptake mediated by recombinantly expressed cynomolgus monkey hepatic organic anion transporting polypeptide.

    PubMed

    Takahashi, Tsuyoshi; Ohtsuka, Tatsuyuki; Uno, Yasuhiro; Utoh, Masahiro; Yamazaki, Hiroshi; Kume, Toshiyuki

    2016-11-01

    Cyclosporine A, an inhibitor of hepatic organic anion transporting polypeptides (OATPs), reportedly increased plasma concentrations of probe substrates, although its maximum unbound blood concentrations were lower than the experimental half-maximal inhibitory concentrations (IC50). Pre-incubation with cyclosporine A in vitro before simultaneous incubation with probes has been reported to potentiate its inhibitory effects on recombinant human OATP-mediated probe uptake. In the present study, the effects of cyclosporine A and rifampicin on recombinant cynomolgus monkey OATP-mediated pitavastatin uptake were investigated in pre- and simultaneous incubation systems. Pre-incubation with cyclosporine A, but not with rifampicin, decreased the apparent IC50 values for recombinant cynomolgus monkey OATP1B1- and OATP1B3-mediated pitavastatin uptake. Applying the co-incubated IC50 values in static-model R values (1 + theoretical maximum unbound inhibitor concentration at the inlet to the liver / inhibition constant), 1.1 in monkeys and 1.3 in humans for recombinant cynomolgus monkey and human OATP1B1-mediated pitavastatin uptake, might result in poor prediction of drug interaction magnitudes. In contrast, the lowered IC50 values after pre-incubation with cyclosporine A provided better prediction, with R values of 3.9 for monkeys and 2.7 for humans, when the estimated maximum cyclosporine A concentrations at the inlet to the liver were used. These results suggest that the enhanced inhibitory potential of perpetrator medicines by pre-incubation on cynomolgus monkey OATP-mediated pitavastatin uptake in vitro could be of value for the precise estimation of drug interaction magnitudes in silico, in accordance with the findings from pre-administration of inhibitors on pitavastatin pharmacokinetics validated in monkeys.

  3. Borophane as a Benchmate of Graphene: A Potential 2D Material for Anode of Li and Na-Ion Batteries.

    PubMed

    Jena, Naresh K; Araujo, Rafael B; Shukla, Vivekanand; Ahuja, Rajeev

    2017-05-17

    Borophene, a single-atomic-layer sheet of boron (Science 2015, 350, 1513), is a rather new entrant into the burgeoning class of 2D materials. Borophene exhibits anisotropic metallic properties, whereas its hydrogenated counterpart borophane is reported to be a gapless Dirac material lying on the same bench as the celebrated graphene. Interestingly, this transition also rendered borophane stable, considering the fact that borophene was synthesized under ultrahigh vacuum conditions on a metallic (Ag) substrate. On the basis of first-principles density functional theory computations, we have investigated the possibilities of borophane as a potential Li/Na-ion battery anode material. We obtained a binding energy of -2.58 eV (-1.08 eV) for a Li (Na) adatom on borophane, and Bader charge analysis revealed that the Li (Na) atom exists in the Li+ (Na+) state. Further, on binding with Li/Na, borophane exhibited metallic properties as evidenced by the electronic band structure. We found that diffusion pathways for Li/Na on the borophane surface are anisotropic, with the x direction being the favorable one with barriers of 0.27 and 0.09 eV, respectively. While assessing the Li-ion anode performance, we estimated that the maximum Li content is Li0.445B2H2, which gives rise to a material with a maximum theoretical specific capacity of 504 mAh/g together with an average voltage of 0.43 V versus Li/Li+. Likewise, for Na-ion the maximum theoretical capacity and average voltage were estimated to be 504 mAh/g and 0.03 V versus Na/Na+, respectively. These findings unambiguously suggest that borophane can be a potential addition to the map of Li and Na-ion anode materials and can rival some of the recently reported 2D materials, including graphene.
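
    The quoted 504 mAh/g follows directly from Faraday's law applied to the stated maximum stoichiometry; a quick check:

      F = 96485.3                       # Faraday constant, C/mol
      M_B2H2 = 2 * 10.811 + 2 * 1.008   # molar mass of the B2H2 host, g/mol
      x = 0.445                         # max Li (or Na) per B2H2 from the study

      # specific capacity in mAh/g: x*F (C/mol) / M (g/mol) / 3.6 (C per mAh)
      capacity = x * F / (3.6 * M_B2H2)
      print(round(capacity))            # ~504 mAh/g, matching the reported value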

  4. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  5. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the most commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
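
    The moisture maximization at the core of the study scales an observed storm by the ratio of maximum to observed precipitable water. In code the deterministic estimate is a one-liner, which makes the paper's point about missing uncertainty easy to see. Illustrative numbers only:

      # Traditional (deterministic) moisture maximization:
      # PMP = P_storm * (W_max / W_storm), where W is precipitable water.
      p_storm = 180.0   # observed storm precipitation, mm (illustrative)
      w_storm = 45.0    # precipitable water during the storm, mm
      w_max = 70.0      # maximum historical precipitable water, same season, mm

      pmp = p_storm * (w_max / w_storm)   # a single value, no uncertainty attached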

  6. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    PubMed

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  7. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate the same (unknown) optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
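
    For a Gaussian predictive distribution the CRPS has a closed form, so the two estimation criteria can be compared directly. A minimal sketch, fitting a constant mean and spread by minimum CRPS and by maximum likelihood (the data here are synthetic):

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def crps_gaussian(y, mu, sigma):
          # Closed-form CRPS of N(mu, sigma^2) evaluated at observation y.
          z = (y - mu) / sigma
          return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                          - 1.0 / np.sqrt(np.pi))

      y = np.random.default_rng(1).normal(2.0, 1.5, 500)

      def mean_crps(p):   return crps_gaussian(y, p[0], np.exp(p[1])).mean()
      def neg_loglik(p):  return -norm.logpdf(y, p[0], np.exp(p[1])).sum()

      p_crps = minimize(mean_crps, [0.0, 0.0]).x   # minimum-CRPS coefficients
      p_ml   = minimize(neg_loglik, [0.0, 0.0]).x  # maximum-likelihood coefficients
      # With the correct (Gaussian) assumption the two should nearly coincide.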

  8. An Audit of Repeat Testing at an Academic Medical Center: Consistency of Order Patterns With Recommendations and Potential Cost Savings.

    PubMed

    Hueth, Kyle D; Jackson, Brian R; Schmidt, Robert L

    2018-05-31

    To evaluate the prevalence of potentially unnecessary repeat testing (PURT) and the associated economic burden for an inpatient population at a large academic medical facility. We evaluated all inpatient test orders during 2016 for PURT by comparing the intertest times to published recommendations. Potential cost savings were estimated using the Centers for Medicare & Medicaid Services maximum allowable reimbursement rate. We evaluated result positivity as a determinant of PURT through logistic regression. Of the evaluated 4,242 repeated target tests, 1,849 (44%) were identified as PURT, representing an estimated cost-savings opportunity of $37,376. Collectively, the association of result positivity and PURT was statistically significant (relative risk, 1.2; 95% confidence interval, 1.1-1.3; P < .001). PURT contributes to unnecessary health care costs. We found that a small percentage of providers account for the majority of PURT, and PURT is positively associated with result positivity.

  9. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, and standard reduction potential of the redox couple, and to the interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to a numerical representation of the potential transients and an estimate of the influence of the diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1'-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
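
    Under a finite linear diffusion model, the steady-state shuttle current the additive can sustain is the familiar limiting-current expression i = nFDC/L, which sets the maximum permissible overcharge rate. The numbers below are illustrative assumptions, not the paper's values:

      n = 1          # electrons transferred per redox event
      F = 96485.3    # Faraday constant, C/mol
      D = 1.0e-6     # diffusion coefficient of the additive, cm^2/s (assumed)
      C = 5.0e-5     # additive concentration, mol/cm^3 (0.05 M, assumed)
      L = 0.02       # interelectrode distance, cm (assumed)

      i_lim = n * F * D * C / L   # A/cm^2, max sustainable overcharge current
      print(f"max overcharge current density: {i_lim * 1e3:.2f} mA/cm^2")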

  10. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
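
    The defective logistic idea can be fitted in a few lines: if a fraction eps of true cases is mislabeled as controls, the probability of observing a "case" label is (1-eps)p(x), and beta and eps are estimated jointly by maximum likelihood. This is a simplified sketch of that idea; the paper's exact sampling design may differ.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      def neg_loglik(params, X, y):
          beta, logit_eps = params[:-1], params[-1]
          eps = expit(logit_eps)          # mislabeling rate, kept in (0, 1)
          p = expit(X @ beta)             # true disease probability
          q = (1.0 - eps) * p             # P(observed label = case)
          q = np.clip(q, 1e-12, 1 - 1e-12)
          return -(y * np.log(q) + (1 - y) * np.log(1 - q)).sum()

      def fit(X, y):
          x0 = np.zeros(X.shape[1] + 1)   # [beta..., logit_eps]
          return minimize(neg_loglik, x0, args=(X, y), method="BFGS").x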

  11. Applications of flood depth from rapid post-event footprint generation

    NASA Astrophysics Data System (ADS)

    Booth, Naomi; Millinship, Ian

    2015-04-01

    Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses, deploy claims adjusters, and ultimately assist the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems like an obvious choice. However, there are a number of important drawbacks which limit their utility in the context of flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (which is important, as the maximum extent would ideally be captured), and how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainties in the flood extents exist where dry surfaces give similar return signals to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for producing high-resolution footprints of maximum flood extent which do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same measured locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow is extracted from the hazard maps to create a flood footprint. Georeferenced ground, aerial and satellite images are used to establish defence integrity, highlight breach locations and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to actual damage and resultant loss which become available after the event. In this way, we can evaluate depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at an individual property level to calibrate property-type-specific depth-damage functions.
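
    Estimating the return period of a gauged peak reduces to one line once an extreme-value distribution has been fitted to the annual maxima at that gauge. A sketch using scipy's GEV (the choice of distribution and all values here are assumptions of this example):

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(7)
      # stand-in for a 40-year annual-maximum flow record at one gauge
      annual_max = genextreme.rvs(c=-0.1, loc=100, scale=30, size=40,
                                  random_state=rng)

      params = genextreme.fit(annual_max)          # fit GEV to the record
      peak_flow = 210.0                            # event peak at this gauge, m^3/s
      T = 1.0 / genextreme.sf(peak_flow, *params)  # return period in years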

  12. Allowable levels of take for the trade in Nearctic songbirds.

    PubMed

    Johnson, Fred A; Walters, Matthew A H; Boomer, G Scott

    2012-06-01

    The take of Nearctic songbirds for the caged-bird trade is an important cultural and economic activity in Mexico, but its sustainability has been questioned. We relied on the theta-logistic population model to explore options for setting allowable levels of take for 11 species of passerines that were subject to legal take in Mexico in 2010. Because estimates of population size necessary for making periodic adjustments to levels of take are not routinely available, we examined the conditions under which a constant level of take might contribute to population depletion (i.e., a population below its level of maximum net productivity). The chance of depleting a population is highest when levels of take are based on population sizes that happen to be much lower or higher than the level of maximum net productivity, when environmental variation is relatively high and serially correlated, and when the interval between estimates of population size is relatively long (≥5 years). To estimate demographic rates of songbirds involved in the Mexican trade we relied on published information and allometric relationships to develop probability distributions for key rates, and then sampled from those distributions to characterize the uncertainty in potential levels of take. Estimates of the intrinsic rate of growth (r) were highly variable, but median estimates were consistent with those expected for relatively short-lived, highly fecund species. Allowing for the possibility of nonlinear density dependence generally resulted in allowable levels of take that were lower than would have been the case under an assumption of linearity. Levels of take authorized by the Mexican government in 2010 for the 11 species we examined were small in comparison to relatively conservative allowable levels of take (i.e., those intended to achieve 50% of maximum sustainable yield). However, the actual levels of take in Mexico are unknown and almost certainly exceed the authorized take. Also, the take of Nearctic songbirds in other Latin American and Caribbean countries ultimately must be considered in assessing population-level impacts.
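
    For the theta-logistic model used here, the level of maximum net productivity and the corresponding sustainable take have closed forms, so a conservative allowable take is easy to compute. A sketch with hypothetical demographic values:

      def allowable_take(r, K, theta, fraction=0.5):
          # Production P(N) = r*N*(1 - (N/K)**theta) peaks at
          # N_mnp = K * (1 + theta)**(-1/theta); MSY = P(N_mnp).
          N_mnp = K * (1.0 + theta) ** (-1.0 / theta)
          msy = r * N_mnp * (1.0 - (N_mnp / K) ** theta)
          return fraction * msy    # e.g. 50% of MSY as a conservative level

      # hypothetical songbird population: r = 0.6, K = 1e6, theta = 1
      print(allowable_take(0.6, 1e6, 1.0))   # 75000 birds/year at 50% of MSY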

  13. Allowable levels of take for the trade in Nearctic songbirds

    USGS Publications Warehouse

    Johnson, Fred A.; Walters, Matthew A.H.; Boomer, G. Scott

    2012-01-01

    The take of Nearctic songbirds for the caged-bird trade is an important cultural and economic activity in Mexico, but its sustainability has been questioned. We relied on the theta-logistic population model to explore options for setting allowable levels of take for 11 species of passerines that were subject to legal take in Mexico in 2010. Because estimates of population size necessary for making periodic adjustments to levels of take are not routinely available, we examined the conditions under which a constant level of take might contribute to population depletion (i.e., a population below its level of maximum net productivity). The chance of depleting a population is highest when levels of take are based on population sizes that happen to be much lower or higher than the level of maximum net productivity, when environmental variation is relatively high and serially correlated, and when the interval between estimation of population size is relatively long (≥5 years). To estimate demographic rates of songbirds involved in the Mexican trade we relied on published information and allometric relationships to develop probability distributions for key rates, and then sampled from those distributions to characterize the uncertainty in potential levels of take. Estimates of the intrinsic rate of growth (r) were highly variable, but median estimates were consistent with those expected for relatively short-lived, highly fecund species. Allowing for the possibility of nonlinear density dependence generally resulted in allowable levels of take that were lower than would have been the case under an assumption of linearity. Levels of take authorized by the Mexican government in 2010 for the 11 species we examined were small in comparison to relatively conservative allowable levels of take (i.e., those intended to achieve 50% of maximum sustainable yield). However, the actual levels of take in Mexico are unknown and almost certainly exceed the authorized take. Also, the take of Nearctic songbirds in other Latin American and Caribbean countries ultimately must be considered in assessing population-level impacts.

  14. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  15. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.

    PubMed

    Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne

    2016-04-01

    We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy, for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans.

  16. Potential for reduced methane and carbon dioxide emissions from livestock and pasture management in the tropics

    PubMed Central

    Thornton, Philip K.; Herrero, Mario

    2010-01-01

    We estimate the potential reductions in methane and carbon dioxide emissions from several livestock and pasture management options in the mixed and rangeland-based production systems in the tropics. The impacts of adoption of improved pastures, intensifying ruminant diets, changes in land-use practices, and changing breeds of large ruminants on the production of methane and carbon dioxide are calculated for two levels of adoption: complete adoption, to estimate the upper limit to reductions in these greenhouse gases (GHGs), and optimistic but plausible adoption rates taken from the literature, where these exist. Results are expressed both in GHG per ton of livestock product and in Gt CO2-eq. We estimate that the maximum mitigation potential of these options in the land-based livestock systems in the tropics amounts to approximately 7% of the global agricultural mitigation potential to 2030. Using historical adoption rates from the literature, the plausible mitigation potential of these options could contribute approximately 4% of global agricultural GHG mitigation. This could be worth on the order of $1.3 billion per year at a price of $20 per t CO2-eq. The household-level and sociocultural impacts of some of these options warrant further study, however, because livestock have multiple roles in tropical systems that often go far beyond their productive utility. PMID:20823225

  17. Hazardous air pollutants in industrial area of Mumbai - India.

    PubMed

    Srivastava, Anjali; Som, Dipanjali

    2007-09-01

    Hazardous air pollutants (HAPs) have the potential to be distributed into different components of the environment with varying persistence. In the current study fourteen HAPs were quantified in air using the TO-17 method in an industrial area of Mumbai. The distribution of these HAPs among different environmental compartments was calculated using the multimedia mass-balance model TaPL3, along with long-range transport potential and persistence. Results show that most of the target compounds partition mostly into air. Phenol and trifluralin partition predominantly into soil, while ethyl benzene and xylene partition predominantly into the vegetation compartment. Naphthalene has the highest persistence, followed by ethyl benzene, xylene, and 1,1,1-trichloroethane. Long-range transport potential is greatest for 1,1,1-trichloroethane. Human health risk, in terms of non-carcinogenic hazard and carcinogenic risk due to exposure to HAPs, was estimated for industrial workers and residents in the study area, considering all possible exposure routes, using the output from the TaPL3 model. The overall carcinogenic risk for residents and workers was estimated to be as high as unity, along with a very high hazard potential.
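
    The partitioning idea behind multimedia models of this family can be sketched with a Level-I fugacity calculation: at equilibrium the chemical distributes among compartments in proportion to V*Z. The compartment volumes and Z values below are hypothetical placeholders, not TaPL3 inputs.

      # Level-I fugacity sketch: equilibrium partitioning of a chemical
      # among air, water, soil, and vegetation.
      V = {"air": 1e9, "water": 1e6, "soil": 1e4, "veg": 1e3}    # volumes, m^3
      Z = {"air": 4e-4, "water": 1e-2, "soil": 1.0, "veg": 2.0}  # mol/(m^3*Pa)
      M_total = 1000.0   # total amount emitted, mol

      f = M_total / sum(V[c] * Z[c] for c in V)    # common fugacity, Pa
      masses = {c: f * V[c] * Z[c] for c in V}     # amount in each compartment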

  18. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. Motor maps have been widely used as indicators of motor skill and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations in common marmosets, based on motor thresholds that were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. Computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled motor threshold estimation with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  19. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. The test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurement. A reliable and quantitative motor map is important for elucidating the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper-limb muscles. The motor threshold was calculated with a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm can estimate the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we achieved reliable motor mapping in common marmosets with the ECS system.

  20. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE PAGES

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...

    2017-08-25

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  1. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  2. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  3. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  4. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in

  6. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  7. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test checks for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, with L-moments estimates (LMOM) used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is preferable for convergence to the GEV distribution, especially if longer records are available. Return level estimates, i.e. the return amount that is expected to be exceeded, on average, once every T time periods, start to appear within the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
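
    For readers who want to reproduce this kind of block-maxima analysis, the stationary GEV fit and a return-level estimate take only a few lines with scipy. A minimal sketch on synthetic data (illustrative only; the sample below is generated, not the Malaysian share-return series):

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(0)
      # Synthetic yearly maximum returns (stand-in for the real series)
      yearly_max = genextreme.rvs(c=-0.1, loc=5.0, scale=2.0,
                                  size=40, random_state=rng)

      # Maximum-likelihood fit of the stationary GEV model
      c, loc, scale = genextreme.fit(yearly_max)

      # T-period return level: exceeded on average once every T periods
      T = 50
      return_level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
      print(f"shape={c:.3f} loc={loc:.3f} scale={scale:.3f} "
            f"50-period return level={return_level:.2f}")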

  8. Review of Seismic Hazard Issues Associated with Auburn Dam Project, Sierra Nevada Foothills, California

    USGS Publications Warehouse

    Schwartz, D.P.; Joyner, W.B.; Stein, R.S.; Brown, R.D.; McGarr, A.F.; Hickman, S.H.; Bakun, W.H.

    1996-01-01

    Summary -- The U.S. Geological Survey was requested by the U.S. Department of the Interior to review the design values and the issue of reservoir-induced seismicity for a concrete gravity dam near the site of the previously-proposed Auburn Dam in the western foothills of the Sierra Nevada, central California. The dam is being planned as a flood-control-only dam with the possibility of conversion to a permanent water-storage facility. As a basis for planning studies the U.S. Army Corps of Engineers is using the same design values approved by the Secretary of the Interior in 1979 for the original Auburn Dam. These values were a maximum displacement of 9 inches on a fault intersecting the dam foundation, a maximum earthquake at the site of magnitude 6.5, a peak horizontal acceleration of 0.64 g, and a peak vertical acceleration of 0.39 g. In light of geological and seismological investigations conducted in the western Sierran foothills since 1979 and advances in the understanding of how earthquakes are caused and how faults behave, we have developed the following conclusions and recommendations: Maximum Displacement. Neither the pre-1979 nor the recent observations of faults in the Sierran foothills precisely define the maximum displacement per event on a fault intersecting the dam foundation. Available field data and our current understanding of surface faulting indicate a range of values for the maximum displacement. This may require the consideration of a design value larger than 9 inches. We recommend reevaluation of the design displacement using current seismic hazard methods that incorporate uncertainty into the estimate of this design value. Maximum Earthquake Magnitude. There are no data to indicate that a significant change is necessary in the use of an M 6.5 maximum earthquake to estimate design ground motions at the dam site. However, there is a basis for estimating a range of maximum magnitudes using recent field information and new statistical fault relations. We recommend reevaluating the maximum earthquake magnitude using current seismic hazard methodology. Design Ground Motions. A large number of strong-motion records have been acquired and significant advances in understanding of ground motion have been achieved since the original evaluations. The design value for peak horizontal acceleration (0.64 g) is larger than the median of one recent study and smaller than the median value of another. The value for peak vertical acceleration (0.39 g) is somewhat smaller than median values of two recent studies. We recommend a reevaluation of the design ground motions that takes into account new ground motion data with particular attention to rock sites at small source distances. Reservoir-Induced Seismicity. The potential for reservoir-induced seismicity must be considered for the Auburn Dam project. A reservoir-induced earthquake is not expected to be larger than the maximum naturally occurring earthquake. However, the probability of an earthquake may be enhanced by reservoir impoundment. A flood-control-only project may involve a lower probability of significant induced seismicity than a multipurpose water-storage dam. There is a need to better understand and quantify the likelihood of this hazard. A methodology should be developed to quantify the potential for reservoir-induced seismicity using seismicity data from the Sierran foothills, new worldwide observations of induced and triggered seismicity, and current understanding of the earthquake process.
Reevaluation of Design Parameters. The reevaluation of the maximum displacement, maximum magnitude earthquake, and design ground motions can be made using available field observations from the Sierran foothills, updated statistical relations for faulting and ground motions, and current computational seismic hazard methodologies that incorporate uncertainty into the analysis. The reevaluation does not require significant new geological field studies.

  9. Evaluation of factors affecting ice forces at selected bridges in South Dakota

    USGS Publications Warehouse

    Niehus, Colin A.

    2002-01-01

    During 1998-2002, the U.S. Geological Survey, in cooperation with the South Dakota Department of Transportation (SDDOT), conducted a study to evaluate factors affecting ice forces at selected bridges in South Dakota. The focus of this ice-force evaluation was on maximum ice thickness and ice-crushing strength, which are the most important variables in the SDDOT bridge-design equations for ice forces in South Dakota. Six sites, the James River at Huron, the James River near Scotland, the White River near Oacoma/Presho, the Grand River at Little Eagle, the Oahe Reservoir near Mobridge, and the Lake Francis Case at the Platte-Winner Bridge, were selected for collection of ice-thickness and ice-crushing-strength data. Ice thickness was measured at the six sites from February 1999 until April 2001. This period is representative of the climate extremes of record in South Dakota because it included both one of the warmest and one of the coldest winters on record. The 2000 and 2001 winters were the 8th warmest and 11th coldest winters, respectively, on record at Sioux Falls, South Dakota, which was used to represent the climate at all bridges in South Dakota. Ice thickness measured at the James River sites at Huron and Scotland during 1999-2001 ranged from 0.7 to 2.3 feet and 0 to 1.7 feet, respectively, and ice thickness measured at the White River near Oacoma/Presho site during 2000-01 ranged from 0.1 to 1.5 feet. At the Grand River at Little Eagle site, ice thickness was measured at 1.2 feet in 1999, ranged from 0.5 to 1.2 feet in 2000, and ranged from 0.2 to 1.4 feet in 2001. Ice thickness measured at the Oahe Reservoir near Mobridge site ranged from 1.7 to 1.8 feet in 1999, 0.9 to 1.2 feet in 2000, and 0 to 2.2 feet in 2001. At the Lake Francis Case at the Platte-Winner Bridge site, ice thickness ranged from 1.2 to 1.8 feet in 2001. Historical ice-thickness data measured by the U.S. Geological Survey (USGS) at eight selected streamflow-gaging stations in South Dakota were compiled for 1970-97. The gaging stations included the Grand River at Little Eagle, the White River near Oacoma, the James River near Scotland, the James River near Yankton, the Vermillion River near Wakonda, the Vermillion River near Vermillion, the Big Sioux River near Brookings, and the Big Sioux River near Dell Rapids. Three ice-thickness-estimation equations that potentially could be used for bridge design in South Dakota were selected and included the Accumulative Freezing Degree Day (AFDD), Incremental Accumulative Freezing Degree Day (IAFDD), and Simplified Energy Budget (SEB) equations. These three equations were evaluated by comparing study-collected and historical ice-thickness measurements to equation-estimated ice thicknesses. Input data required by the equations either were collected or compiled for the study or were obtained from the National Weather Service (NWS). An analysis of the data indicated that the AFDD equation best estimated ice thickness in South Dakota using available data sources with an average variation about the measured value of about 0.4 foot. Maximum potential ice thickness was estimated using the AFDD equation at 19 NWS stations located throughout South Dakota. The 1979 winter (the coldest winter on record at Sioux Falls) was the winter used to estimate the maximum potential ice thickness. The estimated maximum potential ice thicknesses generally are largest in northeastern South Dakota at about 3 feet and are smallest in southwestern and south-central South Dakota at about 2 feet. 
From 1999 to 2001, ice-crushing strength was measured at the same six sites where ice thickness was measured. Ice-crushing-strength measurements were done both in the middle of the winter and near spring breakup. The maximum ice-crushing strengths were measured in the mid- to late winter before the spring thaw. Measured ice-crushing strengths were much smaller near spring breakup. Ice-crushing strength measured at the six sites

  10. Real-time stop sign detection and distance estimation using a single camera

    NASA Astrophysics Data System (ADS)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the rapid development of driver assistance systems has made driving considerably easier than before. To increase safety onboard, a method is proposed to detect stop signs and estimate their distance using a single camera. For stop sign detection, an LBP-cascade classifier is applied to identify the sign in the image, while distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a maximum detection accuracy of 97.6% at 10 m, a minimum of 95.0% at 20 m, and at most 5% error in distance estimation. These results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
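
    The distance step in such systems typically reduces to similar triangles under the pinhole model: distance = focal length (in pixels) x real object size / apparent size in pixels. A minimal sketch under that assumption (the calibration constants below are hypothetical, not the authors' values):

      def pinhole_distance(focal_px, real_size_m, pixel_size):
          """Distance from camera to object via similar triangles.

          focal_px    : focal length expressed in pixels (from calibration)
          real_size_m : known physical size of the object (m)
          pixel_size  : apparent size of the object in the image (pixels)
          """
          return focal_px * real_size_m / pixel_size

      # A US stop sign face is about 0.75 m wide; assume a 1400 px focal
      # length. If the detected bounding box is 105 px wide:
      print(pinhole_distance(1400, 0.75, 105))  # -> 10.0 (metres)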

  11. Reconstruction of Absorbed Doses to Fibroglandular Tissue of the Breast of Women undergoing Mammography (1960 to the Present)

    PubMed Central

    Thierry-Chef, Isabelle; Simon, Steven L.; Weinstock, Robert M.; Kwon, Deukwoo; Linet, Martha S.

    2013-01-01

    The assessment of potential benefits versus harms from mammographic examinations as described in the controversial breast cancer screening recommendations of the U.S. Preventive Task Force included limited consideration of absorbed dose to the fibroglandular tissue of the breast (glandular tissue dose), the tissue at risk for breast cancer. Epidemiological studies on cancer risks associated with diagnostic radiological examinations often lack accurate information on glandular tissue dose, and there is a clear need for better estimates of these doses. Our objective was to develop a quantitative summary of glandular tissue doses from mammography by considering sources of variation over time in key parameters including imaging protocols, x-ray target materials, voltage, filtration, incident air kerma, compressed breast thickness, and breast composition. We estimated the minimum, maximum, and mean values for glandular tissue dose for populations of exposed women within 5-year periods from 1960 to the present, with the minimum to maximum range likely including 90% to 95% of the entirety of the dose range from mammography in North America and Europe. Glandular tissue dose from a single view in mammography is presently about 2 mGy, about one-sixth the dose in the 1960s. The ratio of our estimates of maximum to minimum glandular tissue doses for average-size breasts was about 100 in the 1960s compared to a ratio of about 5 in recent years. Findings from our analysis provide quantitative information on glandular tissue doses from mammographic examinations which can be used in epidemiologic studies of breast cancer. PMID:21988547

  12. Development of magnitude scaling relationship for earthquake early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D.

    2011-12-01

    Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, historical earthquakes reveal that many damaging events have occurred in the Korean Peninsula. To mitigate potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of seismograms recorded on broadband seismometers shows less scatter than the peak velocity, while for accelerograms the scatter of the peak displacement and the peak velocity is similar. The peak displacement of seismograms differs from that of accelerograms, which means that separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave is estimated after applying two low-pass filters, at 3 Hz and 10 Hz; the 10 Hz low-pass filter yields better estimates than the 3 Hz filter. Most of the peak amplitudes and maximum predominant periods are estimated within 1 s after triggering.
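
    For orientation, magnitude scaling relationships of this kind are commonly posed as a log-linear regression of the initial P-wave peak displacement Pd and the epicentral distance R against catalog magnitude. The generic form below is widely used in EEW studies; the coefficients a, b and c are placeholders to be fitted, not the values estimated from the Korean data:

      M = a \log_{10} P_d + b \log_{10} R + c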

  13. Environmental consequences of postulated plutonium releases from Westinghouse PFDL, Cheswick, Pennsylvania, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, R.B.; Watson, E.C.

    1979-06-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated accidents due to earthquakes, tornadoes, high straight-line winds, and floods. Maximum plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays, except for Earthquake No. 4 and the 260-mph tornado. The most likely maximum residual plutonium contamination estimated to be deposited offsite following Earthquake No. 4 and the 200-mph and 260-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 µCi/m². The deposition values following the other severe natural phenomena are below the EPA proposed guideline.

  14. Sustainable biochar to mitigate global climate change

    PubMed Central

    Woolf, Dominic; Amonette, James E.; Street-Perrott, F. Alayne; Lehmann, Johannes; Joseph, Stephen

    2010-01-01

    Production of biochar (the carbon (C)-rich solid formed by pyrolysis of biomass) and its storage in soils have been suggested as a means of abating climate change by sequestering carbon, while simultaneously providing energy and increasing crop yields. Substantial uncertainties exist, however, regarding the impact, capacity and sustainability of biochar at the global level. In this paper we estimate the maximum sustainable technical potential of biochar to mitigate climate change. Annual net emissions of carbon dioxide (CO2), methane and nitrous oxide could be reduced by a maximum of 1.8 Pg CO2-C equivalent (CO2-Ce) per year (12% of current anthropogenic CO2-Ce emissions; 1 Pg=1 Gt), and total net emissions over the course of a century by 130 Pg CO2-Ce, without endangering food security, habitat or soil conservation. Biochar has a larger climate-change mitigation potential than combustion of the same sustainably procured biomass for bioenergy, except when fertile soils are amended while coal is the fuel being offset. PMID:20975722

  15. The maximum evaporative potential of constant wear immersion suits influences the risk of excessive heat strain for helicopter aircrew

    PubMed Central

    2018-01-01

    The heat exchange properties of aircrew clothing including a Constant Wear Immersion Suit (CWIS), and the environmental conditions in which heat strain would impair operational performance, were investigated. The maximum evaporative potential (im/clo) of six clothing ensembles (three with a flight suit (FLY) and three with a CWIS) with varying undergarment layers was measured with a heated sweating manikin. Biophysical modelling estimated the environmental conditions in which body core temperature would rise above 38.0°C during routine flight. The im/clo was reduced with additional undergarment layers, and was more restricted in CWIS than in FLY ensembles. A significant linear relationship (r2 = 0.98, P<0.001) was observed between im/clo and the highest wet-bulb globe temperature at which the flight scenario could be completed without body core temperature exceeding 38.0°C. These findings provide a valuable tool for clothing manufacturers and mission planners in the development and selection of CWISs for aircrew. PMID:29723267

  16. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    SUMMARY: Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing-failure-rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators than the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
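
    The essence of the length-biased setting can be stated in one line: prevalent-cohort sampling oversamples long durations, so if the underlying failure times have density f(t) with mean \mu, the sampled durations follow the size-biased density (a standard result, written here in generic notation rather than the paper's own):

      g(t) = \frac{t\, f(t)}{\mu}, \qquad \mu = \int_0^\infty u\, f(u)\, du

    The full likelihoods maximized by the proposed EM algorithms are built from g (further adjusted for right censoring) rather than from f.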

  17. Mapping the economic benefits to livestock keepers from intervening against bovine trypanosomosis in Eastern Africa.

    PubMed

    Shaw, A P M; Cecchi, G; Wint, G R W; Mattioli, R C; Robinson, T P

    2014-02-01

    Endemic animal diseases such as tsetse-transmitted trypanosomosis are a constant drain on the financial resources of African livestock keepers and on the productivity of their livestock. Knowing where the potential benefits of removing animal trypanosomosis are distributed geographically would provide crucial evidence for prioritising and targeting cost-effective interventions as well as a powerful tool for advocacy. To this end, a study was conducted on six tsetse-infested countries in Eastern Africa: Ethiopia, Kenya, Somalia, South Sudan, Sudan and Uganda. First, a map of cattle production systems was generated, with particular attention to the presence of draught and dairy animals. Second, herd models for each production system were developed for two scenarios: with or without trypanosomosis. The herd models were based on publications and reports on cattle productivity (fertility, mortality, yields, sales), from which the income from, and growth of cattle populations were estimated over a twenty-year period. Third, a step-wise spatial expansion model was used to estimate how cattle populations might migrate to new areas when maximum stocking rates are exceeded. Last, differences in income between the two scenarios were mapped, thus providing a measure of the maximum benefits that could be obtained from intervening against tsetse and trypanosomosis. For this information to be readily mappable, benefits were calculated per bovine and converted to US$ per square kilometre. Results indicate that the potential benefits from dealing with trypanosomosis in Eastern Africa are both very high and geographically highly variable. The estimated total maximum benefit to livestock keepers for the whole of the study area amounts to nearly US$ 2.5 billion, discounted at 10% over twenty years--an average of approximately US$ 3300 per square kilometre of tsetse-infested area--but with great regional variation from less than US$ 500 per square kilometre to well over US$ 10,000. The greatest potential benefits accrue to Ethiopia, because of its very high livestock densities and the importance of animal traction, but also to parts of Kenya and Uganda. In general, the highest benefit levels occur on the fringes of the tsetse infestations. The implications of the models' assumptions and generalisations are discussed. Copyright © 2013 Food and Agriculture Organization of the United Nations. Published by Elsevier B.V. All rights reserved.

  18. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Summary: Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potential based on the estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter-classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into propagating the uncertainty of parameters due to limited observation data. To examine the model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N exceeding 0.5 mg/L, which indicates anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.

  19. Using remotely sensed imagery to estimate potential annual pollutant loads in river basins.

    PubMed

    He, Bin; Oki, Kazuo; Wang, Yi; Oki, Taikan

    2009-01-01

    Land cover changes around river basins have caused serious environmental degradation in surface waters worldwide, where direct monitoring and numerical modeling are inherently difficult. Prediction of pollutant loads is therefore crucial to river environmental management under the impact of climate change and intensified human activities. This research analyzed the relationship between land cover types estimated from NOAA Advanced Very High Resolution Radiometer (AVHRR) imagery and the potential annual pollutant loads of river basins in Japan. An empirical approach, which estimates annual pollutant loads directly from satellite imagery and hydrological data, was then investigated. Six water quality indicators were examined: total nitrogen (TN), total phosphorus (TP), suspended sediment (SS), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), and Dissolved Oxygen (DO). The pollutant loads of TN, TP, SS, BOD, COD, and DO were then estimated for 30 river basins in Japan. Results show that the proposed simulation technique can be used to predict the pollutant loads of river basins in Japan. These results may be useful in establishing total maximum annual pollutant loads and developing best management strategies for surface water pollution at the river basin scale.

  20. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...

  1. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  2. Determination of the maximum-depth to potential field sources by a maximum structural index method

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax is easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained with the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of the method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimated maximum depth agrees with the seismic information.
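
    For orientation, maximum-depth rules of the Bott-Smith family take the generic form below, in which the constant is fixed by the source geometry; in the proposed method that role is played by the highest allowable structural index Nmax. This is a generic statement of the bound, not the paper's exact expressions:

      z_{\max} \;\le\; K(N_{\max}) \, \frac{f_{\max}}{\left| \partial f / \partial x \right|_{\max}}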

  3. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and to deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value, preventing the beam from reaching a limit state of failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam is presented, based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, the support reactions, deflections at supports, and magnitudes of distributed loads on the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and magnitudes of distributed loads. PMID:25831087

  4. Seasonal variability of Internal tide energetics in the western Bay of Bengal

    NASA Astrophysics Data System (ADS)

    Mohanty, S.; Rao, A. D.

    2017-12-01

    Internal waves (IWs) are generated by the flow of the barotropic tide over rapidly varying, steep topographic features such as the continental shelf slope and seamounts. These waves are an important phenomenon in the ocean because of their influence on the density structure and on energy transfer into the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the western Bay of Bengal is examined using the three-dimensional MITgcm model. The numerical simulations are performed for periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high-temporal-resolution observed data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. The spectral peak is associated with the semi-diurnal frequency at all depths in both the observations and the model simulations for November-December and March-April; in August, however, the peak lies near the inertial frequency at all available depths. EOF analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The phase speed, group speed and wavelength are largest in the post-monsoon season compared to the other two seasons. To understand the generation and propagation of internal tides over this region, the barotropic-to-baroclinic M2 tidal energy conversion and energy flux are examined. The conversion occurs intensively along the shelf-slope regions, and the baroclinic energy propagates towards the coast. The simulated energy dissipation rate indicates that its maximum occurs at the generation sites, so local mixing due to the internal tide is greatest there. The spatial distribution of available potential energy is maximum in November (20 kg/m2) in the northern BoB and minimum in August (14 kg/m2). Detailed energy budget calculations are made for all seasons and the results are analysed.

  5. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  6. Mathematical model to interpret localized reflectance spectra measured in the presence of a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime J.; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2016-06-01

    Quantification of multiple fluorescence markers during neurosurgery has the potential to provide complementary contrast mechanisms between normal and malignant tissues, and one potential combination involves fluorescein sodium (FS) and aminolevulinic acid-induced protoporphyrin IX (PpIX). We focus on the interpretation of reflectance spectra containing contributions from elastically scattered (reflected) photons as well as fluorescence emissions from a strong fluorophore (i.e., FS). A model-based approach to extract μa and μs‧ in the presence of FS emission is validated in optical phantoms constructed with Intralipid (1% to 2% lipid) and whole blood (1% to 3% volume fraction), over a wide range of FS concentrations (0 to 1000 μg/ml). The results show that modeling reflectance as a combination of elastically scattered light and attenuation-corrected FS-based emission yielded more accurate tissue parameter estimates when compared with a nonmodified reflectance model, with reduced maximum errors for blood volume (22% versus 90%), microvascular saturation (21% versus 100%), and μs‧ (13% versus 207%). Additionally, quantitative PpIX fluorescence sampled in the same phantom as FS showed significant differences depending on the reflectance model used to estimate optical properties (i.e., maximum error 29% versus 86%). These data represent a first step toward using quantitative optical spectroscopy to guide surgeries through simultaneous assessment of FS and PpIX.

  7. Methods for fitting a parametric probability distribution to most probable number data.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2012-07-02

    Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method. Published by Elsevier B.V.
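
    The MPN likelihood underlying both proposed fitting methods is compact: a tube inoculated with volume v is positive with probability 1 - exp(-\lambda v) for concentration \lambda, so \lambda can be estimated by maximizing a product of binomial terms across dilutions. A minimal sketch of that classical single-sample estimator (our own illustration, not the paper's Bayesian latent-variable code):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def mpn_estimate(volumes_ml, n_tubes, n_positive):
          """Classical MPN maximum-likelihood concentration (per ml).

          volumes_ml : sample volume inoculated at each dilution
          n_tubes    : tubes inoculated at each dilution
          n_positive : tubes that tested positive at each dilution
          """
          v = np.asarray(volumes_ml, float)
          n = np.asarray(n_tubes, int)
          k = np.asarray(n_positive, int)

          def neg_log_lik(log_lam):
              lam = np.exp(log_lam)
              p = 1.0 - np.exp(-lam * v)          # P(tube is positive)
              p = np.clip(p, 1e-12, 1 - 1e-12)
              return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

          res = minimize_scalar(neg_log_lik, bounds=(-10, 10),
                                method="bounded")
          return np.exp(res.x)

      # Three-dilution design: 10, 1 and 0.1 ml, five tubes each
      print(round(mpn_estimate([10, 1, 0.1], [5, 5, 5], [5, 3, 1]), 3))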

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
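
    The computational core described here is the standard Newton-Raphson update theta <- theta - H^{-1} g applied to the score equations. A generic sketch of that iteration (illustrative only; the NDMMF's actual score vector and Hessian are derived in the report):

      import numpy as np

      def newton_raphson(grad, hess, theta0, tol=1e-8, max_iter=50):
          """Generic Newton-Raphson solver for score equations grad(theta)=0."""
          theta = np.asarray(theta0, float)
          for _ in range(max_iter):
              step = np.linalg.solve(hess(theta), grad(theta))
              theta = theta - step
              if np.max(np.abs(step)) < tol:
                  break
          return theta

      # Toy example: maximize the log-likelihood of N(mu, 1) data
      data = np.array([1.2, 0.8, 1.5, 0.9])
      grad = lambda t: np.array([np.sum(data - t[0])])
      hess = lambda t: np.array([[-len(data)]])
      print(newton_raphson(grad, hess, [0.0]))  # -> the sample mean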

  9. Relevance of the correlation between precipitation and the 0 °C isothermal altitude for extreme flood estimation

    NASA Astrophysics Data System (ADS)

    Zeimetz, Fraenz; Schaefli, Bettina; Artigue, Guillaume; García Hernández, Javier; Schleiss, Anton J.

    2017-08-01

    Extreme floods are commonly estimated with the help of design storms and hydrological models. In this paper, we propose a new method to take into account the relationship between precipitation intensity (P) and air temperature (T), in order to account for potential snow accumulation and melt processes during the elaboration of design storms. The proposed method is based on a detailed analysis of this P-T relationship in the Swiss Alps. For this region, no upper precipitation intensity limit is detectable for increasing temperature. However, a relationship between the highest temperature measured before a precipitation event and the duration of the subsequent event could be identified. An explanation for this relationship is proposed based on the temperature gradient measured before the precipitation events. The relevance of these results is discussed for an example of Probable Maximum Precipitation-Probable Maximum Flood (PMP-PMF) estimation for the high-mountain Mattmark dam catchment in the Swiss Alps. The proposed method for associating a critical air temperature with a PMP is easily transposable to similar alpine settings where meteorological soundings as well as ground temperature and precipitation measurements are available. In the future, the analyses presented here might be further refined by distinguishing between precipitation event types (frontal versus orographic).

  10. The Relationship Between Relative Fundamental Frequency and a Kinematic Estimate of Laryngeal Stiffness in Healthy Adults

    PubMed Central

    Heller Murray, Elizabeth S.; Lien, Yu-An S.; Stepp, Cara E.

    2016-01-01

    Purpose This study examined the relationship between the acoustic measure relative fundamental frequency (RFF) and a kinematic estimate of laryngeal stiffness. Method Twelve healthy adults (mean age = 22.7 years, SD = 4.4; 10 women, 2 men) produced repetitions of /ifi/ while varying their vocal effort during simultaneous acoustic and video nasendoscopic recordings. RFF was determined from the last 10 voicing cycles before the voiceless obstruent (RFF offset) and the first 10 cycles of revoicing (RFF onset). A kinematic stiffness ratio was calculated for the vocal fold adductory gesture during revoicing by normalizing the maximum angular velocity by the maximum glottic angle during the voiceless obstruent. Results A linear mixed effect model indicated that RFF offset and onset were significant predictors of the kinematic stiffness ratios. The model accounted for 52% of the variance in the kinematic data. Individual relationships between RFF and kinematic stiffness ratios varied across participants, with at least moderate negative correlations in 83% of participants for RFF offset but only 40% of participants for RFF onset. Conclusions RFF significantly predicted kinematic estimates of laryngeal stiffness in healthy speakers and has the potential to be a useful clinical indicator of laryngeal tension. Further research is needed in individuals with voice disorders. PMID:27936279

  11. 40 CFR 75.72 - Determination of NOX mass emissions for common stack and multiple stack configurations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the affected units as the difference between NOX mass emissions measured in the common stack and NOX... emissions using the maximum potential NOX emission rate, the maximum potential flow rate, and either the maximum potential CO2 concentration or the minimum potential O2 concentration (as applicable). The maximum...

  12. Functional anatomy and muscle moment arms of the pelvic limb of an elite sprinting athlete: the racing greyhound (Canis familiaris)

    PubMed Central

    Williams, S B; Wilson, A M; Rhodes, L; Andrews, J; Payne, R C

    2008-01-01

    We provide quantitative anatomical data on the muscle–tendon architecture and geometry of the pelvic limb of an elite sprint athlete, the racing greyhound. Specifically, muscle masses, muscle lengths, fascicle lengths, pennation angles and muscle moment arms were measured. Maximum isometric force and power of muscles, the maximum muscle torque at joints and tendon stress and strain were estimated. We compare data with that published for a generalized breed of canid, and other cursorial mammals such as the horse and hare. The pelvic limb of the racing greyhound had a relatively large volume of hip extensor muscle, which is likely to be required for power production. Per unit body mass, some pelvic limb muscles were relatively larger than those in less specialized canines, and many hip extensor muscles had longer fascicle lengths. It was estimated that substantial extensor moments could be created about the tarsus and hip of the greyhound allowing high power output and potential for rapid acceleration. The racing greyhound hence possesses substantial specializations for enhanced sprint performance. PMID:18657259

  13. Functional anatomy and muscle moment arms of the pelvic limb of an elite sprinting athlete: the racing greyhound (Canis familiaris).

    PubMed

    Williams, S B; Wilson, A M; Rhodes, L; Andrews, J; Payne, R C

    2008-10-01

    We provide quantitative anatomical data on the muscle-tendon architecture and geometry of the pelvic limb of an elite sprint athlete, the racing greyhound. Specifically, muscle masses, muscle lengths, fascicle lengths, pennation angles and muscle moment arms were measured. Maximum isometric force and power of muscles, the maximum muscle torque at joints and tendon stress and strain were estimated. We compare data with that published for a generalized breed of canid, and other cursorial mammals such as the horse and hare. The pelvic limb of the racing greyhound had a relatively large volume of hip extensor muscle, which is likely to be required for power production. Per unit body mass, some pelvic limb muscles were relatively larger than those in less specialized canines, and many hip extensor muscles had longer fascicle lengths. It was estimated that substantial extensor moments could be created about the tarsus and hip of the greyhound allowing high power output and potential for rapid acceleration. The racing greyhound hence possesses substantial specializations for enhanced sprint performance.

  14. Initial dynamic load estimates during configuration design

    NASA Technical Reports Server (NTRS)

    Schiff, Daniel

    1987-01-01

    This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.

  15. Best estimate of luminal cross-sectional area of coronary arteries from angiograms

    NASA Technical Reports Server (NTRS)

    Lee, P. L.; Selzer, R. H.

    1988-01-01

    We have reexamined the problem of estimating the luminal area of an elliptically-shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed form expressions have been derived for calculating the area and the uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit method to extract all available information from the data.
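
    The closed-form expressions themselves are not reproduced in this record. As a minimal sketch, assuming the idealized special case in which the two orthogonal projected diameters happen to coincide with the major and minor axes of the ellipse, the luminal area reduces to the standard ellipse formula:

        import math

        def ellipse_area_from_orthogonal_views(d1_mm, d2_mm):
            # Area of an ellipse whose axes equal the two measured diameters
            # (idealized case; the paper's general result also handles the
            # unknown orientation and propagates the uncertainty).
            return math.pi * d1_mm * d2_mm / 4.0

        print(ellipse_area_from_orthogonal_views(3.0, 2.4))  # ~5.65 mm^2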

  16. Browns Ferry Nuclear Plant radiological impact assessment report, January-June 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B.E.

    1988-01-01

    Potential doses to maximum individuals and the population around Browns Ferry are calculated for each quarter. Measured plant releases for the reporting period are used to estimate these doses. Dispersion of radioactive effluents in the environment is estimated in accordance with the guidance provided and measurements made during the period. Using dose calculation methodologies described in detail in the Browns Ferry Offsite Dose Calculation Manual, the doses are calculated and used to determine compliance with the dose limits contained in Browns Ferry's Operating License. In this report, the doses resulting from releases are described and compared to the quarterly and annual limits established for Browns Ferry.

  17. Using river distance and existing hydrography data can improve the geostatistical estimation of fish tissue mercury at unsampled locations.

    PubMed

    Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L

    2011-09-15

    Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin-wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.

  18. Estimating the Rate of Occurrence of Renal Stones in Astronauts

    NASA Technical Reports Server (NTRS)

    Myers, J.; Goodenow, D.; Gokoglu, S.; Kassemi, M.

    2016-01-01

    Changes in urine chemistry, during and post flight, potentially increase the risk of renal stones in astronauts. Although much is known about the effects of space flight on urine chemistry, no inflight incidence of renal stones in US astronauts exists and the question "How much does this risk change with space flight?" remains difficult to answer accurately. In this discussion, we tackle this question utilizing a combination of deterministic and probabilistic modeling that implements the physics behind free stone growth and agglomeration, speciation of urine chemistry and published observations of population renal stone incidences to estimate changes in the rate of renal stone presentation. The modeling process utilizes a Population Balance Equation based model developed in the companion IWS abstract by Kassemi et al. (2016) to evaluate the maximum growth and agglomeration potential from a specified set of urine chemistry values. Changes in renal stone occurrence rates are obtained from this model in a probabilistic simulation that interrogates the range of possible urine chemistries using Monte Carlo techniques. Subsequently, each randomly sampled urine chemistry undergoes speciation analysis using the well-established Joint Expert Speciation System (JESS) code to calculate critical values, such as ionic strength and relative supersaturation. The Kassemi model utilizes this information to predict the mean and maximum stone size. We close the assessment loop by using a transfer function that estimates the rate of stone formation by combining the relative supersaturation and both the mean and maximum free stone growth sizes. The transfer function is established by a simulation analysis which combines population stone formation rates and Poisson regression. Training this transfer function requires using the output of the aforementioned assessment steps with inputs from known non-stone-former and known stone-former urine chemistries. Established in a Monte Carlo system, the entire renal stone analysis model produces a probability distribution of the stone formation rate and an expected uncertainty in the estimate. The utility of this analysis will be demonstrated by showing the change in renal stone occurrence predicted by this method using urine chemistry distributions published in Whitson et al. 2009. A comparison of the model predictions to previous assessments of renal stone risk will be used to illustrate initial validation of the model.

  19. Comparing performance on the MNREAD iPad application with the MNREAD acuity chart.

    PubMed

    Calabrèse, Aurélie; To, Long; He, Yingchen; Berkholtz, Elizabeth; Rafian, Paymon; Legge, Gordon E

    2018-01-01

    Our purpose was to compare reading performance measured with the MNREAD Acuity Chart and an iPad application (app) version of the same test for both normally sighted and low-vision participants. Our methods included 165 participants with normal vision and 43 participants with low vision tested on the standard printed MNREAD and on the iPad app version of the test. Maximum Reading Speed, Critical Print Size, Reading Acuity, and Reading Accessibility Index were compared using linear mixed-effects models to identify any potential differences in test performance between the printed chart and the iPad app. Our results showed the following: For normal vision, chart and iPad yield similar estimates of Critical Print Size and Reading Acuity. The iPad provides significantly slower estimates of Maximum Reading Speed than the chart, with a greater difference for faster readers. The difference was on average 3% at 100 words per minute (wpm), 6% at 150 wpm, 9% at 200 wpm, and 12% at 250 wpm. For low vision, Maximum Reading Speed, Reading Accessibility Index, and Critical Print Size are equivalent on the iPad and chart. Only the Reading Acuity is significantly smaller (i.e., better) when measured on the digital version of the test, but by only 0.03 logMAR (p = 0.013). Our conclusions were that, overall, MNREAD parameters measured with the printed chart and the iPad app are very similar. The difference found in Maximum Reading Speed for the normally sighted participants can be explained by differences in the method for timing the reading trials.

  20. Comparing performance on the MNREAD iPad application with the MNREAD acuity chart

    PubMed Central

    Calabrèse, Aurélie; To, Long; He, Yingchen; Berkholtz, Elizabeth; Rafian, Paymon; Legge, Gordon E.

    2018-01-01

    Our purpose was to compare reading performance measured with the MNREAD Acuity Chart and an iPad application (app) version of the same test for both normally sighted and low-vision participants. Our methods included 165 participants with normal vision and 43 participants with low vision tested on the standard printed MNREAD and on the iPad app version of the test. Maximum Reading Speed, Critical Print Size, Reading Acuity, and Reading Accessibility Index were compared using linear mixed-effects models to identify any potential differences in test performance between the printed chart and the iPad app. Our results showed the following: For normal vision, chart and iPad yield similar estimates of Critical Print Size and Reading Acuity. The iPad provides significantly slower estimates of Maximum Reading Speed than the chart, with a greater difference for faster readers. The difference was on average 3% at 100 words per minute (wpm), 6% at 150 wpm, 9% at 200 wpm, and 12% at 250 wpm. For low vision, Maximum Reading Speed, Reading Accessibility Index, and Critical Print Size are equivalent on the iPad and chart. Only the Reading Acuity is significantly smaller (i.e., better) when measured on the digital version of the test, but by only 0.03 logMAR (p = 0.013). Our conclusions were that, overall, MNREAD parameters measured with the printed chart and the iPad app are very similar. The difference found in Maximum Reading Speed for the normally sighted participants can be explained by differences in the method for timing the reading trials. PMID:29351351

  1. Using a genetic mixture model to study phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length‐adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large‐mesh gill‐net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size‐selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock‐specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length‐adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted to study any observable trait or condition that may vary among populations.

  2. Estimating Last Glacial Maximum Ice Thickness Using Porosity and Depth Relationships: Examples from AND-1B and AND-2A Cores, McMurdo Sound, Antarctica

    NASA Astrophysics Data System (ADS)

    Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.

    2009-12-01

    We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use a simple relationship of Xover * (ρsed/ρice) = Xice; where Xover is the overburden thickness, ρsed is sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. These lithologies both produce best estimates of approximately 1,000 meters of ice during Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values result in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during Last Glacial Maximum, due to the overprinting effect of Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
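
    The conversion step stated in the abstract is simple enough to write out as code. A minimal sketch, with the overburden figure and sediment density chosen purely for illustration:

        def ice_thickness_from_overburden(x_over, rho_sed, rho_ice=0.85):
            # X_ice = X_over * (rho_sed / rho_ice), per the relationship in
            # the abstract; densities in g/cm^3, thicknesses in metres.
            return x_over * (rho_sed / rho_ice)

        # Hypothetical: 500 m of inferred overburden at a bulk sediment
        # density of 1.7 g/cm^3 -> ~1,000 m equivalent ice thickness.
        print(ice_thickness_from_overburden(500.0, 1.7))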

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. L. Abbott; K. N. Keck; R. E. Schindler

    This screening level risk assessment evaluates potential adverse human health and ecological impacts resulting from continued operations of the calciner at the New Waste Calcining Facility (NWCF) at the Idaho Nuclear Technology and Engineering Center (INTEC), Idaho National Engineering and Environmental Laboratory (INEEL). The assessment was conducted in accordance with the Environmental Protection Agency (EPA) report, Guidance for Performing Screening Level Risk Analyses at Combustion Facilities Burning Hazardous Waste. This screening guidance is intended to give a conservative estimate of the potential risks to determine whether a more refined assessment is warranted. The NWCF uses a fluidized-bed combustor to solidify (calcine) liquid radioactive mixed waste from the INTEC Tank Farm facility. The calciner off-gas contains volatilized metal species, trace organic compounds, and low levels of radionuclides. Conservative stack emission rates were calculated based on maximum waste solution feed samples, conservative assumptions for off-gas partitioning of metals and organics, stack gas sampling for mercury, and conservative measurements of contaminant removal (decontamination factors) in the off-gas treatment system. Stack emissions were modeled using the ISC3 air dispersion model to predict maximum particulate and vapor air concentrations and ground deposition rates. Results demonstrate that NWCF emissions calculated from best-available process knowledge would result in maximum onsite and offsite health and ecological impacts that are less than EPA-established criteria for operation of a combustion facility.

  4. Environmental consequences of postulated plutonium releases from Atomics International's Nuclear Materials Development Facility (NMDF), Santa Susana, California, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, J.D.; Watson, E.C.

    1982-02-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated plutonium releases caused by severe natural phenomena at Atomics International's Nuclear Materials Development Facility (NMDF) at the Santa Susana site, California. The severe natural phenomena considered are earthquakes, tornadoes, and high straight-line winds. Plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely 50-year committed dose equivalents are given for the maximum-exposed individual and the population within a 50-mile radius of the plant. The maximum plutonium deposition values likely to occur offsite are also given. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays. The most likely maximum residual plutonium contamination estimated to be deposited offsite following the earthquake and the 150-mph and 170-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 µCi/m². The deposition values following the 110-mph and the 130-mph tornadoes are below the EPA proposed guideline.

  5. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, Mina; Neyestani, Behnoosh

    2009-04-01

    Deterministic seismic hazard assessment has been performed for Center-East Iran, covering Kerman and adjacent regions within a radius of 100 km. A catalogue of earthquakes in the region, including both historical and instrumental events, was compiled. A total of 25 potential seismic source zones in the region were delineated as area sources for seismic hazard assessment based on geological, seismological and geophysical information. The minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38g, associated with movement on a blind fault whose maximum magnitude is Ms = 5.5.

  6. Design study of steel V-Belt CVT for electric vehicles

    NASA Technical Reports Server (NTRS)

    Swain, J. C.; Klausing, T. A.; Wilcox, J. P.

    1980-01-01

    A continuously variable transmission (CVT) design layout was completed. The intended application was for coupling the flywheel to the driveline of a flywheel battery hybrid electric vehicle. The requirements were that the CVT accommodate flywheel speeds from 14,000 to 28,000 rpm and driveline speeds of 850 to 5000 rpm without slipping. Below 850 rpm a slipping clutch was used between the CVT and the driveline. The CVT was required to accommodate 330 ft-lb maximum torque and 100 hp maximum transient power. The weighted average power was 22 hp, the maximum allowable full-range shift time was 2 seconds and the required life was 2600 hours. The resulting design utilized two steel V-belts in series to accommodate the required wide speed ratio. The size of the CVT, including the slipping clutch, was 20.6 inches long, 9.8 inches high and 13.8 inches wide. The estimated weight was 155 lb. An overall potential efficiency of 95 percent was projected for the average power condition.

  7. Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.

    2016-03-01

    A thermoelectric generator (TEG) system demonstrator for diesel electric locomotives, with the objective of reducing the mechanical load on the thermoelectric modules (TEM), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and forms the basis for optimizing the TEG's geometry with a genetic multi-objective algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction of the fuel consumption, an operating strategy regarding the system power output is developed for the TEG system. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.

  8. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  10. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
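
    Records 9 and 10 describe the same family of procedures. A minimal sketch of the idea for a two-component univariate normal mixture, where omega = 1 reduces to ordinary EM (an illustrative implementation, not the authors' code; the data and starting values below are hypothetical):

        import numpy as np

        def relaxed_em_2gauss(x, mu, sigma2, pi1, omega=1.0, iters=200):
            # Generalized steepest-ascent (deflected-gradient) iteration for
            # ML estimation of a 2-component normal mixture; omega is the
            # step size, with local convergence expected for 0 < omega < 2.
            mu = np.asarray(mu, dtype=float)
            sigma2 = np.asarray(sigma2, dtype=float)
            for _ in range(iters):
                # E-step: posterior component probabilities (responsibilities)
                w = np.array([pi1, 1.0 - pi1])
                dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma2)) \
                         / np.sqrt(2 * np.pi * sigma2)
                r = dens / dens.sum(axis=1, keepdims=True)
                # Ordinary EM targets (the successive-approximations update)
                nk = r.sum(axis=0)
                mu_hat = (r * x[:, None]).sum(axis=0) / nk
                s2_hat = (r * (x[:, None] - mu_hat) ** 2).sum(axis=0) / nk
                pi_hat = nk[0] / x.size
                # Step of size omega toward the EM target (omega = 1: plain EM)
                mu += omega * (mu_hat - mu)
                sigma2 += omega * (s2_hat - sigma2)
                pi1 += omega * (pi_hat - pi1)
            return mu, sigma2, pi1

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.5, 200)])
        print(relaxed_em_2gauss(x, mu=[0.5, 3.0], sigma2=[1.0, 1.0], pi1=0.5, omega=1.4))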

  11. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
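
    As a rough illustration of the calibration step, a least-squares fit of one common rating-curve form can be written in a few lines; under normally distributed, uncensored residuals this coincides with MLE. This is a sketch only: LOADEST itself adds variable centering, AMLE handling of censored data, LAD as a fallback, and retransformation-bias correction.

        import numpy as np

        def fit_load_model(q, dtime, load):
            # LOADEST-style rating curve:
            #   ln(load) = b0 + b1*lnQ + b2*lnQ^2 + b3*sin(2*pi*t) + b4*cos(2*pi*t)
            # q: streamflow, dtime: decimal time in years, load: constituent load.
            lnq = np.log(q)
            X = np.column_stack([np.ones_like(lnq), lnq, lnq ** 2,
                                 np.sin(2 * np.pi * dtime), np.cos(2 * np.pi * dtime)])
            b, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
            return b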

  12. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bound richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
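
    For concreteness, the classic bias-corrected Chao1 point estimate is shown below, together with a naive truncation at the known class maximum; note that the published method goes further, transforming the entire confidence interval so that both bounds respect the [observed, maximum] range.

        def chao1(counts):
            # Classic bias-corrected Chao1 asymptotic richness estimator.
            s_obs = sum(1 for c in counts if c > 0)
            f1 = sum(1 for c in counts if c == 1)  # singletons
            f2 = sum(1 for c in counts if c == 2)  # doubletons
            return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

        def chao1_doubly_bounded(counts, s_max):
            # Naive illustration of imposing a known upper bound s_max on
            # the point estimate (the paper's full method also adjusts the
            # confidence interval, not just the point estimate).
            return min(chao1(counts), s_max)

        # Hypothetical class abundances, with a known maximum of 12 classes
        print(chao1_doubly_bounded([5, 3, 1, 1, 1, 2, 1], s_max=12))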

  13. Maximum angular accuracy of pulsed laser radar in photocounting limit.

    PubMed

    Elbaum, M; Diament, P; King, M; Edelson, W

    1977-07-01

    To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.

  14. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  15. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.

  16. Fast switching thyristor applied in nanosecond-pulse high-voltage generator with closed transformer core.

    PubMed

    Li, Lee; Bao, Chaobing; Feng, Xibo; Liu, Yunlong; Fochan, Lin

    2013-02-01

    For a compact and reliable nanosecond-pulse high-voltage generator (NPHVG), the specification parameter selection and potential usage of fast controllable solid-state switches have an important bearing on the optimal design. The NPHVG with a closed transformer core and fast switching thyristor (FST) was studied in this paper. According to the analysis of the T-type circuit, the expressions for the voltages and currents of the primary and secondary windings on the transformer core of the NPHVG were deduced, and the theoretical maximum analysis was performed. For an NPHVG, the rise rate of turn-on current (di/dt) across an FST may exceed its transient rating. Both the mean and maximum values of di/dt were determined by the leakage inductances of the transformer, and they differ by a factor of 1.57. The optimum winding ratio helps achieve a higher output voltage with a lower-specification FST, especially when the primary and secondary capacitances have been established. The oscillation period analysis can be effectively used to estimate the equivalent leakage inductance. When the core saturation effect was considered, the maximum di/dt estimated from the oscillating period of the primary current was more accurate than the one estimated from the oscillating period of the secondary voltage. Although increasing the leakage inductance of the NPHVG can decrease di/dt across the FST, it may reduce the peak output voltage of the NPHVG.

  17. Adsorption of Cd, Cu and Zn from aqueous solutions onto ferronickel slag under different potentially toxic metal combination.

    PubMed

    Park, Jong-Hwan; Kim, Seong-Heon; Kang, Se-Won; Kang, Byung-Hwa; Cho, Ju-Sik; Heo, Jong-Soo; Delaune, Ronald D; Ok, Yong Sik; Seo, Dong-Cheol

    2016-01-01

    Adsorption characteristics of potentially toxic metals in single- and multi-metal forms onto ferronickel slag were evaluated. Competitive sorption of metals by ferronickel slag has never been reported previously. The maximum adsorption capacities of toxic metals on ferronickel were in the order of Cd (10.2 mg g(-1)) > Cu (8.4 mg g(-1)) > Zn (4.4 mg g(-1)) in the single-metal adsorption isotherm and Cu (6.1 mg g(-1)) > Cd (2.3 mg g(-1)) > Zn (0.3 mg g(-1)) in the multi-metal adsorption isotherm. In comparison with single-metal adsorption isotherm, the reduction rates of maximum toxic metal adsorption capacity in the multi-metal adsorption isotherm were in the following order of Zn (93%) > Cd (78%) > Cu (27%). The Freundlich isotherm provides a slightly better fit than the Langmuir isotherm equation using ferronickel slag for potentially toxic metal adsorption. Multi-metal adsorption behaviors differed from single-metal adsorption due to competition, based on data obtained from Freundlich and Langmuir adsorption models and three-dimensional simulation. Especially, Cd and Zn were easily exchanged and substituted by Cu during multi-metal adsorption. Further competitive adsorption studies are necessary in order to accurately estimate adsorption capacity of ferronickel slag for potentially toxic metals in natural environments.
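
    A sketch of the isotherm-fitting step on hypothetical equilibrium data (the two model forms are the standard ones named in the abstract; the concentrations, sorbed amounts, and starting values below are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c, q_max, b):
            # q = q_max*b*C/(1 + b*C): monolayer sorption, capacity q_max (mg/g)
            return q_max * b * c / (1.0 + b * c)

        def freundlich(c, k, n_inv):
            # q = K*C^(1/n): empirical power-law isotherm
            return k * c ** n_inv

        c = np.array([5.0, 10, 25, 50, 100, 200])   # solution conc. (mg/L)
        q = np.array([1.9, 3.2, 5.6, 7.4, 8.9, 9.8])  # sorbed (mg/g)

        pl, _ = curve_fit(langmuir, c, q, p0=[10.0, 0.05])
        pf, _ = curve_fit(freundlich, c, q, p0=[1.0, 0.5])
        print("Langmuir q_max=%.1f mg/g, b=%.3f" % tuple(pl))
        print("Freundlich K=%.2f, 1/n=%.2f" % tuple(pf))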

  18. The geographic distribution and economic value of climate change-related ozone health impacts in the United States in 2030.

    PubMed

    Fann, Neal; Nolte, Christopher G; Dolwick, Patrick; Spero, Tanya L; Brown, Amanda Curry; Phillips, Sharon; Anenberg, Susan

    2015-05-01

    In this United States-focused analysis we use outputs from two general circulation models (GCMs) driven by different greenhouse gas forcing scenarios as inputs to regional climate and chemical transport models to investigate potential changes in near-term U.S. air quality due to climate change. We conduct multiyear simulations to account for interannual variability and characterize the near-term influence of a changing climate on tropospheric ozone-related health impacts near the year 2030, which is a policy-relevant time frame that is subject to fewer uncertainties than other approaches employed in the literature. We adopt a 2030 emissions inventory that accounts for fully implementing anthropogenic emissions controls required by federal, state, and/or local policies, which is projected to strongly influence future ozone levels. We quantify a comprehensive suite of ozone-related mortality and morbidity impacts including emergency department visits, hospital admissions, acute respiratory symptoms, and lost school days, and estimate the economic value of these impacts. Both GCMs project average daily maximum temperature to increase by 1-4°C and daily 8-hr maximum ozone to increase by 1-5 ppb by 2030, though each climate scenario produces ozone levels that vary greatly over space and time. We estimate tens to thousands of additional ozone-related premature deaths and illnesses per year for these two scenarios and calculate an economic burden of these health outcomes of hundreds of millions to tens of billions of U.S. dollars (2010$). Near-term changes to the climate have the potential to greatly affect ground-level ozone. Using a 2030 emission inventory with regional climate fields downscaled from two general circulation models, we project mean temperature increases of 1 to 4°C and climate-driven mean daily 8-hr maximum ozone increases of 1-5 ppb, though each climate scenario produces ozone levels that vary significantly over space and time. These increased ozone levels are estimated to result in tens to thousands of ozone-related premature deaths and illnesses per year and an economic burden of hundreds of millions to tens of billions of U.S. dollars (2010$).

  19. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  20. Estimated probabilities, volumes, and inundation area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall. The largest postwildfire debris-flow volumes were estimated for Carbonate Creek and Milton Creek drainage basins, for both the 5- and 25-year-recurrence, 1-hour-duration rainfalls. Results from FLO-2D modeling of the 5-year and 25-year recurrence, 1-hour rainfalls indicate that the debris flows from the four drainage basins would reach or nearly reach the Crystal River. The model estimates maximum instantaneous depths of debris-flow material during postwildfire debris flows that exceeded 5 meters in some areas, but the differences in model results between the 5-year and 25-year recurrence, 1-hour rainfalls are small. Existing stream channels or topographic flow paths likely control the distribution of debris-flow material, and the difference in estimated debris-flow volume (about 25 percent more volume for the 25-year-recurrence, 1-hour-duration rainfall compared to the 5-year-recurrence, 1-hour-duration rainfall) does not seem to substantially affect the estimated spatial distribution of debris-flow material. Historically, the Marble area has experienced periodic debris flows in the absence of wildfire. This report estimates the probability and volume of debris flow and maximum instantaneous inundation area depths after hypothetical wildfire and rainfall. This postwildfire debris-flow report does not address the current (2010) prewildfire debris-flow hazards that exist near Marble.

  1. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  2. Simple agrometeorological models for estimating Guineagrass yield in Southeast Brazil.

    PubMed

    Pezzopane, José Ricardo Macedo; da Cruz, Pedro Gomes; Santos, Patricia Menezes; Bosi, Cristiam; de Araujo, Leandro Coelho

    2014-09-01

    The objective of this work was to develop and evaluate agrometeorological models to simulate the production of Guineagrass. For this purpose, we used forage yield from 54 growing periods between December 2004-January 2007 and April 2010-March 2012 in irrigated and non-irrigated pastures in São Carlos, São Paulo state, Brazil (latitude 21°57'42″ S, longitude 47°50'28″ W and altitude 860 m). Initially we performed linear regressions between the agrometeorological variables and the average dry matter accumulation rate for irrigated conditions. Then we determined the effect of soil water availability on the relative forage yield considering irrigated and non-irrigated pastures, by means of segmented linear regression between water balance variables and relative production (the ratio of dry matter accumulation rates with and without irrigation). The models generated were evaluated with independent data related to 21 growing periods without irrigation in the same location, from eight growing periods in 2000 and 13 growing periods between December 2004-January 2007 and April 2010-March 2012. The results obtained show the satisfactory predictive capacity of the agrometeorological models under irrigated conditions based on univariate regression (mean temperature, minimum temperature and potential evapotranspiration or degree-days) or multivariate regression. The response of irrigation on production was well correlated with the climatological water balance variables (ratio between actual and potential evapotranspiration or between actual and maximum soil water storage). The models that performed best for estimating Guineagrass yield without irrigation were based on minimum temperature corrected by relative soil water storage, determined by the ratio between the actual soil water storage and the soil water holding capacity.

  3. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.

  4. Field estimates of body drag coefficient on the basis of dives in passerine birds.

    PubMed

    Hedenström, A; Liechti, F

    2001-03-01

    During forward flight, a bird's body generates drag that tends to decelerate its speed. By flapping its wings, or by converting potential energy into work if gliding, the bird produces both lift and thrust to balance the pull of gravity and drag. In flight mechanics, a dimensionless number, the body drag coefficient (C(D,par)), describes the magnitude of the drag caused by the body. The drag coefficient depends on the shape (or streamlining), the surface texture of the body and the Reynolds number. It is an important variable when using flight mechanical models to estimate the potential migratory flight range and characteristic flight speeds of birds. Previous wind tunnel measurements on dead, frozen bird bodies indicated that C(D,par) is 0.4 for small birds, while large birds should have lower values of approximately 0.2. More recent studies of a few birds flying in a wind tunnel suggested that previous values probably overestimated C(D,par). We measured maximum dive speeds of passerine birds during the spring migration across the western Mediterranean. When the birds reach their top speed, the pull of gravity should balance the drag of the body (and wings), giving us an opportunity to estimate C(D,par). Our results indicate that C(D,par) decreases with increasing Reynolds number within the range 0.17-0.77, with a mean C(D,par) of 0.37 for small passerines. A somewhat lower mean value could not be excluded because diving birds may control their speed below the theoretical maximum. Our measurements therefore support the notion that 0.4 (the 'old' default value) is a realistic value of C(D,par) for small passerines.
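
    The underlying force balance is easy to state: at the top of a steep dive, drag approximately equals the pull of gravity, so C(D,par) can be solved for directly. A minimal sketch with hypothetical values (the mass, frontal area and dive speed below are not from the paper, and the balance ignores residual wing drag and any speed control by the bird, both of which bias the estimate):

        def cd_par_from_terminal_dive(mass_kg, v_max, body_frontal_area,
                                      rho_air=1.225, g=9.81):
            # At terminal dive speed, weight ~ parasite drag:
            #   m*g = 0.5 * rho * v^2 * S_b * C(D,par)
            # so C(D,par) = 2*m*g / (rho * v^2 * S_b).
            return 2.0 * mass_kg * g / (rho_air * v_max ** 2 * body_frontal_area)

        # Hypothetical 20 g passerine, 6.0e-4 m^2 frontal area, 35 m/s dive
        print(cd_par_from_terminal_dive(0.020, 35.0, 6.0e-4))
        # ~0.44, within the 0.17-0.77 range reported above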

  5. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five and the resulting data are also de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
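
    A compressed sketch of the two fitting routes on synthetic data (scipy's ML fits only; the study's Adapted Hill threshold selection, semi-parametric bootstrap, de-clustering and L-moment estimation are omitted, and the quantile-based threshold below is just a stand-in):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        daily = rng.gamma(shape=2.0, scale=10.0, size=(30, 365))  # 30 yrs of fake daily rain

        # Annual Maximum series -> Generalized Extreme Value (ML fit)
        am = daily.max(axis=1)
        c, loc, scale = stats.genextreme.fit(am)
        r100_am = stats.genextreme.ppf(1 - 1 / 100, c, loc, scale)  # 100-yr return level

        # Partial Duration series -> Generalized Pareto on threshold exceedances
        u = np.quantile(daily, 0.995)          # stand-in for the MSE-based threshold
        exc = daily[daily > u] - u
        c2, _, scale2 = stats.genpareto.fit(exc, floc=0)
        lam = exc.size / 30.0                  # mean annual number of exceedances
        r100_pd = u + stats.genpareto.ppf(1 - 1 / (100 * lam), c2, 0, scale2)
        print(r100_am, r100_pd)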

  6. A maximum power point prediction method for group control of photovoltaic water pumping systems based on parameter identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Su, J. H.; Guo, L.; Chen, J.

    2017-06-01

    This paper puts forward a maximum power estimation method based on the photovoltaic array (PVA) model to solve optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). This method uses an improved genetic algorithm (GA) for model parameter estimation and identification in view of the multiple P-V characteristic curves of a PVA model, and then corrects the identification results through the least squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine parameters, and the results verify the accuracy and practicability of the methods.

  7. Potential for adaptation to climate change in a coral reef fish.

    PubMed

    Munday, Philip L; Donelson, Jennifer M; Domingos, Jose A

    2017-01-01

    Predicting the impacts of climate change requires knowledge of the potential to adapt to rising temperatures, which is unknown for most species. Adaptive potential may be especially important in tropical species that have narrow thermal ranges and live close to their thermal optimum. We used the animal model to estimate heritability, genotype by environment interactions and nongenetic maternal components of phenotypic variation in fitness-related traits in the coral reef damselfish, Acanthochromis polyacanthus. Offspring of wild-caught breeding pairs were reared for two generations at current-day and two elevated temperature treatments (+1.5 and +3.0 °C) consistent with climate change projections. Length, weight, body condition and metabolic traits (resting and maximum metabolic rate and net aerobic scope) were measured at four stages of juvenile development. Additive genetic variation was low for length and weight at 0 and 15 days posthatching (dph), but increased significantly at 30 dph. By contrast, nongenetic maternal effects on length, weight and body condition were high at 0 and 15 dph and became weaker at 30 dph. Metabolic traits, including net aerobic scope, exhibited high heritability at 90 dph. Furthermore, significant genotype x environment interactions indicated potential for adaptation of maximum metabolic rate and net aerobic scope at higher temperatures. Net aerobic scope was negatively correlated with weight, indicating that any adaptation of metabolic traits at higher temperatures could be accompanied by a reduction in body size. Finally, estimated breeding values for metabolic traits in F2 offspring were significantly affected by the parental rearing environment. Breeding values at higher temperatures were highest for transgenerationally acclimated fish, suggesting a possible role for epigenetic mechanisms in adaptive responses of metabolic traits. These results indicate a high potential for adaptation of aerobic scope to higher temperatures, which could enable reef fish populations to maintain their performance as ocean temperatures rise. © 2016 John Wiley & Sons Ltd.

  8. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  9. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  10. Yield Potential of Sugar Beet – Have We Hit the Ceiling?

    PubMed Central

    Hoffmann, Christa M.; Kenter, Christine

    2018-01-01

    The yield of sugar beet has continuously increased in the past decades. The question arises whether this progress will continue in the future. A key factor for increasing the yield potential of the crop is breeding progress. It was related to a shift in assimilate partitioning in the plant toward more storage carbohydrates (sucrose), whereas structural carbohydrates (leaves, cell wall compounds) unintentionally declined. The yield potential of sugar beet was estimated at 24 t sugar ha-1. For maximum yield, sufficient growth factors have to be available and the crop has to be able to fully utilize them. In sugar beet, limitations result from the lacking coincidence of maximum irradiation rates and full canopy cover, sink strength for carbon assimilation and a high water demand, which cannot be met by rainfall alone. After harvest, sugar losses during storage occur. The paper discusses options for a further increase in yield potential, like autumn sowing of sugar beet, increasing sink strength and related constraints. It is projected that increasing yield by further widening the ratio of storage to structural carbohydrates will reach a natural limit, as a certain cell wall stability is necessary. New challenges caused by climate change and by prolonged processing campaigns will occur. Thus breeding for improved pathogen resistance and storage properties will be even more important for successful sugar beet production than a further increase in yield potential itself. PMID:29599787

  11. Distribution of glyphosate and aminomethylphosphonic acid (AMPA) in agricultural topsoils of the European Union.

    PubMed

    Silva, Vera; Montanarella, Luca; Jones, Arwyn; Fernández-Ugalde, Oihane; Mol, Hans G J; Ritsema, Coen J; Geissen, Violette

    2018-04-15

Approval for glyphosate-based herbicides in the European Union (EU) is under intense debate due to concern about their effects on the environment and human health. The occurrence of glyphosate residues in European water bodies is rather well documented, whereas only scarce, fragmented and outdated information is available for European soils. We provide the first large-scale assessment of the distribution (occurrence and concentrations) of glyphosate and its main metabolite aminomethylphosphonic acid (AMPA) in EU agricultural topsoils, and estimate their potential spreading by wind and water erosion. Glyphosate and/or AMPA were present in 45% of the topsoils collected, originating from eleven countries and six crop systems, with a maximum concentration of 2 mg kg-1. Several glyphosate and AMPA hotspots were identified across the EU. Soil loss rates (obtained from recently derived European maps) were used to estimate the potential export of glyphosate and AMPA by wind and water erosion. The estimated exports, the result of a conceptually simple model, clearly indicate that particulate transport can contribute to human and environmental exposure to herbicide residues. Residue threshold values in soils are urgently needed to define potential risks for soil health and off-site effects related to export by wind and water erosion. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Measured energy savings and performance of power-managed personal computers and monitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated,' 'Standardized,' and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, while about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and offer greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.

  13. Comparison of Seasonal Terrestrial Water Storage Variations from GRACE with Groundwater-level Measurements from the High Plains Aquifer (USA)

    NASA Technical Reports Server (NTRS)

    Strassberg, Gil; Scanlon, Bridget R.; Rodell, Matthew

    2007-01-01

This study presents the first direct comparison of variations in seasonal GWS derived from GRACE TWS and simulated SM with GW-level measurements in a semiarid region. Results showed that variations in GWS and SM are the main sources controlling TWS changes over the High Plains, with negligible storage changes from surface water, snow, and biomass. Seasonal variations in GRACE TWS compare favorably with combined GWS from GW-level measurements (total 2,700 wells, average 1,050 GW-level measurements per season) and simulated SM from the Noah land surface model (R = 0.82, RMSD = 33 mm). Estimated uncertainty in seasonal GRACE-derived TWS is 8 mm, and estimated uncertainty in TWS changes is 11 mm. Estimated uncertainty in SM changes is 11 mm and combined uncertainty for TWS-SM changes is 15 mm. Seasonal TWS changes are detectable in 7 out of 9 monitored periods and maximum changes within a year (e.g. between winter and summer) are detectable in all 5 monitored periods. GRACE-derived GWS calculated from TWS-SM generally agrees with estimates based on GW-level measurements (R = 0.58, RMSD = 33 mm). Seasonal TWS-SM changes are detectable in 5 out of the 9 monitored periods and maximum changes are detectable in all 5 monitored periods. Good correspondence between GRACE data and GW-level measurements from the intensively monitored High Plains aquifer validates the potential for using GRACE TWS and simulated SM to monitor GWS changes and aquifer depletion in semiarid regions subjected to intensive irrigation pumpage. This method can be used to monitor regions where large-scale aquifer depletion is ongoing and in situ measurements are limited, such as the North China Plain or western India. This potential should be enhanced by future advances in GRACE processing, which will improve the spatial and temporal resolution of TWS changes and will further increase the applicability of GRACE data for monitoring GWS.
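    The storage decomposition behind these comparisons is simple bookkeeping: the groundwater storage change is the residual of total water storage minus soil moisture, and independent uncertainties combine in quadrature. A minimal sketch using the uncertainty figures quoted above:

      import math

      def gws_change(d_tws_mm, d_sm_mm):
          """Groundwater storage change as the residual of total water storage
          minus soil moisture (surface water, snow, biomass assumed negligible)."""
          return d_tws_mm - d_sm_mm

      def combined_uncertainty(sigma_tws_mm, sigma_sm_mm):
          """Independent errors add in quadrature."""
          return math.hypot(sigma_tws_mm, sigma_sm_mm)

      # Uncertainties quoted above: 11 mm for TWS changes, 11 mm for SM changes.
      print(combined_uncertainty(11.0, 11.0))  # ~15.6 mm, reported as ~15 mm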

  14. Geophysical Assessment of Groundwater Potential: A Case Study from Mian Channu Area, Pakistan.

    PubMed

    Hasan, Muhammad; Shang, Yanjun; Akhter, Gulraiz; Jin, Weijun

    2017-11-17

An integrated study using a geophysical method in combination with pumping tests and a geochemical method was carried out to delineate groundwater potential zones in the Mian Channu area of Pakistan. Vertical electrical soundings (VES) using the Schlumberger configuration with maximum current electrode spacing (AB/2 = 200 m) were conducted at 50 stations, and 10 pumping tests at borehole sites were performed in close proximity to 10 of the VES stations. The aim of this study is to establish a correlation between the hydraulic parameters obtained from the geophysical method and pumping tests so that the aquifer potential can be estimated from geoelectrical surface measurements where no pumping tests exist. The aquifer parameters, namely transmissivity and hydraulic conductivity, were estimated from Dar Zarrouk parameters by interpreting the layer parameters such as true resistivities and thicknesses. A geoelectrical succession of five-layer strata (i.e., topsoil, clay, clay sand, sand, and sand gravel) with sand as the dominant lithology was found in the study area. Physicochemical parameters interpreted according to World Health Organization and Food and Agriculture Organization guidelines were well correlated with the aquifer parameters obtained by the geoelectrical method and pumping tests. The aquifer potential zones identified by modeled resistivity, Dar Zarrouk parameters, pumped aquifer parameters, and physicochemical parameters reveal that sand and gravel sand with high values of transmissivity and hydraulic conductivity are highly promising water-bearing layers in the northwest of the study area. The strong correlation between estimated and pumped aquifer parameters suggests that, in the case of sparse well data, the geophysical technique is useful to estimate the hydraulic potential of an aquifer with varying lithology. © 2017, National Ground Water Association.
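    A minimal sketch of the Dar Zarrouk bookkeeping named above: longitudinal conductance and transverse resistance computed from interpreted layer thicknesses and resistivities, with transmissivity scaled from transverse resistance under the common assumption (Niwas-Singhal-type reasoning) that the product of hydraulic and electrical conductivity is roughly constant across the area. All numbers are hypothetical, not the study's calibration.

      # Hedged sketch; layer values and calibration constant are illustrative.

      def dar_zarrouk(layers):
          """Longitudinal conductance S (siemens) and transverse resistance T_r
          (ohm-m^2) from (thickness_m, resistivity_ohm_m) aquifer layer pairs."""
          s = sum(h / rho for h, rho in layers)
          t_r = sum(h * rho for h, rho in layers)
          return s, t_r

      # Interpreted VES layers for one hypothetical station.
      aquifer_layers = [(12.0, 60.0), (25.0, 85.0)]
      s, t_r = dar_zarrouk(aquifer_layers)

      # If K*sigma is roughly constant areally, transmissivity scales with T_r;
      # the constant would come from stations with co-located pumping tests.
      K_SIGMA = 0.8e-3  # hypothetical calibration constant
      transmissivity = K_SIGMA * t_r
      print(f"S = {s:.3f} S, T_r = {t_r:.0f} ohm-m^2, T ~ {transmissivity:.2f}")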

  15. Flood frequency estimates and documented and potential extreme peak discharges in Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.; McCabe, Lan P.

    2001-01-01

Knowledge of the magnitude and frequency of floods is required for the safe and economical design of highway bridges, culverts, dams, levees, and other structures on or near streams, and for flood plain management programs. Flood frequency estimates for gaged streamflow sites were updated, documented extreme peak discharges for gaged and miscellaneous measurement sites were tabulated, and potential extreme peak discharges for Oklahoma streamflow sites were estimated. Potential extreme peak discharges, derived from the relation between documented extreme peak discharges and contributing drainage areas, can provide valuable information concerning the maximum peak discharge that could be expected at a stream site. Potential extreme peak discharge is useful in conjunction with flood frequency analysis to give the best evaluation of flood risk at a site. Peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years were estimated for 352 gaged streamflow sites. Data through the 1999 water year were used from streamflow-gaging stations with at least 8 years of record within Oklahoma or about 25 kilometers into the bordering states of Arkansas, Kansas, Missouri, New Mexico, and Texas. These sites were in unregulated basins and in basins affected by regulation, urbanization, and irrigation. Documented extreme peak discharges and associated data were compiled for 514 sites in and near Oklahoma, 352 with streamflow-gaging stations and 162 at miscellaneous measurement sites or streamflow-gaging stations with short records, with a total of 671 measurements. The sites are fairly well distributed statewide; however, many streams, large and small, have never been monitored. Potential extreme peak-discharge curves were developed for streamflow sites in hydrologic regions of the state based on documented extreme peak discharges and the contributing drainage areas. Two hydrologic regions, east and west, were defined using 98 degrees 15 minutes longitude as the dividing line.

  16. Shallow aquifer storage and recovery (SASR): Initial findings from the Willamette Basin, Oregon

    NASA Astrophysics Data System (ADS)

    Neumann, P.; Haggerty, R.

    2012-12-01

    A novel mode of shallow aquifer management could increase the volumetric potential and distribution of groundwater storage. We refer to this mode as shallow aquifer storage and recovery (SASR) and gauge its potential as a freshwater storage tool. By this mode, water is stored in hydraulically connected aquifers with minimal impact to surface water resources. Basin-scale numerical modeling provides a linkage between storage efficiency and hydrogeological parameters, which in turn guides rulemaking for how and where water can be stored. Increased understanding of regional groundwater-surface water interactions is vital to effective SASR implementation. In this study we (1) use a calibrated model of the central Willamette Basin (CWB), Oregon to quantify SASR storage efficiency at 30 locations; (2) estimate SASR volumetric storage potential throughout the CWB based on these results and pertinent hydrogeological parameters; and (3) introduce a methodology for management of SASR by such parameters. Of 3 shallow, sedimentary aquifers in the CWB, we find the moderately conductive, semi-confined, middle sedimentary unit (MSU) to be most efficient for SASR. We estimate that users overlying 80% of the area in this aquifer could store injected water with greater than 80% efficiency, and find efficiencies of up to 95%. As a function of local production well yields, we estimate a maximum annual volumetric storage potential of 30 million m3 using SASR in the MSU. This volume constitutes roughly 9% of the current estimated summer pumpage in the Willamette basin at large. The dimensionless quantity lag #—calculated using modeled specific capacity, distance to nearest in-layer stream boundary, and injection duration—exhibits relatively high correlation to SASR storage efficiency at potential locations in the CWB. This correlation suggests that basic field measurements could guide SASR as an efficient shallow aquifer storage tool.

  17. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
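    A minimal sketch of the first estimator, with the scaling factor (bandwidth) chosen from the sample alone by leave-one-out likelihood cross-validation; this is one plausible automatic criterion, and the dissertation's specific algorithm may differ.

      import numpy as np

      def kde(x_eval, sample, h):
          """Gaussian kernel density estimate at points x_eval."""
          z = (x_eval[:, None] - sample[None, :]) / h
          return np.exp(-0.5 * z**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

      def loo_log_likelihood(sample, h):
          """Leave-one-out log likelihood as an automatic bandwidth criterion."""
          n = len(sample)
          z = (sample[:, None] - sample[None, :]) / h
          k = np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi))
          np.fill_diagonal(k, 0.0)  # exclude each point from its own estimate
          return np.sum(np.log(k.sum(axis=1) / (n - 1)))

      rng = np.random.default_rng(0)
      sample = rng.normal(size=200)
      bandwidths = np.linspace(0.1, 1.0, 40)
      best_h = bandwidths[np.argmax([loo_log_likelihood(sample, h) for h in bandwidths])]
      print(f"cross-validated scaling factor: {best_h:.2f}")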

  18. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  19. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  20. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  1. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  2. Application of Fuzzy Logic in Oral Cancer Risk Assessment

    PubMed Central

    SCROBOTĂ, Ioana; BĂCIUȚ, Grigore; FILIP, Adriana Gabriela; TODOR, Bianca; BLAGA, Florin; BĂCIUȚ, Mihaela Felicia

    2017-01-01

Background: The mapping of the malignization mechanism is still incomplete, but oxidative stress is strongly correlated with carcinogenesis. In our research, using fuzzy logic, we aimed to estimate the oxidative stress-related cancerization risk of oral potentially malignant disorders. Methods: Serum from 16 patients diagnosed (clinically and histopathologically) with oral potentially malignant disorders (Dept. of Cranio-Maxillofacial Surgery and Radiology, ”Iuliu Hațieganu” University of Medicine and Pharmacy, Cluj-Napoca, Romania) was processed fluorometrically for malondialdehyde and proton donor assays (Dept. of Physiology, ”Iuliu Hațieganu” University of Medicine and Pharmacy, Cluj-Napoca, Romania). The values were used as inputs, linguistic terms were assigned to them using the MIN-MAX method, and 25 IF-THEN inference rules were generated to estimate the output value, the cancerization risk, appreciated on a scale from 1 to 10 - e.g., IF malondialdehyde is very high AND proton donors are very low THEN the cancer risk reaches the maximum value (Dept. of Industrial Engineering, Faculty of Managerial and Technological Engineering, University of Oradea, Oradea, Romania) (2012–2014). Results: We estimated the cancerization risk of the oral potentially malignant disorders by implementing the multi-criteria decision support system based on serum malondialdehyde and proton donors’ values. The risk was estimated as a concrete numerical value on a scale from 1 to 10 depending on the input numerical/linguistic value. Conclusion: The multi-criteria decision support system proposed by us, integrated into a more complex computerized decision support system, could be used as an important aid in oral cancer screening and in establishing future medical decisions in oral potentially malignant disorders. PMID:28560191
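    A much-reduced sketch of the MIN-MAX inference style described above: piecewise-linear memberships, two invented IF-THEN rules standing in for the paper's 25, MIN for AND, MAX for aggregation, and centroid defuzzification onto the 1-10 risk scale. Membership breakpoints and input units are illustrative assumptions.

      import numpy as np

      def ramp(x, a, b):
          """Piecewise-linear membership rising from 0 at a to 1 at b."""
          return np.clip((x - a) / (b - a), 0.0, 1.0)

      risk_axis = np.linspace(1.0, 10.0, 181)  # the 1-10 output risk scale
      risk_high = ramp(risk_axis, 6.0, 10.0)
      risk_low = 1.0 - ramp(risk_axis, 1.0, 5.0)

      def infer(mda, proton_donors):
          """Two-rule MIN-MAX inference with centroid defuzzification."""
          mda_high = ramp(mda, 0.5, 1.0)  # inputs in normalized 0-1 units
          donors_low = 1.0 - ramp(proton_donors, 0.0, 0.5)
          # Rule 1: IF MDA is high AND proton donors are low THEN risk is high.
          out1 = np.minimum(min(mda_high, donors_low), risk_high)
          # Rule 2: IF MDA is not high AND donors are not low THEN risk is low.
          out2 = np.minimum(min(1.0 - mda_high, 1.0 - donors_low), risk_low)
          agg = np.maximum(out1, out2)  # MAX aggregation of rule outputs
          return float((risk_axis * agg).sum() / agg.sum())  # centroid

      print(f"risk ~ {infer(mda=0.9, proton_donors=0.1):.1f} on the 1-10 scale")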

  3. Application of Fuzzy Logic in Oral Cancer Risk Assessment.

    PubMed

    Scrobotă, Ioana; Băciuț, Grigore; Filip, Adriana Gabriela; Todor, Bianca; Blaga, Florin; Băciuț, Mihaela Felicia

    2017-05-01

The mapping of the malignization mechanism is still incomplete, but oxidative stress is strongly correlated with carcinogenesis. In our research, using fuzzy logic, we aimed to estimate the oxidative stress-related cancerization risk of oral potentially malignant disorders. Serum from 16 patients diagnosed (clinically and histopathologically) with oral potentially malignant disorders (Dept. of Cranio-Maxillofacial Surgery and Radiology, "Iuliu Hațieganu" University of Medicine and Pharmacy, Cluj-Napoca, Romania) was processed fluorometrically for malondialdehyde and proton donor assays (Dept. of Physiology, "Iuliu Hațieganu" University of Medicine and Pharmacy, Cluj-Napoca, Romania). The values were used as inputs, linguistic terms were assigned to them using the MIN-MAX method, and 25 IF-THEN inference rules were generated to estimate the output value, the cancerization risk, appreciated on a scale from 1 to 10 - e.g., IF malondialdehyde is very high AND proton donors are very low THEN the cancer risk reaches the maximum value (Dept. of Industrial Engineering, Faculty of Managerial and Technological Engineering, University of Oradea, Oradea, Romania) (2012-2014). We estimated the cancerization risk of the oral potentially malignant disorders by implementing the multi-criteria decision support system based on serum malondialdehyde and proton donors' values. The risk was estimated as a concrete numerical value on a scale from 1 to 10 depending on the input numerical/linguistic value. The multi-criteria decision support system proposed by us, integrated into a more complex computerized decision support system, could be used as an important aid in oral cancer screening and in establishing future medical decisions in oral potentially malignant disorders.

  4. Atmospheric CO2 sequestration in iron and steel slag: Consett, Co. Durham, UK.

    PubMed

    Mayes, William Matthew; Riley, Alex L; Gomes, Helena I; Brabham, Peter; Hamlyn, Joanna; Pullin, Huw; Renforth, Phil

    2018-06-12

    Carbonate formation in waste from the steel industry could constitute a non-trivial proportion of global requirements to remove carbon dioxide from the atmosphere at potentially low cost. To constrain this potential, we examined atmospheric carbon dioxide sequestration in a >20 million tonne legacy slag deposit in northern England, UK. Carbonates formed from the drainage water of the heap had stable carbon and oxygen isotopes between -12 and -25 ‰ and -5 and -18 ‰ for δ13C and δ18O respectively, suggesting atmospheric carbon dioxide sequestration in high pH solutions. From analysis of solution saturation state, we estimate that between 280 and 2,900 tCO2 have precipitated from the drainage waters. However, by combining a thirty-seven-year dataset of the drainage water chemistry with geospatial analysis, we estimate that <1 % of the maximum carbon capture potential of the deposit may have been realised. This implies that uncontrolled deposition of slag is insufficient to maximise carbon sequestration, and there may be considerable quantities of unreacted legacy deposits available for atmospheric carbon sequestration.

  5. Determination of the combustion behavior for pure components and mixtures using a 20-liter sphere

    NASA Astrophysics Data System (ADS)

    Mashuga, Chad Victor

    1999-11-01

The safest method to prevent fires and explosions of flammable vapors is to prevent the existence of flammable mixtures in the first place. This methodology requires detailed knowledge of the flammability region as a function of the fuel, oxygen, and nitrogen concentrations. A triangular flammability diagram is the most useful tool to display the flammability region and to determine if a flammable mixture is present during plant operations. An automated apparatus for assessing the flammability region and for determining the potential effect of confined fuel-air explosions is described. Data derived from the apparatus included the limits of combustion, the maximum combustion pressure, and the deflagration index, or KG. Accurate measurement of these parameters can be influenced by numerous experimental conditions, including igniter energy, humidity, and gas composition. Gas humidity had a substantial effect on the deflagration index, but little effect on the maximum combustion pressure. Small changes in gas composition had a greater effect on the deflagration index than on the maximum combustion pressure. Both the deflagration index and the maximum combustion pressure proved insensitive to the range of igniter energies examined. Estimation of flammability limits using a calculated adiabatic flame temperature (CAFT) method is demonstrated. The CAFT model is compared with the extensive experimental data from this work for methane, ethylene, and a 50/50 mixture of methane and ethylene. The CAFT model compares well with methane and ethylene throughout the flammability zone when using a 1200 K threshold temperature. Deviations between the method and the experimental data occur in the fuel-rich region. For the 50/50 fuel mixture the CAFT deviates only in the fuel-rich region; the inclusion of carbonaceous soot as one of the equilibrium products improved the fit. Determination of burning velocities from a spherical flame model utilizing the extensive pressure-time data was also completed. The burning velocities determined compare well with those of other investigators using this method. The data collected for the methane/ethylene mixture were used to evaluate mixing rules for the flammability limits, maximum combustion pressure, deflagration index, and burning velocity. These rules attempt to predict the behavior of fuel mixtures from pure component data. Le Chatelier's law and averaging both work well for predicting the flammability boundary in the fuel-lean region and for mixtures of inerted fuel and air. Both methods underestimate the flammability boundary in the fuel-rich region. For a mixture of methane and ethylene, we were unable to identify mixing rules for estimating the maximum combustion pressure and the burning velocity from pure component data. Averaging the deflagration indices for fuel-air mixtures did provide an adequate estimate of the mixture behavior. Le Chatelier's method overestimated the maximum deflagration index in air but provided a satisfactory estimate in the extreme fuel-lean and fuel-rich regions.
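    Le Chatelier's law, referenced above for the flammability boundary, combines pure-component lower flammability limits harmonically. A minimal sketch with typical handbook values (not this dissertation's measured limits):

      # LFL of a mixture: 1 / sum(y_i / LFL_i), y_i mole fractions of the fuel.
      def le_chatelier(fractions, limits):
          return 1.0 / sum(y / lfl for y, lfl in zip(fractions, limits))

      lfl_methane, lfl_ethylene = 5.0, 2.7  # vol % in air, typical literature values
      blend = le_chatelier([0.5, 0.5], [lfl_methane, lfl_ethylene])
      print(f"50/50 methane/ethylene LFL ~ {blend:.1f} vol %")  # ~3.5 vol %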

  6. Statistical field estimators for multiscale simulations.

    PubMed

    Eapen, Jacob; Li, Ju; Yip, Sidney

    2005-11-01

    We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.

  7. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with up to an 8-dimensional bias potential.
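    A hedged sketch of one ingredient named above, a nearest-neighbor density estimate in two CV dimensions, together with a bias proportional to kT times the log density, which would tend to flatten the sampled distribution. The ANN fitting stage and the iterative update of NN2B are omitted, and all parameters are illustrative.

      import math
      import numpy as np

      def knn_density(points, queries, k=10):
          """k-NN density estimate in d dimensions: rho ~ k / (n * V_ball(r_k))."""
          n, d = points.shape
          dists = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
          r_k = np.sort(dists, axis=1)[:, k - 1]
          v_unit = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit d-ball volume
          return k / (n * v_unit * r_k ** d)

      rng = np.random.default_rng(1)
      cv_samples = rng.normal(size=(2000, 2))  # stand-in for sampled CV values
      rho = knn_density(cv_samples, cv_samples[:50])
      kT = 2.494                               # kJ/mol at 300 K
      bias = kT * np.log(rho)                  # fills wells; defined up to a constant
      print(bias[:3])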

  8. Assessment of potential biomass energy production in China towards 2030 and 2050

    NASA Astrophysics Data System (ADS)

    Zhao, Guangling

    2018-01-01

The objective of this paper is to provide a more detailed picture of potential biomass energy production in the Chinese energy system towards 2030 and 2050. Biomass for bioenergy feedstocks comes from five sources: agricultural crop residues, forest residues and industrial wood waste, energy crops and woody crops, animal manure, and municipal solid waste. The potential biomass production is predicted based on resource availability. In the process of identifying biomass resource production, assumptions are made regarding arable land, marginal land, crop yields, forest growth rate, and meat consumption and waste production. Four scenarios were designed to describe potential biomass energy production and to elaborate the role of biomass energy in the Chinese energy system in 2030. The assessment shows that, under certain restrictions on land availability, the maximum potential biomass energy production is estimated to be 18,833 and 24,901 PJ in 2030 and 2050, respectively.

  9. Estimation of Maximum Ground Motions in the Form of ShakeMaps and Assessment of Potential Human Fatalities from Scenario Earthquakes on the Chishan Active Fault in southern Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, Kun Sung; Huang, Hsiang Chi; Shen, Jia Rong

    2017-04-01

Historically, there were many damaging earthquakes in southern Taiwan during the last century. Some of these earthquakes resulted in heavy loss of life. Accordingly, assessment of potential seismic hazards has become increasingly important in southern Taiwan, including Kaohsiung, Tainan and northern Pingtung areas, since the Central Geological Survey upgraded the Chishan active fault from a suspected fault to Category I in 2010. In this study, we first estimate the maximum seismic ground motions in terms of PGA, PGV and MMI by incorporating a site-effect term in attenuation relationships, aiming to show high seismic hazard areas in southern Taiwan. Furthermore, we assess potential death tolls due to large future earthquakes occurring on the Chishan active fault. From the maximum PGA ShakeMap for an Mw 7.2 scenario earthquake on the Chishan active fault in southern Taiwan, we can see that areas with high PGA, above 400 gal, are located in the northeastern, central and northern parts of southwestern Kaohsiung as well as the southern part of central Tainan. In addition, sites in Tainan City at similar distances from the Chishan fault show relatively greater PGA and PGV than those in Kaohsiung City and Pingtung County, mainly due to large site response factors in Tainan. On the other hand, the seismic hazard in terms of PGA and PGV is not particularly high in the areas near the Chishan fault, mainly because these areas are marked by low site response factors. Finally, the estimated fatalities in Kaohsiung City, at 5,230, 4,285 and 2,786 for Mw 7.2, 7.0 and 6.8, respectively, are higher than those estimated for Tainan City and Pingtung County. The main reason is high population density: above 10,000 persons per km2 in Fongshan, Zuoying, Sanmin, Cianjin, Sinsing, Yancheng and Lingya Districts, and between 5,000 and 10,000 persons per km2 in Nanzih and Gushan Districts in Kaohsiung City. Another point deserving special attention is that Kaohsiung City has more than 540 thousand households whose residences are over 50 years old, including bungalows and 2-3-story houses, many of which are still in use. Even more worrisome, many of these old structures in Kaohsiung are used as shops in the city center, where the population is highly concentrated. In the event of a major earthquake, the consequences would be severe. In light of the results of this study, we urge both the municipal and central governments to take effective seismic hazard mitigation measures in the highly urbanized areas with large numbers of old buildings in southern Taiwan.

  10. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    USGS Publications Warehouse

    Flint, L.E.; Flint, A.L.

    2008-01-01

Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6 °C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
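    A minimal sketch of the regression form described above for daily maximum stream temperature, fit by ordinary least squares to air temperature, simulated solar radiation, and vapor saturation deficit. The arrays are synthetic stand-ins for the basin records; the published coefficients are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 365
      tair_max = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, n)) + rng.normal(0, 2, n)
      solar = 250 + 150 * np.sin(np.linspace(0, 2 * np.pi, n))                 # W/m^2
      vsd = np.clip(0.4 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, n)), 0, None) # kPa
      tstream_max = 0.6 * tair_max + 0.01 * solar + 2.0 * vsd + rng.normal(0, 0.9, n)

      # T_stream_max = b0 + b1*T_air_max + b2*solar + b3*VSD
      X = np.column_stack([np.ones(n), tair_max, solar, vsd])
      beta, *_ = np.linalg.lstsq(X, tstream_max, rcond=None)
      resid = tstream_max - X @ beta
      print("coefficients:", np.round(beta, 3), " residual SD:",
            round(float(resid.std()), 2), "degC")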

  11. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
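    A compact sketch of the maximum entropy method of moments on a grid: find the exponential-family density whose moments match given values by iterating on the Lagrange multipliers. Matching only the first two moments recovers a Gaussian; real applications match more moments.

      import numpy as np

      x = np.linspace(-6, 6, 1201)
      dx = x[1] - x[0]
      target = np.array([0.0, 1.0])      # desired E[x], E[x^2]
      powers = np.stack([x, x**2])       # the moment functions

      lam = np.zeros(2)                  # Lagrange multipliers
      for _ in range(5000):
          w = np.exp(-lam @ powers)
          p = w / (w.sum() * dx)         # normalized maxent density on the grid
          model_moments = powers @ p * dx
          lam += 0.1 * (model_moments - target)  # fixed-point/ascent step
      print("lambdas:", np.round(lam, 3))  # ~[0, 0.5] -> exp(-x^2/2), a Gaussian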

  12. Estimation of eye lens doses received by pediatric interventional cardiologists.

    PubMed

    Alejo, L; Koren, C; Ferrer, C; Corredoira, E; Serrada, A

    2015-09-01

The maximum Hp(0.07) dose to the eye lens received in a year by pediatric interventional cardiologists was estimated. Optically stimulated luminescence dosimeters were placed on the eyes of an anthropomorphic phantom whose position in the room simulated the most common irradiation conditions. The maximum workload was considered, with data collected from procedures performed in the hospital. None of the maximum values obtained exceeded the dose limit of 20 mSv recommended by the ICRP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. On the potential for CO2 mineral storage in continental flood basalts – PHREEQC batch- and 1D diffusion–reaction simulations

    PubMed Central

    2012-01-01

Continental flood basalts (CFB) are considered potential CO2 storage sites because of their high reactivity and abundant divalent metal ions that can potentially trap carbon for geological timescales. Moreover, laterally extensive CFB are found in many places in the world within reasonable distances from major CO2 point emission sources. Based on the mineral and glass composition of the Columbia River Basalt (CRB), we estimated the potential of CFB to store CO2 in secondary carbonates. We simulated the system using kinetically controlled dissolution of primary basalt minerals (pyroxene, feldspar and glass) and the local equilibrium assumption for secondary phases (weathering products). The simulations were divided into closed-system batch simulations at a constant CO2 pressure of 100 bar with sensitivity studies of temperature and reactive surface area, an evaluation of the reactivity of H2O in scCO2, and finally 1D reactive diffusion simulations giving reactivity at CO2 pressures varying from 0 to 100 bar. Although the uncertainty in reactive surface area and the corresponding reaction rates is large, we have estimated the potential for CO2 mineral storage and identified factors that control the maximum extent of carbonation. The simulations showed that formation of carbonates from basalt at 40 °C may be limited to the formation of siderite and possibly Fe-Mg carbonates. Calcium was largely consumed by zeolite and oxide instead of forming carbonates. At higher temperatures (60-100 °C), magnesite is suggested to form together with siderite and ankerite. The maximum potential of CO2 stored as solid carbonates, if CO2 is supplied to the reactions without limit, is shown to depend on the availability of pore space, as the hydration and carbonation reactions increase the solid volume and clog the pore space. For systems such as the scCO2 phase with a limited amount of water, the total carbonation potential is limited by the amount of water available for hydration of the basalt. PMID:22697910

  14. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
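    A hedged sketch of the model structure described above: Gaussian samples whose variance is itself inverse-gamma distributed, with the local variance tracked by squaring and smoothing and the parameters recovered by moment matching, a simpler stand-in for the marginal-likelihood maximization used in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      alpha_true, beta_true = 4.0, 3.0
      variances = beta_true / rng.gamma(alpha_true, 1.0, size=2000)  # InvGamma draws
      emg = rng.normal(0.0, np.sqrt(np.repeat(variances, 50)))       # 50 samples each

      # Square and smooth the signal to track the local variance.
      win = 50
      local_var = np.convolve(emg**2, np.ones(win) / win, mode="valid")[::win]

      # Moment matching for InvGamma(alpha, beta): mean = beta/(alpha-1),
      # var = mean^2/(alpha-2)  =>  alpha = mean^2/var + 2, beta = mean*(alpha-1).
      m, v = local_var.mean(), local_var.var()
      alpha_hat = m**2 / v + 2
      beta_hat = m * (alpha_hat - 1)
      print(f"alpha ~ {alpha_hat:.1f} (true 4), beta ~ {beta_hat:.1f} (true 3)")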

  15. Crash avoidance potential of four passenger vehicle technologies.

    PubMed

    Jermakian, Jessica S

    2011-05-01

    The objective was to update estimates of maximum potential crash reductions in the United States associated with each of four crash avoidance technologies: side view assist, forward collision warning/mitigation, lane departure warning/prevention, and adaptive headlights. Compared with previous estimates (Farmer, 2008), estimates in this study attempted to account for known limitations of current systems. Crash records were extracted from the 2004-08 files of the National Automotive Sampling System General Estimates System (NASS GES) and the Fatality Analysis Reporting System (FARS). Crash descriptors such as vehicle damage location, road characteristics, time of day, and precrash maneuvers were reviewed to determine whether the information or action provided by each technology potentially could have prevented or mitigated the crash. Of the four crash avoidance technologies, forward collision warning/mitigation had the greatest potential for preventing crashes of any severity; the technology is potentially applicable to 1.2 million crashes in the United States each year, including 66,000 serious and moderate injury crashes and 879 fatal crashes. Lane departure warning/prevention systems appeared relevant to 179,000 crashes per year. Side view assist and adaptive headlights could prevent 395,000 and 142,000 crashes per year, respectively. Lane departure warning/prevention was relevant to the most fatal crashes, up to 7500 fatal crashes per year. A combination of all four current technologies potentially could prevent or mitigate (without double counting) up to 1,866,000 crashes each year, including 149,000 serious and moderate injury crashes and 10,238 fatal crashes. If forward collision warning were extended to detect objects, pedestrians, and bicyclists, it would be relevant to an additional 3868 unique fatal crashes. There is great potential effectiveness for vehicle-based crash avoidance systems. However, it is yet to be determined how drivers will interact with the systems. The actual effectiveness of these systems will not be known until sufficient real-world experience has been gained. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
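    A small sketch of the physics both method families rely on: conduction velocity is inter-electrode distance over the propagation delay between channels, with the delay found here by cross-correlation. The waveform shape, distance, and sampling rate are invented for illustration and chosen so the delay is an integer number of samples.

      import numpy as np

      fs = 2000.0    # sampling rate, Hz (assumed)
      dist = 0.008   # inter-electrode distance, m (assumed)
      true_cv = 4.0  # m/s -> a delay of exactly 4 samples here

      rng = np.random.default_rng(4)
      t = np.arange(0, 0.2, 1 / fs)
      # A crude MUAP-like waveform: a modulated Gaussian pulse centered at 50 ms.
      muap = np.exp(-((t - 0.05) / 0.004) ** 2) * np.sin(2 * np.pi * 120 * (t - 0.05))
      delay = int(round(dist / true_cv * fs))  # propagation delay in samples
      ch1 = muap + 0.05 * rng.normal(size=t.size)
      ch2 = np.roll(muap, delay) + 0.05 * rng.normal(size=t.size)

      # The lag maximizing the cross-correlation gives the propagation time.
      lag = np.argmax(np.correlate(ch2, ch1, mode="full")) - (t.size - 1)
      print(f"estimated CV ~ {dist / (lag / fs):.2f} m/s")  # ~4 m/s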

  17. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches to studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called the bootstrap for the evaluation of confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto. The sensitivity analysis of the model results to variation in the basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.

  18. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing

    2012-07-01

This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
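    The consumer test implied by the CEP scenario is a standard net-present-value comparison: discounted energy-bill savings over the appliance life against the incremental purchase cost. A minimal sketch with hypothetical numbers:

      def npv(incremental_cost, annual_kwh_saved, price_per_kwh, life_years, rate):
          """Net present value to the consumer of a more efficient appliance."""
          savings = sum(annual_kwh_saved * price_per_kwh / (1 + rate) ** t
                        for t in range(1, life_years + 1))
          return savings - incremental_cost

      # Hypothetical standard: $35 extra purchase cost, 120 kWh/yr saved,
      # $0.12/kWh tariff, 15-year life, 5% discount rate.
      print(f"NPV = ${npv(35.0, 120.0, 0.12, 15, 0.05):.2f}")  # positive -> cost-effective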

  19. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  20. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
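    The factor-of-10 agreement check mentioned above reduces to a bound on the log ratio of predicted to observed statistics. A minimal sketch with made-up site values:

      import numpy as np

      observed = np.array([0.8, 3.2, 0.05, 12.0, 0.6])   # ug/L, hypothetical sites
      predicted = np.array([1.1, 1.9, 0.21, 7.5, 0.05])

      # Within a factor of 10 means |log10(pred/obs)| <= 1.
      within_10x = np.abs(np.log10(predicted / observed)) <= 1.0
      print(f"{within_10x.sum()} of {within_10x.size} sites within a factor of 10")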

  1. Bayesian framework for modeling diffusion processes with nonlinear drift based on nonlinear and incomplete observations.

    PubMed

    Wu, Hao; Noé, Frank

    2011-03-01

Diffusion processes are relevant for a variety of phenomena in the natural sciences, including diffusion of cells or biomolecules within cells, diffusion of molecules on a membrane or surface, and diffusion of a molecular conformation within a complex energy landscape. Many experimental tools now exist to track such diffusive motions in single cells or molecules, including high-resolution light microscopy, optical tweezers, fluorescence quenching, and Förster resonance energy transfer (FRET). Experimental observations are most often indirect and incomplete: (1) they do not directly reveal the potential or diffusion constants that govern the diffusion process, (2) they have limited time and space resolution, and (3) the highest-resolution experiments do not track the motion directly but rather probe it stochastically by recording single events, such as photons, whose properties depend on the state of the system under investigation. Here, we propose a general Bayesian framework to model diffusion processes with nonlinear drift based on incomplete observations as generated by various types of experiments. A maximum penalized likelihood estimator is given, as well as a Gibbs sampling method that allows one to estimate the trajectories that caused the measurement, the nonlinear drift or potential function, and the noise or diffusion matrices, as well as uncertainty estimates of these properties. The approach is illustrated on numerical simulations of FRET experiments, where it is shown that trajectories, potentials, and diffusion constants can be efficiently and reliably estimated even in cases with little statistics or nonequilibrium measurement conditions.
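    A far simpler cousin of this framework, for the fully observed case, illustrates the quantities being inferred: the drift and diffusion of a 1D overdamped Langevin process estimated from binned conditional moments of the trajectory increments, drift(x) ~ E[dx | x]/dt and D(x) ~ E[dx^2 | x]/(2 dt). The Bayesian machinery for indirect, noisy, event-based observations is beyond this sketch.

      import numpy as np

      rng = np.random.default_rng(5)
      dt, n, D = 1e-3, 200_000, 1.0
      x = np.empty(n)
      x[0] = 0.0
      for i in range(1, n):  # double-well potential U = (x^2 - 1)^2, kT = 1
          force = -4 * x[i-1] * (x[i-1]**2 - 1)
          x[i] = x[i-1] + D * force * dt + np.sqrt(2 * D * dt) * rng.normal()

      dx = np.diff(x)
      bins = np.linspace(-1.6, 1.6, 25)
      idx = np.digitize(x[:-1], bins)
      for b in (6, 12, 18):  # a few interior bins
          sel = idx == b
          center = bins[b-1] + (bins[1] - bins[0]) / 2
          drift = dx[sel].mean() / dt
          diff = (dx[sel]**2).mean() / (2 * dt)
          print(f"x ~ {center:+.2f}: drift ~ {drift:+.2f}, D ~ {diff:.2f}")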

  2. Site specific risk assessment of an energy-from-waste/thermal treatment facility in Durham Region, Ontario, Canada. Part B: Ecological risk assessment.

    PubMed

    Ollson, Christopher A; Whitfield Aslund, Melissa L; Knopper, Loren D; Dan, Tereza

    2014-01-01

The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste (EFW) thermal treatment facility as part of a long-term strategy for the management of their municipal solid waste. In this paper we present the results of a comprehensive ecological risk assessment (ERA) for this planned facility, based on baseline sampling and site-specific modeling to predict facility-related emissions, which was subsequently accepted by regulatory authorities. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and the maximum design capacity (400,000 tonnes per year). In general, calculated ecological hazard quotients (EHQs) and screening ratios (SRs) for receptors did not exceed the benchmark value (1.0). The only exceedances noted were generally due to existing baseline media concentrations, which did not differ from those expected for similar unimpacted sites in Ontario, suggesting that these exceedances reflect conservative assumptions applied in the risk assessment rather than actual potential risk. However, under predicted upset conditions at 400,000 tonnes per year (i.e., facility start-up, shutdown, and loss of air pollution control), a potentially unacceptable risk was estimated for freshwater receptors with respect to benzo(g,h,i)perylene (SR = 1.1), which could not be attributed to baseline conditions. Although this slight exceedance reflects a conservative worst-case scenario (upset conditions coinciding with worst-case meteorological conditions), further investigation of potential ecological risk should be performed if this facility is expanded to the maximum operating capacity in the future. © 2013.

  3. Observation of Snow cover glide on Sub-Alpine Coniferous Forests in Mount Zao, Northeastern Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, A.; Suzuki, K.

    2017-12-01

This study clarifies the snow cover glide behavior in the sub-alpine coniferous forests on Mount Zao, Northeastern Japan, in the winter of 2014-2015. We installed a sled-type glide meter and measured the glide motion on a slope within an Abies mariesii forest and on its surrounding slope. In addition, we observed air temperature, snow depth, snow density, and snow temperature to discuss the relationship between weather conditions and glide occurrence. The snow cover of the 2014-15 winter started on November 13th and disappeared on April 21st. The maximum snow depth, 242 cm, was recorded on February 1st. Snow cover glide on the surrounding slope first occurred on February 10th, although the maximum snow depth was recorded on February 1st. The glide motion on the surrounding slope continued at a velocity of 0.4 cm per day and stopped on March 16th. The cumulative amount of glide was 21.1 cm. Snow cover glide in the A. mariesii forest first occurred even later, on February 21st; its motion was intermittent and extremely small. In the sub-alpine zone of Mount Zao, snow cover glide intensity is estimated to be 289 kg/m2 in March, when the snow water equivalent is at its maximum. In the same period, the maximum snow cover glide intensity is estimated to be about 1,000 kg/m2 on very steep slopes where the slope angle is about 35 degrees. Although the potential for snow cover glide is high enough, the glide is suppressed by the stems of A. mariesii trees in the sub-alpine coniferous forest.

  4. Regulation of water flux through tropical forest canopy trees: do universal rules apply?

    PubMed

    Meinzer, F C; Goldstein, G; Andrade, J L

    2001-01-01

    Tropical moist forests are notable for their richness in tree species. The presence of such a diverse tree flora presents potential problems for scaling up estimates of water use from individual trees to entire stands and for drawing generalizations about physiological regulation of water use in tropical trees. We measured sapwood area or sap flow, or both, in 27 co-occurring canopy species in a Panamanian forest to determine the extent to which relationships between tree size, sapwood area and sap flow were species-specific, or whether they were constrained by universal functional relationships between tree size, conducting xylem area, and water use. For the 24 species in which active xylem area was estimated over a range of size classes, diameter at breast height (DBH) accounted for 98% of the variation in sapwood area and 67% of the variation in sapwood depth when data for all species were combined. The DBH alone also accounted for ≥90% of the variation in both maximum and total daily sap flux density in the outermost 2 cm of sapwood for all species taken together. Maximum sap flux density measured near the base of the tree occurred at about 1,400 h in the largest trees and 1,130 h in the smallest trees studied, and DBH accounted for 93% of the variation in the time of day at which maximum sap flow occurred. The shared relationship between tree size and time of maximum sap flow at the base of the tree suggests that a common relationship between diurnal stem water storage capacity and tree size existed. These results are consistent with a recent hypothesis that allometric scaling of plant vascular systems, and therefore water use, is universal.

  5. Maximizing the potential of cropping systems for nematode management.

    PubMed

    Noe, J P; Sasser, J N; Imbriani, J L

    1991-07-01

    Quantitative techniques were used to analyze and determine optimal potential profitability of 3-year rotations of cotton, Gossypium hirsutum cv. Coker 315, and soybean, Glycine max cv. Centennial, with increasing population densities of Hoplolaimus columbus. Data collected from naturally infested on-farm research plots were combined with economic information to construct a microcomputer spreadsheet analysis of the cropping system. Nonlinear mathematical functions were fitted to field data to represent damage functions and population dynamic curves. Maximum yield losses due to H. columbus were estimated to be 20% on cotton and 42% on soybean. Maximum at-harvest population densities were calculated to be 182/100 cm³ soil for cotton and 149/100 cm³ soil for soybean. Projected net incomes ranged from a $17.74/ha net loss for the soybean-cotton-soybean sequence to a net profit of $46.80/ha for the cotton-soybean-cotton sequence. The relative profitability of various rotations changed as nematode densities increased, indicating economic thresholds for recommending alternative crop sequences. The utility and power of quantitative optimization was demonstrated for comparisons of rotations under different economic assumptions and with other management alternatives.

  6. The theoretical limit to plant productivity.

    PubMed

    DeLucia, Evan H; Gomez-Casanovas, Nuria; Greenberg, Jonathan A; Hudiburg, Tara W; Kantola, Ilsa B; Long, Stephen P; Miller, Adam D; Ort, Donald R; Parton, William J

    2014-08-19

    Human population and economic growth are accelerating the demand for plant biomass to provide food, fuel, and fiber. The annual increment of biomass to meet these needs is quantified as net primary production (NPP). Here we show that an underlying assumption in some current models may lead to underestimates of the potential production from managed landscapes, particularly of bioenergy crops that have low nitrogen requirements. Using a simple light-use efficiency model and the theoretical maximum efficiency with which plant canopies convert solar radiation to biomass, we provide an upper-envelope NPP unconstrained by resource limitations. This theoretical maximum NPP approached 200 tC ha⁻¹ yr⁻¹ at point locations, roughly 2 orders of magnitude higher than most current managed or natural ecosystems. Recalculating the upper envelope estimate of NPP limited by available water reduced it by half or more in 91% of the land area globally. While the high conversion efficiencies observed in some extant plants indicate great potential to increase crop yields without changes to the basic mechanism of photosynthesis, particularly for crops with low nitrogen requirements, realizing such high yields will require improvements in water use efficiency.
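
    The upper-envelope calculation lends itself to a short worked example. The sketch below applies a generic light-use efficiency bound; every parameter value is an illustrative assumption, not a figure from the paper, and the result lands at the same order of magnitude as the point-location maximum quoted above.

    # Upper-envelope NPP from a light-use efficiency bound (all values assumed).
    insolation = 7.0e3        # annual solar radiation, MJ m-2 yr-1 (sunny tropical site)
    eps_max = 0.065           # assumed theoretical max solar-to-biomass conversion efficiency
    energy_content = 20.0     # MJ per kg dry biomass
    carbon_fraction = 0.48    # kg C per kg dry biomass

    biomass = insolation * eps_max / energy_content   # kg dry biomass m-2 yr-1
    npp = biomass * carbon_fraction * 10.0            # tC ha-1 yr-1 (1 kg m-2 = 10 t ha-1)
    print(f"upper-envelope NPP ~ {npp:.0f} tC/ha/yr")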

  7. Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.

    PubMed

    Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús

    2008-10-01

    Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM); the set cover approach maximum specificity set cover (MSSC) and the sum-product algorithm (SPA). After accepting an input set of proteins with Uniprot ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and be simple to use. An example of inference in a signaling network of myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website. http://cytoprophet.cse.nd.edu.

  8. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
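
    To make the modulating-function idea concrete, here is a minimal sketch for a first-order system, assuming a synthetic model y' = -a*y + b*u rather than the paper's aircraft application. Because each modulating function vanishes at both ends of the record, integration by parts eliminates the unknown initial condition, which is the special property the abstract highlights.

    import numpy as np
    from scipy.integrate import odeint, trapezoid

    T = 10.0
    t = np.linspace(0.0, T, 2001)
    a_true, b_true = 0.8, 2.0
    u = np.sin(1.3 * t)
    y = odeint(lambda y, s: -a_true * y + b_true * np.sin(1.3 * s), 0.0, t).ravel()

    rows, rhs = [], []
    for k in (1, 2, 3):
        phi = 1.0 - np.cos(2 * np.pi * k * t / T)        # vanishes at t = 0 and t = T
        dphi = (2 * np.pi * k / T) * np.sin(2 * np.pi * k * t / T)
        # By parts: int(phi*y') = -int(phi'*y) = -a*int(phi*y) + b*int(phi*u),
        # so the initial condition never enters the estimation equations.
        rows.append([-trapezoid(phi * y, t), trapezoid(phi * u, t)])
        rhs.append(-trapezoid(dphi * y, t))

    a_hat, b_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    print(a_hat, b_hat)   # should recover ~0.8 and ~2.0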

  9. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters, together with estimates of their variances; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.

  11. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  12. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.

  13. Estimated cost savings of increased use of intravenous tissue plasminogen activator for acute ischemic stroke in Canada.

    PubMed

    Yip, Todd R; Demaerschalk, Bart M

    2007-06-01

    Intravenous tissue plasminogen activator (tPA) is an economically worthwhile but underused treatment option for acute ischemic stroke. We sought to identify the extent of tPA use in Canadian medical centers and the potential savings associated with increased use nationally and by province. We determined the nationwide annual incidence of ischemic stroke from the Canadian Institute of Health Information. The proportion of all ischemic stroke patients who received tPA was derived from published data. Economic analyses that report the expected annual cost savings of tPA were consulted. The analysis was conducted from the perspective of a universal health care system during 1 year. We estimated cost-savings with incrementally (eg, 2%, 4%, 6%, 8%, 10%, 15%, and 20%) increased use of tPA for acute ischemic stroke nationally and provincially. The current average national tPA utilization is 1.4%. For every increase of 2 percentage points in utilization, $757,204 (Canadian) could possibly be saved annually (95% CI maximum loss of $3,823,992 to a maximum savings of $2,201,252). With a 20% rate, >$7.5 million (Canadian) could be saved nationwide the first year. We estimate that even small increases in the proportion of all Canadian ischemic stroke patients receiving tPA could result in substantial realized savings for Canada's health care system.
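
    As a back-of-envelope check of the arithmetic, the snippet below scales the quoted $757,204-per-2-percentage-points figure linearly across the utilization increments listed in the abstract; the linearity is an assumption of this sketch, not a claim about the study's underlying model.

    # Savings figure taken from the abstract; linear scaling assumed.
    saving_per_2pts = 757_204
    for points in (2, 4, 6, 8, 10, 15, 20):
        print(f"+{points} pts utilization: ~${points / 2 * saving_per_2pts:,.0f} CAD/yr")
    # A 20-point increase gives ~$7.57M, matching the quoted >$7.5 million figure.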

  14. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the objective function of feature-point measuring error is analyzed. A novel error objective function based on the spatial collinearity error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, such that it avoids poor convergence and significantly resists uncertainty. Hence, the optimized GOI algorithm extends the field-of-view applications and improves the accuracy and robustness of the measuring results through redundant information. Finally, simulation and practical experiments show that the maximum re-projection error in the image coordinates of the target is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum estimation errors of static and dynamic measurement of rocket nozzle motion are below 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.

  15. An index-flood model for deficit volumes assessment

    NASA Astrophysics Data System (ADS)

    Strnad, Filip; Moravec, Vojtěch; Hanel, Martin

    2017-04-01

    The estimation of return periods of hydrological extreme events and the evaluation of risks related to such events are objectives of many water resources studies. The aim of this study is to develop a statistical model for drought indices using extreme value theory and the index-flood method, and to use this model to estimate return levels of maximum deficit volumes of total runoff and baseflow. Deficit volumes for 133 catchments in the Czech Republic for the period 1901-2015, simulated by the hydrological model Bilan, are considered. The characteristics of simulated deficit periods (severity, intensity and length) correspond well to those based on observed data. It is assumed that annual maximum deficit volumes in each catchment follow the generalized extreme value (GEV) distribution. The catchments are divided into three homogeneous regions considering long-term mean runoff, potential evapotranspiration and baseflow. In line with the index-flood method it is further assumed that the deficit volumes within each homogeneous region are identically distributed after scaling with a site-specific factor. The goodness-of-fit of the statistical model is assessed by Anderson-Darling statistics. For the estimation of critical values of the test, several resampling strategies allowing for appropriate handling of years without drought are presented. Finally, the significance of the trends in the deficit volumes is assessed by a likelihood ratio test.
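
    A minimal sketch of the index-flood step, using synthetic annual maxima in place of the Bilan-simulated deficit volumes (site count, scales, and distributions are all invented for illustration):

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    # Fake annual maximum deficit volumes for 5 sites over 115 years (1901-2015).
    sites = {f"site{i}": rng.gamma(2.0, 50.0 * (i + 1), size=115) for i in range(5)}

    # Index-flood: scale each site by its at-site mean, pool, fit one regional GEV.
    pooled = np.concatenate([v / v.mean() for v in sites.values()])
    c, loc, scale = genextreme.fit(pooled)

    T = 100                                                  # return period, years
    growth = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)    # regional growth curve
    return_levels = {k: v.mean() * growth for k, v in sites.items()}
    print(return_levels)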

  16. A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope

    DOE PAGES

    Baxter, E. J.; Keisler, R.; Dodelson, S.; ...

    2015-06-22

    Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We also develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly 0.85σ in units of the statistical error bar, although this estimate should be viewed as an upper limit. Furthermore, we apply our maximum likelihood technique to 513 clusters selected via their Sunyaev–Zeldovich (SZ) signatures in SPT data, and rule out the null hypothesis of no lensing at 3.1σ. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: M_200,lens = 0.83 (+0.38, −0.37) M_200,SZ (68% C.L., statistical error only).

  17. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution from given information. The proposed method offers an alternative way to assess the input function from existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters, which in the end allows for a more reliable use of DCE-MRI. Schattauer GmbH.
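
    The maximum entropy construction can be illustrated generically: among all distributions on a grid that satisfy a known moment constraint, pick the one with maximal entropy. The fixed-mean constraint below is purely illustrative and is not the paper's AIF prior.

    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(0.0, 10.0, 50)      # grid on which the distribution lives
    target_mean = 3.0                   # assumed moment constraint (illustrative)

    p0 = np.full_like(x, 1.0 / x.size)  # start from the uniform distribution
    res = minimize(
        lambda p: np.sum(p * np.log(p + 1e-12)),        # negative entropy
        p0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * x.size,
        constraints=[
            {"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq", "fun": lambda p: p @ x - target_mean},
        ],
    )
    p_mem = res.x   # numerically close to the exponential-family MaxEnt solution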

  18. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with a stochastic model of the ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also performed. Numerical simulation validates the superiority of the new estimator, and its superior performance was also shown by in vivo measurement of the optic nerve head. PMID:28270974

  19. Inlet noise suppressor design method based upon the distribution of acoustic power with mode cutoff ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1976-01-01

    A liner design for noise suppressors with outer wall treatment such as in an engine inlet is presented which potentially circumvents the problems of resolution in modal measurement. The method is based on the fact that the modal optimum impedance and the maximum possible sound power attenuation at this optimum can be expressed as functions of cutoff ratio alone. Modes with similar cutoff ratios propagate similarly in the duct and in addition propagate similarly to the far field. Thus there is no need to determine the acoustic power carried by these modes individually, and they can be grouped together as one entity. With the optimum impedance and maximum attenuation specified as functions of cutoff ratio, the off-optimum liner performance can be estimated using an approximate attenuation equation.

  20. Estimating the global prevalence of transthyretin familial amyloid polyneuropathy

    PubMed Central

    Waddington‐Cruz, Márcia; Botteman, Marc F.; Carter, John A.; Chopra, Avijeet S.; Hopps, Markay; Stewart, Michelle; Fallet, Shari; Amass, Leslie

    2018-01-01

    ABSTRACT Introduction: This study sought to estimate the global prevalence of transthyretin familial amyloid polyneuropathy (ATTR‐FAP). Methods: Prevalence estimates and information supporting prevalence calculations were extracted from records yielded by reference‐database searches (2005–2016), conference proceedings, and nonpeer‐reviewed sources. Prevalence was calculated as prevalence rate multiplied by general population size, then extrapolated to countries without prevalence estimates but with reported cases. Results: Searches returned 3,006 records; 1,001 were fully assessed and 10 retained, yielding prevalence for 10 "core" countries, then extrapolated to 32 additional countries. ATTR‐FAP prevalence in core countries, extrapolated countries, and globally was 3,762 (range, 3,639–3,884), 6,424 (range, 1,887–34,584), and 10,186 (range, 5,526–38,468) persons, respectively. Discussion: The mid global prevalence estimate (10,186) approximates the maximum commonly accepted estimate (5,000–10,000). The upper limit (38,468) implies potentially higher prevalence. These estimates should be interpreted carefully because the contributing evidence was heterogeneous and carried an overall moderate risk of bias. This highlights the need for increased rare‐disease epidemiological assessment and clinician awareness. Muscle Nerve 57: 829–837, 2018 PMID:29211930
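
    The core arithmetic (prevalence rate times population, with the rate carried over to countries that report cases but no rate) is simple enough to sketch; all numbers below are invented, not the study's inputs.

    # Illustrative prevalence extrapolation; rate and populations are assumed.
    rate_per_million = 0.9            # hypothetical country-level prevalence rate
    core_population = 60_000_000      # country with a published rate
    extrapolated_pop = 25_000_000     # country with reported cases but no rate

    core_cases = rate_per_million * core_population / 1e6
    extrapolated_cases = rate_per_million * extrapolated_pop / 1e6  # rate carried over
    print(core_cases + extrapolated_cases)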

  1. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
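
    The AIC/BIC selection step can be sketched generically; the example below scores polynomial models of increasing complexity on synthetic data rather than the paper's floorplan models, but the criteria are computed the same way.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 40)
    y = 2.0 * x + 0.1 * rng.standard_normal(x.size)   # truth is linear

    n = y.size
    for degree in (1, 2, 3):
        coef = np.polyfit(x, y, degree)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        ll = -0.5 * n * np.log(rss / n)        # Gaussian log-likelihood up to constants
        k = degree + 1                         # number of estimated parameters
        aic = 2 * k - 2 * ll
        bic = k * np.log(n) - 2 * ll
        print(degree, round(aic, 1), round(bic, 1))   # degree 1 should win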

  2. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. At smaller area levels the sample size is insufficient, so measuring poverty indicators by direct estimation produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). EBLUP with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
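
    A minimal sketch of a REML-fitted random-intercept model of the kind that underlies EBLUP, using statsmodels on synthetic data; the variable names, area structure, and coefficients are assumptions, not the Banjar Regency data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    area = np.repeat(np.arange(20), 8)                  # 20 small areas, 8 households each
    u = rng.normal(0.0, 0.3, 20)[area]                  # area-level random effects
    x = rng.normal(size=area.size)                      # auxiliary covariate
    y = 1.0 + 0.5 * x + u + rng.normal(0.0, 0.5, area.size)
    df = pd.DataFrame({"y": y, "x": x, "area": area})

    fit = smf.mixedlm("y ~ x", df, groups=df["area"]).fit(reml=True)
    # EBLUP-style prediction per area = fixed part + predicted random intercept (BLUP).
    blups = fit.random_effects
    print(fit.params, list(blups.items())[:3])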

  3. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  4. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  5. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the geological lithological composition. Because of real-world constraints, only a limited amount of data can be acquired, so many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging approach in spatiotemporal geostatistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, using the limited hard data from cores together with soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  6. Reproducibility of isopach data and estimates of dispersal and eruption volumes

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.

    2012-12-01

    Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square-root-area plots, fitting empirical laws to the thinning relationships, and integrating over the square root of area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates, and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as by the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences arising from the hand contours of the well-constrained deposit, and to eliminate the effects of extrapolation to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume approximates to 1.76 × 10⁶ m³ (geometric mean for all methods), with maximum and minimum estimates of 2.5 × 10⁶ m³ and 1.1 × 10⁶ m³. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates with methods (A), (C), and (D) show strong correlation (r = 0.8 to r = 0.9), while the correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and the correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlations between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals, etc., and more sensitive to the selection of the technique used to integrate these data.
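
    For method (B) with a single exponential segment, the fit-and-integrate step has a closed form, V = 2·T0/k² (Fierstein and Nathenson, 1992). A sketch with invented isopach values, not the Kilauea Iki measurements:

    import numpy as np

    # Hypothetical isopach data: thickness T (m) at the square root of enclosed area (km).
    sqrt_area = np.array([1.0, 2.0, 3.5, 5.0, 7.0])
    thickness = np.array([2.0, 1.1, 0.45, 0.2, 0.06])

    # Fit T(x) = T0 * exp(-k*x) by linear regression in log space.
    slope, intercept = np.polyfit(sqrt_area, np.log(thickness), 1)
    k, T0 = -slope, np.exp(intercept)

    volume_km3 = 2.0 * (T0 / 1000.0) / k**2     # convert T0 from m to km before integrating
    print(f"V ~ {volume_km3:.3e} km^3")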

  7. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach in handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
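
    A minimal sketch contrasting near-unpenalized maximum likelihood with ridge-penalized logistic regression under multicollinearity, using scikit-learn on synthetic data (not the article's data set):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=0.05, size=200)      # nearly collinear predictor
    X = np.column_stack([x1, x2])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * x1 + 0.5 * x2))))

    mle_like = LogisticRegression(penalty="l2", C=1e10).fit(X, y)  # ~unpenalized ML
    ridge = LogisticRegression(penalty="l2", C=0.5).fit(X, y)      # shrunken, stabler
    print(mle_like.coef_, ridge.coef_)   # ML coefficients are typically erratic here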

  8. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    PubMed

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  9. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  10. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide-susceptible zones, obtained using a logistic regression model, and (4) the distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructure, buildings, and people, both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44,000, US$ 136,000, and US$ 268,000, respectively. The maximum risk to the population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.

  11. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
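
    The Firth adjustment is easiest to see in the plain logistic setting; the sketch below implements the penalized-score iteration (Firth, 1993) as a simplified stand-in for the conditional Poisson SCCS models evaluated in the paper.

    import numpy as np

    def firth_logistic(X, y, n_iter=100, tol=1e-8):
        """Firth-penalized ML for logistic regression via Newton iterations."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
            W = pi * (1.0 - pi)
            info_inv = np.linalg.inv(X.T @ (W[:, None] * X))
            A = np.sqrt(W)[:, None] * X
            h = np.einsum("ij,jk,ik->i", A, info_inv, A)   # hat-matrix leverages
            score = X.T @ (y - pi + h * (0.5 - pi))        # Firth-adjusted score
            step = info_inv @ score
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    # Perfectly separated toy data, on which ordinary ML diverges; Firth stays finite.
    X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1]])
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(firth_logistic(X, y))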

  12. Analyzing animal movements using Brownian bridges.

    PubMed

    Horne, Jon S; Garton, Edward O; Krone, Stephen M; Lewis, Jesse S

    2007-09-01

    By studying animal movements, researchers can gain insight into many of the ecological characteristics and processes important for understanding population-level dynamics. We developed a Brownian bridge movement model (BBMM) for estimating the expected movement path of an animal, using discrete location data obtained at relatively short time intervals. The BBMM is based on the properties of a conditional random walk between successive pairs of locations, dependent on the time between locations, the distance between locations, and the Brownian motion variance that is related to the animal's mobility. We describe two critical developments that enable widespread use of the BBMM, including a derivation of the model when location data are measured with error and a maximum likelihood approach for estimating the Brownian motion variance. After the BBMM is fitted to location data, an estimate of the animal's probability of occurrence can be generated for an area during the time of observation. To illustrate potential applications, we provide three examples: estimating animal home ranges, estimating animal migration routes, and evaluating the influence of fine-scale resource selection on animal movement patterns.
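
    A one-axis sketch of the Brownian bridge position density between two fixes, following the model structure described above (a linearly interpolated mean, with variance from the mobility parameter plus location error); a 2-D version multiplies the x and y densities, and all parameter values here are invented.

    import numpy as np

    def bridge_density(grid, z0, z1, t0, t1, t, sig2m, err2=0.0):
        """sig2m: Brownian motion variance (mobility); err2: location-error variance."""
        a = (t - t0) / (t1 - t0)
        mu = z0 + a * (z1 - z0)                      # expected position at time t
        var = (t1 - t0) * a * (1 - a) * sig2m + ((1 - a) ** 2 + a ** 2) * err2
        return np.exp(-(grid - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # Occupancy estimate: average the density over times between the two fixes.
    grid = np.linspace(0.0, 100.0, 201)
    occ = np.mean(
        [bridge_density(grid, 10.0, 60.0, 0.0, 1.0, t, sig2m=200.0, err2=4.0)
         for t in np.linspace(0.01, 0.99, 99)],
        axis=0,
    )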

  13. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  14. Retention Severity in the Navy: A Composite Index.

    DTIC Science & Technology

    1983-06-01

    Unfortunately, their estimates of optimum SRB award levels are applicable only to recruits with four-year obligations (4YO) and six-year obligations (6YO). ... of a maximum bonus award level of 6. Their estimates would put the maximum bonus level as high as 20 for 4YOs and 19 for 6YOs. However, the implica

  15. 78 FR 20109 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ...Meeting (i.e., webinar) training session conducted by CDC staff. We estimate the burden of this training to be a maximum of 2 hours. Respondents will only have to take this training one time. Assuming a maximum number of outbreaks of 1,400, the estimated burden for this training is 2,800 hours. The total...

  16. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  17. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  18. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  19. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodigas, Timothy J.; Hinz, Philip M.; Malhotra, Renu, E-mail: rodigas@as.arizona.edu

    Planets can affect debris disk structure by creating gaps, sharp edges, warps, and other potentially observable signatures. However, there is currently no simple way for observers to deduce a disk-shepherding planet's properties from the observed features of the disk. Here we present a single equation that relates a shepherding planet's maximum mass to the debris ring's observed width in scattered light, along with a procedure to estimate the planet's eccentricity and minimum semimajor axis. We accomplish this by performing dynamical N-body simulations of model systems containing a star, a single planet, and an exterior disk of parent bodies and dust grains to determine the resulting debris disk properties over a wide range of input parameters. We find that the relationship between planet mass and debris disk width is linear, with increasing planet mass producing broader debris rings. We apply our methods to five imaged debris rings to constrain the putative planet masses and orbits in each system. Observers can use our empirically derived equation as a guide for future direct imaging searches for planets in debris disk systems. In the fortuitous case of an imaged planet orbiting interior to an imaged disk, the planet's maximum mass can be estimated independent of atmospheric models.

  1. Wetting and spreading behaviors of impinging microdroplets on textured surfaces

    NASA Astrophysics Data System (ADS)

    Kwon, Dae Hee; Lee, Sang Joon; Center for Biofluid and Biomimic Research Team

    2012-11-01

    Textured surfaces having an array of microscale pillars have received considerable attention because of their potential use for robust superhydrophobic and superoleophobic surfaces. In many practical applications, textured surfaces are exposed to impinging small-scale droplets. To better understand the impingement phenomena on textured surfaces, the wetting and spreading behaviors of water microdroplets are investigated experimentally. Microdroplets with diameters less than 50 μm are ejected from a piezoelectric printhead with varying Weber number. The final wetting state of an impinging droplet can be estimated by comparing the wetting pressures of the droplet with the capillary pressure of the textured surface. The wetting behaviors obtained experimentally agree well with the estimated results. In addition, the transition from bouncing to non-bouncing behavior in the partially penetrated wetting state is observed. This transition implies the possibility of withdrawal of the penetrated liquid from the inter-pillar space. The maximum spreading factors (ratio of the maximum spreading diameter to the initial diameter) of the impinging droplets correlate closely with the texture area fraction of the surfaces. This work was supported by Creative Research Initiatives (Diagnosis of Biofluid Flow Phenomena and Biomimic Research) of MEST/KOSEF.

  2. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    PubMed

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and a root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with R² increasing by 6%. The performance of LUR/BME is better than that of OK/BME. The LUR/BME model is the most accurate fine-spatial-scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
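
    The hybrid structure (a regression model plus spatial interpolation of its residuals) can be sketched in simplified form, with inverse-distance weighting standing in for the BME step; the monitors, covariates, and coefficients below are all synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 100.0, (50, 2))       # monitor coordinates
    Z = rng.normal(size=(50, 3))                # e.g., road density, NDVI, elevation
    pm25 = 35.0 + Z @ np.array([4.0, -2.0, 1.5]) + rng.normal(0.0, 3.0, 50)

    D = np.column_stack([np.ones(50), Z])
    beta, *_ = np.linalg.lstsq(D, pm25, rcond=None)   # LUR fit
    resid = pm25 - D @ beta                           # space-time residuals

    def predict(xy_new, z_new, power=2.0):
        """LUR prediction plus IDW-interpolated residual at a new location."""
        w = (np.linalg.norm(xy - xy_new, axis=1) + 1e-6) ** -power
        return np.array([1.0, *z_new]) @ beta + w @ resid / w.sum()

    print(predict(np.array([50.0, 50.0]), np.zeros(3)))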

  3. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic-log gradient for shales and uses its extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated against maximum-burial-depth estimates from sonic-log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial, or the amount of erosion, for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
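
    The calibration step reduces to a simple regression; the sketch below uses invented sonic transit-time and burial-depth values purely to illustrate the mechanics of mapping erosion from one logged formation.

    import numpy as np

    # Hypothetical calibration wells: sonic delta-t of the shale (us/ft) vs.
    # maximum burial depth (m) obtained from sonic-log extrapolation.
    dt = np.array([75.0, 82.0, 88.0, 95.0, 103.0])
    z_max = np.array([3800.0, 3350.0, 2950.0, 2500.0, 2050.0])

    slope, intercept = np.polyfit(dt, z_max, 1)   # calibration regression

    def eroded_overburden(dt_well, present_depth_m):
        """Estimated erosion = predicted maximum burial minus present depth."""
        return slope * dt_well + intercept - present_depth_m

    print(eroded_overburden(85.0, 2400.0))        # estimated erosion, m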

  4. Demonstration of a conceptual model for using LiDAR to improve the estimation of floodwater mitigation potential of Prairie Pothole Region wetlands

    USGS Publications Warehouse

    Huang, S.; Young, Caitlin; Feng, M.; Heidemann, Hans Karl; Cushing, Matthew; Mushet, D.M.; Liu, S.

    2011-01-01

    Recent flood events in the Prairie Pothole Region of North America have stimulated interest in modeling water storage capacities of wetlands and their surrounding catchments to facilitate flood mitigation efforts. Accurate estimates of basin storage capacities have been hampered by a lack of high-resolution elevation data. In this paper, we developed a 0.5 m bare-earth model from Light Detection And Ranging (LiDAR) data and, in combination with National Wetlands Inventory data, delineated wetland catchments and their spilling points within a 196 km² study area. We then calculated the maximum water storage capacity of individual basins and modeled the connectivity among these basins. When compared to field survey results, catchment and spilling point delineations from the LiDAR bare-earth model captured subtle landscape features very well. Of the 11 modeled spilling points, 10 matched field survey spilling points. The comparison between observed and modeled maximum water storage had an R² of 0.87 with a mean absolute error of 5,564 m³. Since the maximum water storage capacity of basins does not translate into floodwater regulation capability, we further developed a Basin Floodwater Regulation Index. Based upon this index, the absolute and relative water that could be held by wetlands over a landscape could be modeled. This conceptual model of floodwater downstream contribution was demonstrated with water level data from 17 May 2008.

  5. Concentration dependence of biotransformation in fish liver S9: Optimizing substrate concentrations to estimate hepatic clearance for bioaccumulation assessment.

    PubMed

    Lo, Justin C; Allard, Gayatri N; Otton, S Victoria; Campbell, David A; Gobas, Frank A P C

    2015-12-01

    In vitro bioassays to estimate biotransformation rate constants of contaminants in fish are currently being investigated to improve bioaccumulation assessments of hydrophobic contaminants. The present study investigates the relationship between chemical substrate concentration and in vitro biotransformation rate of 4 environmental contaminants (9-methylanthracene, pyrene, chrysene, and benzo[a]pyrene) in rainbow trout (Oncorhynchus mykiss) liver S9 fractions and methods to determine maximum first-order biotransformation rate constants. Substrate depletion experiments using a series of initial substrate concentrations showed that in vitro biotransformation rates exhibit strong concentration dependence, consistent with a Michaelis-Menten kinetic model. The results indicate that depletion rate constants measured at initial substrate concentrations of 1 μM (a current convention) could underestimate the in vitro biotransformation potential and may cause bioconcentration factors to be overestimated if in vitro biotransformation rates are used to assess bioconcentration factors in fish. Depletion rate constants measured using thin-film sorbent dosing experiments were not statistically different from the maximum depletion rate constants derived using a series of solvent delivery-based depletion experiments for 3 of the 4 test chemicals. Multiple solvent delivery-based depletion experiments at a range of initial concentrations are recommended for determining the concentration dependence of in vitro biotransformation rates in fish liver fractions, whereas a single sorbent phase dosing experiment may be able to provide reasonable approximations of maximum depletion rates of very hydrophobic substances. © 2015 SETAC.
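
    Under the Michaelis-Menten model described above, the first-order depletion constant falls with substrate concentration, and the maximum first-order rate constant is the low-concentration limit Vmax/Km. A sketch with invented depletion data:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical depletion experiments at several initial concentrations.
    C0 = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])          # initial conc., uM
    k_dep = np.array([0.92, 0.90, 0.71, 0.55, 0.20, 0.11])   # depletion constants, 1/h

    def mm(C, Vmax, Km):
        """First-order depletion constant vs. concentration: k = Vmax/(Km + C)."""
        return Vmax / (Km + C)

    (Vmax, Km), _ = curve_fit(mm, C0, k_dep, p0=(1.0, 1.0))
    k_max = Vmax / Km    # maximum first-order rate constant as C -> 0
    print(Vmax, Km, k_max)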

  6. Transmission potential of Zika virus infection in the South Pacific.

    PubMed

    Nishiura, Hiroshi; Kinoshita, Ryo; Mizumoto, Kenji; Yasuda, Yohei; Nah, Kyeongah

    2016-04-01

    Zika virus has spread internationally through countries in the South Pacific and the Americas. The present study aimed to estimate the basic reproduction number, R0, of Zika virus infection as a measure of transmission potential, reanalyzing past epidemic data from the South Pacific. Incidence data from two epidemics, one on Yap Island, Federated States of Micronesia, in 2007 and the other in French Polynesia in 2013-2014, were reanalyzed. R0 of Zika virus infection was estimated from the early exponential growth rate of these two epidemics. The maximum likelihood estimate (MLE) of R0 for the Yap Island epidemic was on the order of 4.3-5.8, with broad uncertainty bounds due to the small sample size of confirmed and probable cases. The MLE of R0 for French Polynesia based on syndromic data ranged from 1.8 to 2.0 with narrow uncertainty bounds. The transmissibility of Zika virus infection appears to be comparable to that of dengue and chikungunya viruses. Considering that Aedes species are a shared vector, this finding indicates that Zika virus replication within the vector is perhaps comparable to dengue and chikungunya. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
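
    The growth-rate route to R0 can be made concrete with the Wallinga & Lipsitch (2007) relation R0 = 1/M(-r), where M is the moment generating function of the generation interval. A sketch assuming a gamma-distributed generation interval; the growth rate and interval moments below are placeholders, not the paper's fitted values:

      def R0_from_growth_rate(r, gen_mean, gen_sd):
          # Wallinga & Lipsitch (2007): R0 = 1 / M(-r), with M the moment
          # generating function of the generation interval. For a gamma
          # interval with shape k and scale theta, 1/M(-r) = (1 + r*theta)**k.
          k = (gen_mean / gen_sd) ** 2      # gamma shape
          theta = gen_sd ** 2 / gen_mean    # gamma scale
          return (1.0 + r * theta) ** k

      # Illustrative numbers only: a growth rate fitted to early incidence and
      # an assumed ~20 +/- 7 day vector-borne generation interval.
      print(R0_from_growth_rate(r=0.07, gen_mean=20.0, gen_sd=7.0))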

  7. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  8. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  9. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  10. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

    DTIC Science & Technology

    1981-06-29

    Samejima, Fumiko. Department of Psychology, University of Tennessee, Knoxville, TN 37916. June 1981. Prepared under contract N00014-77-C-360 with the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research.

  11. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  12. Prospects of poverty eradication through the existing Zakat system in Pakistan.

    PubMed

    Mohammad, F

    1991-01-01

    In the Muslim system, Zakat functions as a means to reduce inequalities and eradicate poverty. Zakat means growth, extension, and purification. It is a usually annual premium charged on all accumulated productive wealth and on a variety of agricultural produce. Various rates are used. In the past, Zakat was paid on a self assessed basis and given to the needy. Due to influence on Sunni Muslims, in 1980 collection and disbursement was deemed the function of an Islamic state and the state system was introduced. The formal system is described in detail. A random sample (1050) of Local Zakat Committee (LZC) members, Zakat recipients, and the general population was conducted in 1988 to see to what extent poverty has been eradicated with this system. Zakat recipients were either those receiving a subsistence allowance or those receiving funds for permanent rehabilitation. Estimates of Zakat and Ushr (for agricultural produce) received and the maximum limit to collection and the maximum potential are given by region. Estimates are also given for the number of Mustahqueen-e-Zakat (MZ) (needy) by province. The total number is 5.46 million households, or 32.22% of all households in Pakistan, which is slightly higher than other prior estimates. Those receiving Zakat number 3.967 million or 23.43% of total households. Clearly not all those in need are receiving aid. The range of needy is 18.4% to 42.58% and could include those who are not poor but qualify for receiving Zakat according to Islamic principles. Estimates are given for the shortfall in funds needed to fill the gap. Other funding is needed to retrain MZ and estimates by province are generated to this end. It is clear that the present system needs to be reformed because the estimated funding requirements exceed the potential; there is a gap in the number needing aid and those receiving aid; and there is a gap in funds secured to rehabilitate and those requesting rehabilitation. To augment the system, it is suggested that Zakat exemptions be removed, stock in trade should be included, all agricultural produce should be included, subsistence should be given to only the most poor and disabled and the rest should receive a modest amount for starting a project on an annual rotation, and greater government emphasis at all levels must be placed on eliminating poverty.

  13. The energetic and nutritional yields from insectivory for Kasekela chimpanzees.

    PubMed

    O'Malley, Robert C; Power, Michael L

    2014-06-01

    Insectivory is hypothesized to be an important source of macronutrients, minerals, and vitamins for chimpanzees (Pan troglodytes), yet nutritional data based on actual intake are lacking. Drawing on observations from 2008 to 2010 and recently published nutritional assays, we determined the energy, macronutrient and mineral yields for termite-fishing (Macrotermes), ant-dipping (Dorylus), and ant-fishing (Camponotus) by the Kasekela chimpanzees of Gombe National Park, Tanzania. We also estimated the yields from consumption of weaver ants (Oecophylla) and termite alates (Macrotermes and Pseudacanthotermes). On days when chimpanzees were observed to prey on insects, the time spent in insectivorous behavior ranged from <1 min to over 4 h. After excluding partial bouts and those of <1 min duration, ant-dipping bouts were of significantly shorter duration than the other two forms of tool-assisted insectivory but provided the highest mass intake rate. Termite-fishing bouts were of significantly longer duration than ant-dipping and had a lower mass intake rate, but provided higher mean and maximum mass yields. Ant-fishing bouts were comparable to termite-fishing bouts in duration but had significantly lower mass intake rates. Mean and maximum all-day yields from termite-fishing and ant-dipping contributed to or met estimated recommended intake (ERI) values for a broad array of minerals. The mean and maximum all-day yields of other insects consistently contributed to the ERI only for manganese. All forms of insectivory provided small but probably non-trivial amounts of fat and protein. We conclude that different forms of insectivory have the potential to address different nutritional needs for Kasekela chimpanzees. Other than honeybees, insects have received little attention as potential foods for hominins. Our results suggest that ants and (on a seasonal basis) termites would have been viable sources of fat, high-quality protein and minerals for extinct hominins employing Pan-like subsistence technology in East African woodlands. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. A biome-scale assessment of the impact of invasive alien plants on ecosystem services in South Africa.

    PubMed

    van Wilgen, B W; Reyers, B; Le Maitre, D C; Richardson, D M; Schonegevel, L

    2008-12-01

    This paper reports an assessment of the current and potential impacts of invasive alien plants on selected ecosystem services in South Africa. We used data on the current and potential future distribution of 56 invasive alien plant species to estimate their impact on four services (surface water runoff, groundwater recharge, livestock production and biodiversity) in five terrestrial biomes. The estimated reductions in surface water runoff as a result of current invasions were >3000 million m(3) (about 7% of the national total), most of which is from the fynbos (shrubland) and grassland biomes; the potential reductions would be more than eight times greater if invasive alien plants were to occupy the full extent of their potential range. Impacts on groundwater recharge would be less severe, potentially amounting to approximately 1.5% of the estimated maximum reductions in surface water runoff. Reductions in grazing capacity as a result of current levels of invasion amounted to just over 1% of the potential number of livestock that could be supported. However, future impacts could increase to 71%. A 'biodiversity intactness index' (the remaining proportion of pre-modern populations) ranged from 89% to 71% for the five biomes. With the exception of the fynbos biome, current invasions have almost no impact on biodiversity intactness. Under future levels of invasion, however, these intactness values decrease to around 30% for the savanna, fynbos and grassland biomes, but to even lower values (13% and 4%) for the two karoo biomes. Thus, while the current impacts of invasive alien plants are relatively low (with the exception of those on surface water runoff), the future impacts could be very high. While the errors in these estimates are likely to be substantial, the predicted impacts are sufficiently large to suggest that there is serious cause for concern.

  15. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  16. Unconventional shale-gas systems: The Mississippian Barnett Shale of north-central Texas as one model for thermogenic shale-gas assessment

    USGS Publications Warehouse

    Jarvie, D.M.; Hill, R.J.; Ruble, T.E.; Pollastro, R.M.

    2007-01-01

    Shale-gas resource plays can be distinguished by gas type and system characteristics. The Newark East gas field, located in the Fort Worth Basin, Texas, is defined by thermogenic gas production from low-porosity and low-permeability Barnett Shale. The Barnett Shale gas system, a self-contained source-reservoir system, has generated large amounts of gas in the key productive areas because of various characteristics and processes, including (1) excellent original organic richness and generation potential; (2) primary and secondary cracking of kerogen and retained oil, respectively; (3) retention of oil for cracking to gas by adsorption; (4) porosity resulting from organic matter decomposition; and (5) brittle mineralogical composition. The calculated total gas in place (GIP) based on estimated ultimate recovery that is based on production profiles and operator estimates is about 204 bcf/section (5.78 × 10^9 m3/1.73 × 10^4 m3). We estimate that the Barnett Shale has a total generation potential of about 609 bbl of oil equivalent/ac-ft or the equivalent of 3657 mcf/ac-ft (84.0 m3/m3). Assuming a thickness of 350 ft (107 m) and only sufficient hydrogen for partial cracking of retained oil to gas, a total generation potential of 820 bcf/section is estimated. Of this potential, approximately 60% was expelled, and the balance was retained for secondary cracking of oil to gas, if sufficient thermal maturity was reached. Gas storage capacity of the Barnett Shale at typical reservoir pressure, volume, and temperature conditions and 6% porosity shows a maximum storage capacity of 540 mcf/ac-ft or 159 scf/ton. Copyright © 2007. The American Association of Petroleum Geologists. All rights reserved.

  17. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  18. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
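
    The MLE-versus-informative-prior contrast in this abstract can be illustrated with the simplest conjugate case, a normal mean with known variance, where the posterior mean shrinks the small-sample MLE toward the prior. A toy sketch with invented prior and data, not the paper's longitudinal models:

      import numpy as np

      rng = np.random.default_rng(1)
      y = rng.normal(loc=2.0, scale=1.0, size=8)   # a very small sample

      # Maximum likelihood estimate of the mean (variance treated as known).
      mle = y.mean()

      # Bayesian posterior with an informative conjugate prior N(mu0, tau0^2);
      # the prior values are invented for illustration.
      mu0, tau0, sigma = 1.5, 0.5, 1.0
      n = len(y)
      post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
      post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)

      print(f"MLE: {mle:.2f}")
      print(f"posterior: {post_mean:.2f} +/- {post_var**0.5:.2f}")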

  19. Model analysis and electrical characterization of atmospheric pressure cold plasma jet in pin electrode configuration

    NASA Astrophysics Data System (ADS)

    Deepak, G. Divya; Joshi, N. K.; Prakash, Ram

    2018-05-01

    In this study, both model analysis and electrical characterization of a dielectric barrier discharge based argon plasma jet have been carried out at atmospheric pressure in a pin electrode configuration. The plasma and fluid dynamics modules of the COMSOL multi-physics code have been used for modeling the plasma jet. The plasma parameters, such as electron density, electron temperature, and electric potential, have been analyzed with respect to the electrical parameters, i.e., supply voltage and supply frequency, with and without the flow of gas. In all the experiments, the gas flow rate has been kept constant at 1 liter per minute. This electrode configuration is subjected to a range of supply frequencies (10-25 kHz) and supply voltages (3.5-6.5 kV). The power consumed by the device has been estimated for different combinations of supply voltage and frequency to determine the optimum power consumption at maximum jet length. The maximum power consumed by the device in this configuration for the maximum jet length of ~26 mm is just ~1 W.

  20. ESTIMATE OF SOLAR MAXIMUM USING THE 1-8 Å GEOSTATIONARY OPERATIONAL ENVIRONMENTAL SATELLITES X-RAY MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, L. M.; Balasubramaniam, K. S., E-mail: lwinter@aer.com

    We present an alternate method of determining the progression of the solar cycle through an analysis of the solar X-ray background. Our results are based on the NOAA Geostationary Operational Environmental Satellites (GOES) X-ray data in the 1-8 Å band from 1986 to the present, covering solar cycles 22, 23, and 24. The X-ray background level tracks the progression of the solar cycle through its maximum and minimum. Using the X-ray data, we can therefore make estimates of the solar cycle progression and the date of solar maximum. Based upon our analysis, we conclude that the Sun reached its hemisphere-averaged maximum in solar cycle 24 in late 2013. This is within six months of the NOAA prediction of a maximum in spring 2013.

  1. Development of simple-to-apply biogas kinetic models for the co-digestion of food waste and maize husk.

    PubMed

    Owamah, H I; Izinyon, O C

    2015-10-01

    Biogas kinetic models are often used to characterize substrate degradation and to predict biogas production potential. Most existing models are, however, difficult to apply to substrates they were not developed for, since their applications are usually substrate specific. A biodegradability kinetic (BIK) model and a maximum biogas production potential and stability assessment (MBPPSA) model were therefore developed in this study for a better understanding of the anaerobic co-digestion of food waste and maize husk for biogas production. The biodegradability constant (k) was estimated as 0.11 d(-1) using the BIK model. The maximum biogas production potential (A) obtained using the MBPPSA model was found to be in good correspondence, both in value and trend, with the results obtained using the popular but complex modified Gompertz model for digesters B-1, B-2, B-3, B-4, and B-5. The (If) value of the MBPPSA model also showed that digesters B-3, B-4, and B-5 were stable, while B-1 and B-2 were inhibited/unstable. A similar stability observation was also obtained using the modified Gompertz model. The MBPPSA model can therefore be used as an alternative model for anaerobic digestion feasibility studies and plant design. Copyright © 2015 Elsevier Ltd. All rights reserved.
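
    For reference, the modified Gompertz model the authors benchmark against is B(t) = A exp(-exp(Rm e (lam - t)/A + 1)), with A the maximum biogas production potential, Rm the maximum production rate, and lam the lag time. A fitting sketch with invented cumulative-yield data:

      import numpy as np
      from scipy.optimize import curve_fit

      def modified_gompertz(t, A, Rm, lam):
          # Cumulative biogas yield: A = maximum production potential,
          # Rm = maximum production rate, lam = lag phase (same time units).
          return A * np.exp(-np.exp(Rm * np.e / A * (lam - t) + 1.0))

      # Hypothetical cumulative biogas data (mL/g VS) over 30 days.
      t = np.arange(0, 31, 3.0)
      B = np.array([0, 15, 60, 140, 220, 280, 320, 345, 358, 365, 368.0])
      (A, Rm, lam), _ = curve_fit(modified_gompertz, t, B, p0=(370.0, 20.0, 3.0))
      print(f"A={A:.0f} mL/gVS, Rm={Rm:.1f} mL/gVS/d, lag={lam:.1f} d")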

  2. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Peter

    An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
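
    The cleavage-strength calculation has a simple closed form for the Morse-type interaction: with interplanar potential U(x) = D(1 - exp(-a x))^2, the traction sigma(x) = dU/dx peaks at sigma_max = D a / 2. A numerical check under assumed, illustrative parameters, not the paper's fitted values:

      import numpy as np

      # Morse-type interplanar potential U(x) = D * (1 - exp(-a*x))**2, where
      # x is the opening of the cleavage plane, D the work of separation per
      # unit area, and 1/a a decay length (both values assumed).
      D = 5.0        # J/m^2
      a = 2.0e10     # 1/m

      x = np.linspace(0.0, 1e-9, 20001)
      sigma = 2.0 * D * a * (1.0 - np.exp(-a * x)) * np.exp(-a * x)  # dU/dx

      print(f"numeric max:        {sigma.max() / 1e9:.2f} GPa")
      print(f"closed form D*a/2:  {D * a / 2.0 / 1e9:.2f} GPa")  # peak where exp(-a*x) = 1/2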

  4. Evolution of the climatic tolerance and postglacial range changes of the most primitive orchids (Apostasioideae) within Sundaland, Wallacea and Sahul.

    PubMed

    Kolanowska, Marta; Mystkowska, Katarzyna; Kras, Marta; Dudek, Magdalena; Konowalik, Kamil

    2016-01-01

    The location of possible glacial refugia of six Apostasioideae representatives is estimated based on ecological niche modeling analysis. The distribution of their suitable niches during the last glacial maximum (LGM) is compared with their current potential and documented geographical ranges. The climatic factors limiting the studied species occurrences are evaluated and the niche overlap between the studied orchids is assessed and discussed. The predicted niche occupancy profiles and reconstruction of ancestral climatic tolerances suggest high level of phylogenetic niche conservatism within Apostasioideae.

  5. Opportunities for increasing natural gas production in the near term. Volume VI. The East Cameron Block 271 Field. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-02-01

    This report examines the potential for increasing the rate of production of natural gas from the East Cameron Block 271 Field in the Gulf of Mexico Outer Continental Shelf. Proved reserves are estimated using all available reservoir data, including well logs and pressure tests, and cost parameters typical in the area. Alternative schedules for future production are devised, and net present values calculated from which the maximum production rate that also maximizes net present value is determined.

  6. Multivariate regression methods for estimating velocity of ictal discharges from human microelectrode recordings

    NASA Astrophysics Data System (ADS)

    Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.

    2017-08-01

    Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically-observable EEG data, with a variety of straightforward computation methods available. This opens possibilities for systematic assessments of ictal discharge propagation in clinical and research settings.
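
    The field-potential estimators named above all reduce to regressing a per-electrode delay time (e.g., the negative-peak time) on electrode position: the gradient of the fitted plane is the slowness vector, whose inverse magnitude is the wave speed. A least-squares sketch on synthetic delays; least absolute deviation could replace lstsq for the robustness the study recommends:

      import numpy as np

      def wave_velocity(xy, delays):
          # Fit delay times (s) at electrode positions xy (m, shape (n, 2))
          # with a plane t = b0 + s . xy; s is the slowness vector, so
          # speed = 1/|s| and propagation points along s.
          X = np.column_stack([np.ones(len(delays)), xy])
          b0, sx, sy = np.linalg.lstsq(X, delays, rcond=None)[0]
          return 1.0 / np.hypot(sx, sy), np.degrees(np.arctan2(sy, sx))

      # Synthetic check: a wave moving at 0.25 m/s toward +x on a 4 mm array.
      xs, ys = np.meshgrid(np.arange(10) * 4e-4, np.arange(10) * 4e-4)
      xy = np.column_stack([xs.ravel(), ys.ravel()])
      delays = xy[:, 0] / 0.25 + np.random.default_rng(0).normal(0, 1e-4, 100)
      speed, direction = wave_velocity(xy, delays)
      print(f"{speed:.3f} m/s toward {direction:.1f} deg")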

  7. Antecedent wetness conditions based on ERS scatterometer data

    NASA Astrophysics Data System (ADS)

    Brocca, L.; Melone, F.; Moramarco, T.; Morbidelli, R.

    2009-01-01

    Soil moisture is widely recognized as a key parameter in environmental processes, mainly for its role in partitioning rainfall into runoff and infiltration. Therefore, for storm rainfall-runoff modeling, the estimation of antecedent wetness conditions (AWC) is one of the most important aspects. In this context, this study investigates the potential of the scatterometer on board the ERS satellites for the assessment of wetness conditions in three Tiber sub-catchments (Central Italy), of which one includes an experimental area for soil moisture monitoring. The satellite soil moisture data are taken from the ERS/METOP soil moisture archive. First, the scatterometer-derived soil wetness index (SWI) data are compared with two on-site soil moisture data sets acquired by different methodologies over areas ranging in extent from 0.01 km2 to ~60 km2. Moreover, the reliability of the SWI for estimating AWC at the catchment scale is investigated by considering the relationship between the SWI and the soil potential maximum retention parameter, S, of the Soil Conservation Service-Curve Number (SCS-CN) method for abstraction. Several flood events that occurred from 1992 to 2005 are selected for this purpose. Specifically, the performance of the SWI for S estimation is compared with two antecedent precipitation indices (API) and one base flow index (BFI). The S values obtained from the observed direct runoff volume and rainfall depth are used as the benchmark. Results show the high reliability of the SWI for the estimation of wetness conditions at both the plot and catchment scale, despite the complex orography of the investigated areas. As far as the comparison with the on-site soil moisture data sets is concerned, the SWI is found to be quite reliable in representing soil moisture at a layer depth of 15 cm, with a mean correlation coefficient equal to 0.81. The variation of the characteristic time length parameter, as expected, depends on soil type, with values in accordance with previous studies. In terms of AWC assessment at the catchment scale, based on the selected flood events, the SWI is found to be highly correlated with the observed maximum potential retention of the SCS-CN method, with a correlation coefficient R equal to -0.90. Moreover, in representing the AWC of the three investigated catchments, the SWI outperformed both API indices, which poorly represented AWC, and the BFI. Finally, the classical SCS-CN method applied for direct runoff depth estimation, with S assessed by the SWI, provided good performance, with a percentage error not exceeding ~25% for 80% of the investigated rainfall-runoff events.
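
    For reference, the SCS-CN relation used above is Q = (P - 0.2S)^2/(P + 0.8S) for P > 0.2S, and it inverts to S = 5(P + 2Q - sqrt(4Q^2 + 5PQ)), so that event rainfall and direct runoff yield an observed S to correlate against the SWI. A sketch with invented event totals:

      import numpy as np

      def runoff_scs_cn(P, S):
          # SCS-CN direct runoff (mm) for rainfall P (mm) and maximum
          # potential retention S (mm), with the usual Ia = 0.2*S abstraction.
          Ia = 0.2 * S
          return np.where(P > Ia, (P - Ia) ** 2 / (P + 0.8 * S), 0.0)

      def retention_from_event(P, Q):
          # Invert the SCS-CN equation: observed S from event rainfall P and
          # direct runoff Q (both mm).
          return 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q**2 + 5.0 * P * Q))

      P, Q = 100.0, 57.9                       # illustrative event totals (mm)
      S = retention_from_event(P, Q)
      print(f"S = {S:.1f} mm")                 # wetter antecedent state -> smaller S
      print(f"check: Q = {float(runoff_scs_cn(P, S)):.1f} mm")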

  8. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  9. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  10. Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels

    USGS Publications Warehouse

    Pruess, J.; Wohl, E.E.; Jarrett, R.D.

    1998-01-01

    Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m3 s-1 km-2 around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.

  11. Psychometric Properties of IRT Proficiency Estimates

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Tong, Ye

    2010-01-01

    Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…

  12. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasia regions: northern China and fUSSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) were estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
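
    A sketch of the regression step described above: one linear fit of maximum T(sub a) on daytime T(sub s) per land cover class, with MAE as the uncertainty measure. All data below are synthetic; per the abstract, minimum T(sub a) would use a single pooled fit instead:

      import numpy as np

      # Synthetic match-ups: station maximum air temperature Ta (C), collocated
      # MODIS daytime LST Ts (C), and a land-cover class code per pair.
      rng = np.random.default_rng(0)
      lc = rng.integers(0, 3, 500)
      ts = rng.uniform(-5, 40, 500)
      ta = 0.8 * ts + 2.0 + 0.5 * lc + rng.normal(0, 2.5, 500)

      for c in np.unique(lc):                  # one regression per land cover
          m = lc == c
          slope, intercept = np.polyfit(ts[m], ta[m], 1)
          mae = np.abs(slope * ts[m] + intercept - ta[m]).mean()
          print(f"class {c}: Ta = {slope:.2f}*Ts + {intercept:.2f}, MAE {mae:.1f} C")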

  13. Average Potential Temperature of the Upper Mantle and Excess Temperatures Beneath Regions of Active Upwelling

    NASA Astrophysics Data System (ADS)

    Putirka, K. D.

    2006-05-01

    The question as to whether any particular oceanic island is the result of a thermal mantle plume is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling, driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwellings will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belies this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ± 59°C. A global database of 22,000 MORB shows that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD (Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and overlaps within 1 sigma with estimates from phase transitions at the 410 km (Jeanloz and Thompson, 1983) and 670 km (Hirose, 2002) seismic discontinuities. Variations in MORB FeOliq can be used to calculate the variance of TpMORB. FeOliq variations in global MORB show that 95% of the sub-MORB mantle has a T range of 165°C; 68% of MORB fall within temperature variations of ±30°C. In comparison, Tp estimates at Hawaii and Iceland are 1706°C and 1646°C respectively, and hence Te is 248°C at Hawaii and 188°C at Iceland. Tp estimates at Hawaii and Iceland also exceed maximum Tp estimates at MORs (at the 95% level) by 171 and 111°C respectively. These Te are in agreement with estimates derived from excess topography and dynamic models of mantle flow and melt generation (e.g., Sleep, 1990; Schilling, 1991; Ito et al., 1999). A clear result is that Hawaii and Iceland are hot relative to MORB. Rayleigh number calculations further show that for these Te, critical depths (i.e., the depths at which Ra > 1000) are < 130 km. Hawaii and Iceland are thus almost assuredly the result of thermally driven, active upwellings, or mantle plumes.

  14. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T (sub o) = 1.14 sec for the maximum overall level, and T (sub oi) = 4.88 f (sub i) (exp -0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T (sub o) = 1.65 sec for the maximum overall level, and T (sub oi) = 7.10 f (sub i) (exp -0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f (sub i) is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.

  15. Estimating potency for the Emax-model without attaining maximal effects.

    PubMed

    Schoemaker, R C; van Gerven, J M; Cohen, A F

    1998-10-01

    The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum, because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters of the Emax model can only be estimated with reasonable precision if the curve shows signs of reaching a maximum; otherwise, both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0), equal to Emax/EC50, that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter, and published examples are used as illustration.
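
    The point is easy to reproduce: when data cover only the rising part of the curve, Emax and EC50 are individually unstable but their ratio S0 = Emax/EC50 (the initial slope) is well determined. A curve-fitting sketch with invented concentration-effect data:

      import numpy as np
      from scipy.optimize import curve_fit

      def emax(C, Emax, EC50):
          # Standard Emax model: effect rises hyperbolically toward Emax.
          return Emax * C / (EC50 + C)

      # Hypothetical data covering only the low end of the curve (no plateau
      # reached), the situation the paper addresses.
      C = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
      E = np.array([0.0, 4.8, 9.4, 17.5, 31.0, 50.5])

      (Emax_hat, EC50_hat), _ = curve_fit(emax, C, E, p0=(100.0, 10.0),
                                          maxfev=10000)
      S0 = Emax_hat / EC50_hat    # initial slope: stable even without a plateau
      print(f"Emax={Emax_hat:.0f} (imprecise), EC50={EC50_hat:.1f} (imprecise), "
            f"S0={S0:.2f} per unit concentration")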

  16. The potential vulnerability of the Namib and Nama Aquifers due to low recharge levels in the area surrounding the Naukluft Mountains, SW Namibia

    NASA Astrophysics Data System (ADS)

    Kambinda, Winnie N.; Mapani, Benjamin

    2017-12-01

    The Naukluft Mountains in the Namib Desert are a high rainfall-high discharge area, with increased stream flow, spring flow, and waterfalls during the rainy season. The mountains are a major resource for additional recharge to the Namib and Nama aquifers adjacent to the mountains. This paper aimed to highlight the potential vulnerability of the aquifers surrounding the Naukluft Mountain area should the Naukluft Karst Aquifer (NKA) become strategically important for bulk water supply. The Chloride Mass Balance Method (CMBM) was applied to estimate rainfall available for recharge as well as the actual recharge thereof, using chloride concentrations in precipitation, borehole, and spring samples collected from the study area. Groundwater flow patterns were mapped from hydraulic head values, and a 2D digital elevation model was developed using Arc-GIS. Results highlighted the influence of the NKA on regional groundwater flow. This paper found that groundwater flow was controlled by structural dip and elevation. Groundwater was observed to flow predominantly from the NKA to the southwest towards the Namib Aquifer in two distinct flow patterns that separate at the center of the NKA; a distinct groundwater divide was defined between the two flow patterns. A minor flow pattern from the northern parts of the NKA to the northeast towards the Nama Aquifer was validated. Due to the substantial water losses, the NKA is not a typical karst aquifer. While the project area receives an average rainfall of 170.36 mm/a, it was estimated that 1-14.24% of rainfall (maximum 24.43 mm/a) was available for recharge to the NKA. Actual recharge to the NKA was estimated at less than 1-18.21% (maximum 4.45 mm/a), reflecting the vast losses incurred by the NKA via discharge. This paper concluded that the groundwater resources of the NKA are potentially finite. Developing the aquifer for bulk water supply would therefore drastically lower recharge to the surrounding aquifers that sustain local populations, because all received rainfall would be used to maximize recharge to the NKA instead of the surrounding aquifers.
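
    The CMBM recharge estimate itself is a one-line balance: recharge ≈ P × Cl_rain / Cl_groundwater, on the assumption that chloride arrives only in rainfall and is concentrated by evapotranspiration before reaching the water table. A sketch with illustrative chloride concentrations, not the study's measurements:

      def cmb_recharge(P_mm, cl_rain, cl_gw):
          # Chloride mass balance: recharge (mm/a) = P * Cl_rain / Cl_gw,
          # assuming chloride enters only with rainfall and is concentrated
          # by evapotranspiration before reaching the water table.
          return P_mm * cl_rain / cl_gw

      # Illustrative values in the spirit of the study (not its data):
      P = 170.36                                        # mean rainfall, mm/a
      print(cmb_recharge(P, cl_rain=0.8, cl_gw=30.0))   # -> ~4.5 mm/a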

  17. [Estimation of Maximum Entrance Skin Dose during Cerebral Angiography].

    PubMed

    Kawauchi, Satoru; Moritake, Takashi; Hayakawa, Mikito; Hamada, Yusuke; Sakuma, Hideyuki; Yoda, Shogo; Satoh, Masayuki; Sun, Lue; Koguchi, Yasuhiro; Akahane, Keiichi; Chida, Koichi; Matsumaru, Yuji

    2015-09-01

    Using radio-photoluminescence glass dosimeters, we measured the entrance skin dose (ESD) in 46 cases and analyzed the correlations between maximum ESD and angiographic parameters [total fluoroscopic time (TFT); number of digital subtraction angiography (DSA) frames, air kerma at the interventional reference point (AK), and dose-area product (DAP)] to estimate the maximum ESD in real time. Mean (± standard deviation) maximum ESD, dose to the right lens, and dose to the left lens were 431.2 ± 135.8 mGy, 33.6 ± 15.5 mGy, and 58.5 ± 35.0 mGy, respectively. Correlation coefficients (r) between maximum ESD and TFT, number of DSA frames, AK, and DAP were r=0.379 (P<0.01), r=0.702 (P<0.001), r=0.825 (P<0.001), and r=0.709 (P<0.001), respectively. AK was identified as the most useful parameter for real-time prediction of maximum ESD. This study should contribute to the development of new diagnostic reference levels in our country.

  18. The maximum economic depth of groundwater abstraction for irrigation

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
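
    A minimal sketch of the economic comparison described above: the maximum economic depth is the deepest level at which crop revenue still covers depth-dependent pumping energy plus annualized well costs. All parameters below are invented placeholders, not the study's calibrated cost model:

      import numpy as np

      def max_economic_depth(revenue, drill_cost_per_m, energy_cost_per_m3_per_m,
                             demand_m3, depths=np.arange(10.0, 1000.0, 10.0)):
          # Deepest depth (m) at which irrigation still pays: annual revenue
          # must cover the energy cost of lifting the demanded volume plus an
          # annualized well cost that grows with depth.
          pumping = energy_cost_per_m3_per_m * demand_m3 * depths
          well = drill_cost_per_m * depths / 20.0    # annualized over ~20 years
          ok = revenue >= pumping + well
          return depths[ok].max() if ok.any() else np.nan

      print(max_economic_depth(revenue=150_000.0,       # $/a from irrigated crops
                               drill_cost_per_m=300.0,  # $/m of well
                               energy_cost_per_m3_per_m=0.004,  # $ per m3 per m of lift
                               demand_m3=5.0e5))        # m3/a gross demand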

  19. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    USGS Publications Warehouse

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10–3 in scientific notation or for brevity 10–3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10–3 to 10–6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
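
    A bare-bones version of the extrapolation being discussed: fit a Pearson type III distribution to the base-10 logarithms of annual peaks by simple product moments and read off quantiles at very low AEPs. A real analysis would use EMA/PeakFQ, regional skew information, and the actual streamgage record; the data here are synthetic:

      import numpy as np
      from scipy.stats import pearson3, skew

      # Hypothetical annual peak flows (m3/s) standing in for a gage record.
      rng = np.random.default_rng(42)
      peaks = np.exp(rng.normal(6.0, 0.6, size=90))

      logq = np.log10(peaks)
      g = skew(logq, bias=False)              # station skew of the log10 flows

      for aep in (1e-2, 1e-3, 1e-4, 1e-6):
          # Pearson type III quantile of the log10 flows, then back-transform.
          k = pearson3.ppf(1.0 - aep, g)      # standardized frequency factor
          q = 10 ** (logq.mean() + k * logq.std(ddof=1))
          print(f"AEP {aep:.0e}: {q:,.0f} m3/s")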

  20. Floating plastic debris in the Central and Western Mediterranean Sea.

    PubMed

    Ruiz-Orejón, Luis F; Sardá, Rafael; Ramis-Pujol, Juan

    2016-09-01

    In two sea voyages throughout the Mediterranean (2011 and 2013) that repeated the historical travels of Archduke Ludwig Salvator of Austria (1847-1915), 71 samples of floating plastic debris were obtained with a Manta trawl. Floating plastic was observed in all the sampled sites, with an average weight concentration of 579.3 g dw km(-2) (maximum value of 9298.2 g dw km(-2)) and an average particle concentration of 147,500 items km(-2) (the maximum concentration was 1,164,403 items km(-2)). The plastic size distribution showed microplastics (<5 mm) in all the samples. The most abundant particles had a surface area of approximately 1 mm(2) (the mesh size was 333 μm). The general estimate obtained was a total value of 1455 tons dw of floating plastic in the entire Mediterranean region, with various potential spatial accumulation areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    PubMed

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates that are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
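
    For the fully visible BM on {-1,+1}^d with P(x) proportional to exp(0.5 x'Wx + b'x), each unit's conditional is logistic in its local field, and the pseudolikelihood gradient has the closed form used below. A plain gradient-ascent sketch of this estimator class, as an illustration rather than the brief's implementation:

      import numpy as np

      def mple_fvbm(X, iters=2000, lr=0.05):
          # Maximum pseudolikelihood for a fully visible Boltzmann machine:
          # W symmetric with zero diagonal, b the bias vector, and
          # P(x_i = 1 | x_-i) = sigmoid(2 * (b_i + W[i] @ x)).
          n, d = X.shape
          W, b = np.zeros((d, d)), np.zeros(d)
          for _ in range(iters):
              field = X @ W + b              # n x d local fields
              resid = X - np.tanh(field)     # d(log PL)/d(field) per sample
              gb = resid.mean(0)
              # Symmetric W gradient (constant factors absorbed into lr).
              gW = (resid.T @ X + X.T @ resid) / (2 * n)
              np.fill_diagonal(gW, 0.0)
              W += lr * gW
              b += lr * gb
          return W, b

      # Sanity check on synthetic data: independent +/-1 spins with P(+1)=0.7,
      # so couplings should be near 0 and biases near atanh(0.4) ~ 0.42.
      X = np.where(np.random.default_rng(0).random((500, 5)) < 0.7, 1.0, -1.0)
      W, b = mple_fvbm(X)
      print(np.round(b, 2))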

  2. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  3. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  4. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  5. Fusion of real-time simulation, sensing, and geo-informatics in assessing tsunami impact

    NASA Astrophysics Data System (ADS)

    Koshimura, S.; Inoue, T.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.; Gokon, H.

    2015-12-01

    Bringing together state-of-the-art high-performance computing, remote sensing, and spatial information sciences, we establish a method of real-time tsunami inundation forecasting, damage estimation, and mapping to enhance disaster response. Right after a major (near-field) earthquake is triggered, we perform real-time tsunami inundation forecasting using a high-performance computing platform (Koshimura et al., 2014). Using Tohoku University's vector supercomputer, we accomplished the "10-10-10 challenge": completing tsunami source determination in 10 minutes and tsunami inundation modeling in 10 minutes at 10 m grid resolution. Given the maximum flow depth distribution, we perform quantitative estimation of the exposed population using census data and mobile phone data, and estimate the numbers of potential deaths and damaged structures by applying tsunami fragility curves. After the potential tsunami-affected areas are estimated, the analysis narrows and moves on to the "detection" phase using remote sensing. Recent advances in remote sensing technologies expand the capability to detect the spatial extent of tsunami-affected areas and structural damage. In particular, a semi-automated method to estimate building damage in tsunami-affected areas is developed using pre- and post-event high-resolution SAR (Synthetic Aperture Radar) data. The method is verified through case studies of the 2011 Tohoku event and other potential tsunami scenarios, and prototype system development is now underway in Kochi prefecture, an at-risk coastal region facing a Nankai trough earthquake. In the trial operation, we verify the capability of the method as a new tsunami early warning and response system for stakeholders and responders.

  6. Distribution coefficient values describing iodine, neptunium, selenium, technetium, and uranium sorption to Hanford sediments. Supplement 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, D.I.; Serne, R.J.

    1995-03-01

    Burial of vitrified low-level waste (LLW) in the vadose zone of the Hanford Site is being considered as a long-term disposal option. Regulations dealing with LLW disposal require that performance assessment (PA) analyses be conducted. Preliminary modeling efforts for the Hanford Site LLW PA were conducted to evaluate the potential health risk of a number of radionuclides, including Ac, Am, C, Ce, Cm, Co, Cs, Eu, I, Nb, Ni, Np, Pa, Pb, Pu, Ra, Ru, Se, Sn, Sr, Tc, Th, U, and Zr (Piepho et al. 1994). The radionuclides ¹²⁹I, ²³⁷Np, ⁷⁹Se, ⁹⁹Tc, and ²³⁴,²³⁵,²³⁸U were identified as posing the greatest potential health hazard. It was also determined that the outcome of these simulations was very sensitive to the parameter describing the extent to which radionuclides sorb to the subsurface matrix, described as a distribution coefficient (K_d). The distribution coefficient is the ratio of the radionuclide concentration associated with the solid phase to that in the liquid phase. The literature-derived K_d values used in these simulations were conservative, i.e., the lowest values within the range of reasonable values, used to provide an estimate of the maximum health threat. Thus, these preliminary modeling results reflect a conservative estimate rather than a best estimate of what is likely to occur. The potential problem with providing only a conservative estimate is that it may mislead us into directing resources to resolve nonexistent problems.

  7. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    PubMed

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in the exercise test of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied under the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the two methods: the 220-age formula versus the Sheffield table. The maximum heart rate was similar with both protocols. This parameter in normal individuals is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
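
    For reference, the 220-age rule compared above is a one-line computation (the example age and output are arbitrary):

    ```python
    def predicted_hr_max(age_years: int) -> int:
        """Astrand-style estimate of maximum heart rate, beats per minute."""
        return 220 - age_years

    print(predicted_hr_max(45))  # -> 175 bpm
    ```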

  8. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  9. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  10. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Treesearch

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees per acre by species and by 2-inch diameter at breast height class, delivered product prices by species and product...

  11. Minimax estimation of qubit states with Bures risk

    NASA Astrophysics Data System (ADS)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/\sqrt{n} for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allows us to derive upper and lower bounds to its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the ‘standard’ rate n⁻¹.

  12. Estimating landscape carrying capacity through maximum clique analysis

    USGS Publications Warehouse

    Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.

    2012-01-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in a 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
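
    A hedged sketch of the maximum clique idea described above, using networkx rather than Cliquer: points, the territory radius, and the compatibility rule (non-overlapping circular territories) are illustrative assumptions, not the authors' HS-map workflow.

    ```python
    import itertools
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)
    points = rng.uniform(0, 10_000, size=(40, 2))  # candidate home-range centers (m)
    radius = 1_200.0                               # hypothetical territory radius (m)

    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        # two points are compatible if their circular territories would not overlap
        if np.linalg.norm(points[i] - points[j]) >= 2 * radius:
            G.add_edge(i, j)

    clique, size = nx.max_weight_clique(G, weight=None)  # exact; exponential worst case
    print("estimated carrying capacity N(k):", size)
    ```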

  13. Nutrient Removal through Oyster Habitat Restoration in the Indian River Lagoon, Florida

    NASA Astrophysics Data System (ADS)

    Gallagher, S. M.; Schmidt, C. A.; Walters, L.; Blank, R.

    2017-12-01

    In 2016, an algae bloom in the Indian River Lagoon (IRL) caused a state of emergency in Florida. As with many estuaries, nutrient loading in the IRL has led to periodic eutrophication. While previous studies have shown oyster bed restoration reduces suspended organic matter in estuaries, similar reductions to net nutrient loads are not well established. In addition, previous studies have focused on seasonal variation rather than ongoing yearly effects. Here, we determine the net nitrogen and phosphorus effects of oyster restoration in the IRL over seven years. Analysis of aerial images from 1943 and 2009 showed 14.7 ha of oyster beds were destroyed by boat traffic in the IRL (40% loss). According to our measurements of restored oyster bed sediment, this equates to a maximum of 1,580,000 kg•N•yr⁻¹ of lost denitrification potential; this is equivalent to 150% of estimated current nitrogen loading in the IRL. Oyster restoration began in the IRL in 2007 and has recovered 7.7% of the lost beds and denitrification potential (1.13 ha and 107,000 kg•N•yr⁻¹•ha⁻¹). In all cases, denitrification reached a maximum within two years and remained significantly higher than open sediment for at least the seven years observed. Denitrification benefits came at the cost of mobilizing a maximum of 3450 kg ha⁻¹ of recalcitrant phosphorus from restored bed sediment. This effect was limited to the two years following restoration, whereas increased denitrification was ongoing. Overall, our results show oyster restoration achieved maximum denitrification within two years and maintained significant denitrification benefits for at least seven years. In addition, our results are useful for future oyster restoration projects since they quantify nitrogen benefits in terms of phosphorus mobilization.

  14. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
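
    The sketch below shows the generic shape of a MAP/MBIR iteration: minimize a data-fidelity term plus a prior term by gradient descent. The toy forward operator, first-difference prior, and all sizes are illustrative stand-ins, not the authors' LTEM forward model or prior.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 48
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy forward operator
    x_true = np.cumsum(rng.normal(size=n)) * 0.1
    y = A @ x_true + 0.05 * rng.normal(size=m)

    D = np.eye(n) - np.eye(n, k=1)             # first-difference (smoothness) prior
    sigma2, beta = 0.05 ** 2, 1.0

    # MAP cost: ||Ax - y||^2 / (2 sigma^2) + beta * ||Dx||^2, minimized by
    # gradient descent with a step from a Lipschitz bound on the gradient.
    L = np.linalg.norm(A, 2) ** 2 / sigma2 + 8.0 * beta
    x = np.zeros(n)
    for _ in range(2000):
        grad = A.T @ (A @ x - y) / sigma2 + 2.0 * beta * (D.T @ (D @ x))
        x -= grad / L
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```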

  15. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  17. Potential ability of zeolite to generate high-temperature vapor using waste heat

    NASA Astrophysics Data System (ADS)

    Fukai, Jun; Wijayanta, Agung Tri

    2018-02-01

    In many materials-producing industries, large amounts of high-temperature steam are generated from fossil fuels as heat sources, while the thermal energy retained by condensed water below 100°C is wasted. The thermal energy retained by exhaust gases below 200°C is also wasted. Effective utilization of waste heat is considered one of the important issues in solving global energy and environmental problems. In this study, zeolite/water adsorption systems are introduced to recover such low-temperature waste heat. First, an adsorption steam recovery system was developed to generate high-temperature steam from unused hot waste heat. The system exploits a new principle in which the adsorption heat released on zeolite/water contact is efficiently extracted. A bench-scale system was constructed, demonstrating continuous generation of saturated steam at nearly 150°C from hot water at 80°C. Energy conservation is expected by returning the generated steam to the steam lines of the production processes. Second, a laboratory-scale setup demonstrated that superheated steam/vapor above 200°C could be generated from steam at nearly 120°C. The maximum temperature and the time variation of the output temperature were successfully estimated using macroscopic heat balances. Lastly, the maximum temperatures attainable when saturated air at 20-80% relative humidity is heated by the present system were estimated. Theoretically, air above 200°C can be generated from saturated air above 70°C. Consequently, zeolite/water adsorption systems have the potential to regenerate the thermal energy of waste water and exhaust gases.

  18. Intra-community implications of implementing multiple tsunami-evacuation zones in Alameda, California

    USGS Publications Warehouse

    Peters, Jeff; Wood, Nathan J.; Wilson, Rick; Miller, Kevin

    2016-01-01

    Tsunami-evacuation planning in coastal communities is typically based on maximum evacuation zones for a single scenario or a composite of sources; however, this approach may over-evacuate a community and overly disrupt the local economy and strain emergency-service resources. To minimize the potential for future over-evacuations, multiple evacuation zones based on arrival time and inundation extent are being developed for California coastal communities. We use the coastal city of Alameda, California (USA), as a case study to explore population and evacuation implications associated with multiple tsunami-evacuation zones. We use geospatial analyses to estimate the number and type of people in each tsunami-evacuation zone and anisotropic pedestrian evacuation models to estimate pedestrian travel time out of each zone. Results demonstrate that there are tens of thousands of individuals in tsunami-evacuation zones on the two main islands of Alameda, but they will likely have sufficient time to evacuate before wave arrival. Quality of life could be impacted by the high number of government offices, schools, day-care centers, and medical offices in certain evacuation zones and by potentially high population density at one identified safe area after an evacuation. Multi-jurisdictional evacuation planning may be warranted, given that many at-risk individuals may need to evacuate to neighboring jurisdictions. The use of maximum evacuation zones for local tsunami sources may be warranted given the limited amount of available time to confidently recommend smaller zones which would result in fewer evacuees; however, this approach may also result in over-evacuation and the incorrect perception that successful evacuations are unlikely.

  19. The role of reservoir storage in large-scale surface water availability analysis for Europe

    NASA Astrophysics Data System (ADS)

    Garrote, L. M.; Granados, A.; Martin-Carrasco, F.; Iglesias, A.

    2017-12-01

    A regional assessment of current and future water availability in Europe is presented in this study. The assessment was made using the Water Availability and Adaptation Policy Analysis (WAAPA) model. The model was built on the river network derived from the Hydro1K digital elevation maps, including all major river basins of Europe. Reservoir storage volume was taken from the World Register of Dams of ICOLD, including all dams with storage capacity over 5 hm³. Potential Water Availability is defined as the maximum amount of water that could be supplied at a certain point of the river network to satisfy a regular demand under pre-specified reliability requirements. Water availability is the combined result of hydrological processes, which determine streamflow in natural conditions, and human intervention, which determines the available hydraulic infrastructure to manage water and establishes water supply conditions through operating rules. The WAAPA algorithm estimates the maximum demand that can be supplied at every node of the river network accounting for the regulation capacity of reservoirs under different management scenarios. The model was run for a set of hydrologic scenarios taken from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), where the PCRGLOBWB hydrological model was forced with results from five global climate models. Model results allow the estimation of potential water stress by comparing water availability to projections of water abstractions along the river network under different management alternatives. The set of sensitivity analyses performed showed the effect of policy alternatives on water availability and highlighted the large uncertainties linked to hydrological and anthropogenic processes.
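
    The core idea behind a WAAPA-style availability estimate can be sketched as a yield analysis: the largest constant demand a reservoir of capacity K can supply from an inflow series without failure, found here by simple mass balance and bisection. The inflow series and capacity are synthetic placeholders; the real model adds river networks, operating rules, and reliability criteria.

    ```python
    import numpy as np

    def can_supply(inflow, K, demand):
        """Mass-balance simulation: True if storage never goes negative."""
        s = K  # start full
        for q in inflow:
            s = min(s + q - demand, K)
            if s < 0.0:
                return False
        return True

    def max_yield(inflow, K, tol=1e-3):
        """Bisection on demand; yield cannot exceed the mean inflow."""
        lo, hi = 0.0, float(np.mean(inflow))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if can_supply(inflow, K, mid) else (lo, mid)
        return lo

    rng = np.random.default_rng(3)
    inflow = rng.gamma(shape=2.0, scale=5.0, size=600)  # synthetic monthly inflows (hm3)
    print(f"max sustainable demand: {max_yield(inflow, K=50.0):.2f} hm3/month")
    ```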

  20. Natural climate solutions

    NASA Astrophysics Data System (ADS)

    Griscom, Bronson W.; Adams, Justin; Ellis, Peter W.; Houghton, Richard A.; Lomax, Guy; Miteva, Daniela A.; Schlesinger, William H.; Shoch, David; Siikamäki, Juha V.; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T.; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R.; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M.; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E.; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph

    2017-10-01

    Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify “natural climate solutions” (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS—when constrained by food security, fiber security, and biodiversity conservation—is 23.8 petagrams of CO2 equivalent (PgCO2e) y⁻¹ (95% CI 20.3–37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y⁻¹) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e⁻¹ by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2⁻¹. Most NCS actions—if effectively implemented—also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change.

  1. Natural climate solutions.

    PubMed

    Griscom, Bronson W; Adams, Justin; Ellis, Peter W; Houghton, Richard A; Lomax, Guy; Miteva, Daniela A; Schlesinger, William H; Shoch, David; Siikamäki, Juha V; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph

    2017-10-31

    Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify "natural climate solutions" (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS-when constrained by food security, fiber security, and biodiversity conservation-is 23.8 petagrams of CO2 equivalent (PgCO2e) y⁻¹ (95% CI 20.3-37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y⁻¹) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e⁻¹ by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2⁻¹. Most NCS actions-if effectively implemented-also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change.

  2. Natural climate solutions

    PubMed Central

    Adams, Justin; Ellis, Peter W.; Houghton, Richard A.; Lomax, Guy; Miteva, Daniela A.; Schlesinger, William H.; Shoch, David; Siikamäki, Juha V.; Smith, Pete; Woodbury, Peter; Zganjar, Chris; Blackman, Allen; Campari, João; Conant, Richard T.; Delgado, Christopher; Elias, Patricia; Gopalakrishna, Trisha; Hamsik, Marisa R.; Herrero, Mario; Kiesecker, Joseph; Landis, Emily; Laestadius, Lars; Leavitt, Sara M.; Minnemeyer, Susan; Polasky, Stephen; Potapov, Peter; Putz, Francis E.; Sanderman, Jonathan; Silvius, Marcel; Wollenberg, Eva; Fargione, Joseph

    2017-01-01

    Better stewardship of land is needed to achieve the Paris Climate Agreement goal of holding warming to below 2 °C; however, confusion persists about the specific set of land stewardship options available and their mitigation potential. To address this, we identify and quantify “natural climate solutions” (NCS): 20 conservation, restoration, and improved land management actions that increase carbon storage and/or avoid greenhouse gas emissions across global forests, wetlands, grasslands, and agricultural lands. We find that the maximum potential of NCS—when constrained by food security, fiber security, and biodiversity conservation—is 23.8 petagrams of CO2 equivalent (PgCO2e) y−1 (95% CI 20.3–37.4). This is ≥30% higher than prior estimates, which did not include the full range of options and safeguards considered here. About half of this maximum (11.3 PgCO2e y−1) represents cost-effective climate mitigation, assuming the social cost of CO2 pollution is ≥100 USD MgCO2e−1 by 2030. Natural climate solutions can provide 37% of cost-effective CO2 mitigation needed through 2030 for a >66% chance of holding warming to below 2 °C. One-third of this cost-effective NCS mitigation can be delivered at or below 10 USD MgCO2−1. Most NCS actions—if effectively implemented—also offer water filtration, flood buffering, soil health, biodiversity habitat, and enhanced climate resilience. Work remains to better constrain uncertainty of NCS mitigation estimates. Nevertheless, existing knowledge reported here provides a robust basis for immediate global action to improve ecosystem stewardship as a major solution to climate change. PMID:29078344

  3. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-09-20

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF generally performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.

  4. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    PubMed Central

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-01-01

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF generally performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069

  5. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
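
    The general form of such a relation is a logistic function of winter flow. The sketch below is illustrative only: the coefficients b0 and b1 are hypothetical placeholders, not values from the report's tables.

    ```python
    import math

    def drought_probability(winter_flow_cfs: float,
                            b0: float = 2.5, b1: float = -0.04) -> float:
        """Logistic model: P = 1 / (1 + exp(-(b0 + b1 * winter_flow)))."""
        z = b0 + b1 * winter_flow_cfs
        return 1.0 / (1.0 + math.exp(-z))

    for q in (20, 60, 120):  # hypothetical mean winter flows, cfs
        print(q, round(drought_probability(q), 3))
    ```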

  6. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. So Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides a performance improvement over MLE in this application.

  7. Estimation of Compaction Parameters Based on Soil Classification

    NASA Astrophysics Data System (ADS)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These problems raise the question of how to estimate the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e., the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of the 30 samples was tested for its index properties and compaction behavior. All of the data from the laboratory test results were used to estimate the compaction parameter values by linear regression and by the Goswami model. The soils were classified as A-4, A-6, and A-7 according to AASHTO and as SC, SC-SM, and CL according to USCS. Linear regression gave the estimates γdmax* = 1.862 − 0.005·FINES − 0.003·LL and wopt* = −0.607 + 0.362·FINES + 0.161·LL. The Goswami model (of the form Y = m·log G + k) gave m = −0.376 and k = 2.482 for estimating the maximum dry unit weight, and m = 21.265 and k = −32.421 for estimating the optimum water content. For both of these equations a 95% confidence interval was obtained.
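
    Applying the regression relations quoted above is direct arithmetic. In the sketch below the units (γdmax in g/cm³, wopt in percent) and the input to the Goswami form are assumptions on my part; the example inputs are arbitrary.

    ```python
    import math

    def compaction_estimates(fines_pct: float, ll_pct: float):
        """Linear-regression relations quoted above (decimal-point form)."""
        gamma_dmax = 1.862 - 0.005 * fines_pct - 0.003 * ll_pct  # assumed g/cm^3
        w_opt = -0.607 + 0.362 * fines_pct + 0.161 * ll_pct      # assumed percent
        return gamma_dmax, w_opt

    def goswami(G: float, m: float, k: float) -> float:
        """Goswami-type model Y = m*log10(G) + k; G is the model's soil index."""
        return m * math.log10(G) + k

    print(compaction_estimates(fines_pct=45.0, ll_pct=32.0))
    print(goswami(45.0, m=-0.376, k=2.482))  # maximum dry unit weight variant
    ```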

  8. Using Atmosphere-Forest Measurements To Examine The Potential For Reduced Downwind Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viner, B.

    2015-10-13

    A 2-D dispersion model was developed to address how airborne plumes interact with the forest at the Savannah River Site. Parameters describing turbulence and mixing of the atmosphere within and just above the forest were estimated using measurements of water vapor or carbon dioxide concentration made at the Aiken AmeriFlux tower for a range of stability and seasonal conditions. The greatest mixing of an airborne plume into the forest was found for 1) very unstable environments, when atmospheric turbulence is usually at a maximum, and 2) very stable environments, when the plume concentration at the forest top is at a maximum and small amounts of turbulent mixing can move a substantial portion of the plume into the forest. Plume interactions with the forest during stable periods are of particular importance because these conditions are usually considered the worst-case scenario for downwind effects from a plume. The pattern of plume mixing into the forest was similar throughout the year except during summer, when the amount of plume mixed into the forest was nearly negligible for all but stable periods. If the model results indicating increased deposition into the forest during stable conditions can be confirmed, it would allow for a reduction in the limitations that restrict facility operations while maintaining conservative estimates for downwind effects. Continuing work is planned to confirm these results as well as estimate specific deposition velocity values for use in toolbox models used in regulatory roles.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G P; Logan, C M

    We have estimated interference from external background radiation for a computed tomography (CT) scanner. Our intention is to estimate the interference that would be expected for the high-resolution SkyScan 1072 desk-top x-ray microtomography system. The SkyScan system uses a Microfocus x-ray source capable of a 10-μm focal spot at a maximum current of 0.1 mA and a maximum energy of 130 kVp. All predictions made in this report assume using the x-ray source at the smallest spot size, maximum energy, and maximum current. Some of the system's basic geometry used for these estimates is: (1) Source-to-detector distance: 250 mm, (2) Minimum object-to-detector distance: 40 mm, and (3) Maximum object-to-detector distance: 230 mm. This is a first-order, rough estimate of the quantity of interference expected at the system detector caused by background radiation. The amount of interference is expressed as a ratio of exposures expected at the detector of the CT system. The exposure values for the SkyScan system are determined by scaling the measured values of an x-ray source and the background radiation, adjusting for the differences in source-to-detector distance and current. The x-ray source that was used for these measurements was not the SkyScan Microfocus x-ray tube. Measurements were made using an x-ray source that was operated at the same applied voltage but higher current for better statistics.
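
    A hedged sketch of the scaling described above: take an exposure measured with a reference tube and adjust it linearly by tube current and by inverse-square distance to approximate the target geometry. All numeric values below are hypothetical placeholders, not the report's measurements.

    ```python
    def scaled_exposure(measured_exposure: float,
                        i_ref_ma: float, i_target_ma: float,
                        d_ref_mm: float, d_target_mm: float) -> float:
        """Scale exposure linearly with current and with 1/distance^2."""
        return (measured_exposure
                * (i_target_ma / i_ref_ma)
                * (d_ref_mm / d_target_mm) ** 2)

    # e.g. a reading taken at 1.0 mA and 500 mm, rescaled to 0.1 mA at 250 mm:
    print(scaled_exposure(2.4e-3, i_ref_ma=1.0, i_target_ma=0.1,
                          d_ref_mm=500.0, d_target_mm=250.0))
    ```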

  10. Global estimation of long-term persistence in annual river runoff

    NASA Astrophysics Data System (ADS)

    Markonis, Y.; Moustakis, Y.; Nasika, C.; Sychova, P.; Dimitriadis, P.; Hanel, M.; Máca, P.; Papalexiou, S. M.

    2018-03-01

    Long-term persistence (LTP) of annual river runoff is a topic of ongoing hydrological research, due to its implications for water resources management. Here, we estimate its strength, measured by the Hurst coefficient H, in 696 annual, globally distributed streamflow records with at least 80 years of data. We use three estimation methods (maximum likelihood estimator, Whittle estimator, and least squares variance), resulting in similar mean values of H close to 0.65. Subsequently, we explore potential factors influencing H by two linear (Spearman's rank correlation, multiple linear regression) and two non-linear (self-organizing maps, random forests) techniques. Catchment area is found to be crucial for medium to larger watersheds, while climatic controls, such as the aridity index, have a higher impact on smaller ones. Our findings indicate that long-term persistence is weaker than found in other studies, suggesting that enhanced LTP is encountered in large-catchment rivers, where the effect of spatial aggregation is more intense. However, we also show that the estimated values of H can be reproduced by a short-term persistence stochastic model such as an auto-regressive AR(1) process. A direct consequence is that some of the most common methods for the estimation of the H coefficient might not be suitable for discriminating short- and long-term persistence even in long observational records.
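
    A minimal sketch of the least-squares-variance idea named above: for an LTP process, the variance of k-aggregated means scales as k^(2H-2), so H can be read off a log-log slope. This is illustrative only; the study's estimators are more careful, and the scales and test series here are arbitrary.

    ```python
    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
        """Estimate H from the variance of k-aggregated means (slope = 2H - 2)."""
        x = np.asarray(x, dtype=float)
        v = []
        for k in scales:
            n = len(x) // k
            means = x[: n * k].reshape(n, k).mean(axis=1)
            v.append(means.var(ddof=1))
        slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
        return 1.0 + slope / 2.0

    rng = np.random.default_rng(42)
    print(hurst_aggvar(rng.normal(size=4096)))  # white noise -> H near 0.5
    ```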

  11. Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers

    PubMed Central

    Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.

    2012-01-01

    Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, and comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559

  12. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  13. Beyond SaGMRotI: Conversion to SaArb, SaSN, and SaMaxRot

    USGS Publications Warehouse

    Watson-Lamprey, J. A.; Boore, D.M.

    2007-01-01

    In the seismic design of structures, estimates of design forces are usually provided to the engineer in the form of elastic response spectra. Predictive equations for elastic response spectra are derived from empirical recordings of ground motion. The geometric mean of the two orthogonal horizontal components of motion is often used as the response value in these predictive equations, although it is not necessarily the most relevant estimate of forces within the structure. For some applications it is desirable to estimate the response value on a randomly chosen single component of ground motion, and in other applications the maximum response in a single direction is required. We give adjustment factors that allow converting the predictions of geometric-mean ground-motion predictions into either of these other two measures of seismic ground-motion intensity. In addition, we investigate the relation of the strike-normal component of ground motion to the maximum response values. We show that the strike-normal component of ground motion seldom corresponds to the maximum horizontal-component response value (in particular, at distances greater than about 3 km from faults), and that focusing on this case in exclusion of others can result in the underestimation of the maximum component. This research provides estimates of the maximum response value of a single component for all cases, not just near-fault strike-normal components. We provide modification factors that can be used to convert predictions of ground motions in terms of the geometric mean to the maximum spectral acceleration (SaMaxRot) and the random component of spectral acceleration (SaArb). Included are modification factors for both the mean and the aleatory standard deviation of the logarithm of the motions.

  14. Crash avoidance potential of four large truck technologies.

    PubMed

    Jermakian, Jessica S

    2012-11-01

    The objective of this paper was to estimate the maximum potential large truck crash reductions in the United States associated with each of four crash avoidance technologies: side view assist, forward collision warning/mitigation, lane departure warning/prevention, and vehicle stability control. Estimates accounted for limitations of current systems. Crash records were extracted from the 2004-08 files of the National Automotive Sampling System General Estimates System (NASS GES) and the Fatality Analysis Reporting System (FARS). Crash descriptors such as location of damage on the vehicle, road characteristics, time of day, and precrash maneuvers were reviewed to determine whether the information or action provided by each technology potentially could have prevented the crash. Of the four technologies, side view assist had the greatest potential for preventing large truck crashes of any severity; the technology is potentially applicable to 39,000 crashes in the United States each year, including 2000 serious and moderate injury crashes and 79 fatal crashes. Vehicle stability control is another promising technology, with the potential to prevent or mitigate up to 31,000 crashes per year including more serious crashes--up to 7000 moderate-to-serious injury crashes and 439 fatal crashes per year. Vehicle stability control could prevent or mitigate up to 20 and 11 percent of moderate-to-serious injury and fatal large truck crashes, respectively. Forward collision warning has the potential to prevent as many as 31,000 crashes per year, including 3000 serious and moderate injury crashes and 115 fatal crashes. Finally, 10,000 large truck crashes annually were relevant to lane departure warning/prevention systems. Of these, 1000 involved serious and moderate injuries and 247 involved fatal injuries. There is great potential effectiveness for truck-based crash avoidance systems. However, it is yet to be determined how drivers will interact with the systems. Actual effectiveness of crash avoidance systems will not be known until sufficient real-world experience has been gained. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Impact of air temperature on physically-based maximum precipitation estimation through change in moisture holding capacity of air

    NASA Astrophysics Data System (ADS)

    Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.

    2018-01-01

    The impact of air temperature on Maximum Precipitation (MP) estimation through changes in the moisture holding capacity of air was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically-based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation by the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two most severe maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase was found in the maximum 72-h basin-average precipitation over the ARW with air temperature rise for both of the storm events. The second numerical experiment used specific amounts of air temperature rise that are assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW. The MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century compared to the historical period.

  16. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al, and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
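
    A hedged sketch of the two-stage procedure described above ("estimate margins, then dependence"), using a Clayton copula with exponential margins as a stand-in for the thesis's general setting; censoring is not handled, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(5)
    n, theta_true = 2000, 2.0

    # Simulate Clayton-copula data via conditional inversion
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) \
        ** (-1 / theta_true)
    t1 = -np.log(1 - u) / 0.5   # exponential margins, rates 0.5 and 1.5
    t2 = -np.log(1 - v) / 1.5

    # Stage 1: fit margins by ML (exponential MLE: rate = 1 / mean)
    rate1, rate2 = 1 / t1.mean(), 1 / t2.mean()
    U, V = 1 - np.exp(-rate1 * t1), 1 - np.exp(-rate2 * t2)

    # Stage 2: maximize the Clayton copula log-likelihood with margins fixed
    def neg_loglik(theta):
        if theta <= 0:
            return np.inf
        c = (np.log1p(theta)
             - (theta + 1) * (np.log(U) + np.log(V))
             - (2 + 1 / theta) * np.log(U ** -theta + V ** -theta - 1))
        return -np.sum(c)

    res = minimize_scalar(neg_loglik, bounds=(0.01, 20), method="bounded")
    print("theta_hat:", round(res.x, 3), "(true 2.0)")
    ```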

  17. An estimate of the magnitude and trend of HIV/AIDS epidemic using data from the routine VCT services as an alternative data source to ANC sentinel surveillance in Addis Ababa, Ethiopia.

    PubMed

    Getachew, Yehenew; Gotu, Butte; Enquselassie, Fikre

    2010-10-01

    Since the early 1980s, when AIDS was first recognized, there has been uncertainty about the future trend and the ultimate dimensions of the pandemic. This uncertainty persists because of difficulties in measuring HIV incidence and prevalence with a substantial degree of precision in a given population. One of the many factors behind the lack of precision is the problem of obtaining representative data sources that can be extrapolated to the general population. National and regional HIV estimates for Ethiopia are derived from ANC based HIV surveillance data. Alternative data sources have not been exhaustively explored as potential tools to monitor the trend of the HIV/AIDS epidemic in the country. To estimate the magnitude and trend of the HIV/AIDS epidemic using data from the routine VCT services as an alternative data source to ANC sentinel surveillance data. The study used secondary data sources from all government, private, and NGO VCT centers covering the period 2003-2005 in Addis Ababa. For the purpose of making a comparative analysis of the VCT based estimations and projections, records of all five sentinel sites in Addis Ababa for the period 1983-2003 were reviewed. Both ANC and VCT data sources showed similar and regular trends from the beginning of the HIV epidemic until 1995, during which the ANC showed relatively higher prevalence rates than the VCT data, with a maximum difference in HIV prevalence of 1.06% in 1993. However, a higher HIV prevalence was noted for the VCT than the ANC data source for the period 1996-2002, with a maximum difference of 1.4% in 1998, the year when both the ANC and VCT modeled HIV prevalence reached the highest peak in Addis Ababa. In contrast, the ANC based prevalence was higher than the VCT based prevalence for the period 2004-2010, with a maximum difference of 2.2%. This study suggests that VCT based HIV prevalence data closely approximate the ANC based data. Therefore, the VCT data source can be valuable to complement the ANC data in monitoring the HIV epidemic and trend.

  18. Modeling spatiotemporal dynamics of global wetlands: comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin

    2016-03-01

    Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus their performance against observations. This study describes new improvements to TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM) and quantifies uncertainties by comparing the effects of three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS), which differ in spatial resolution and accuracy, on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ˜ 10.3 Mkm2 (106 km2), with a mean annual maximum of ˜ 5.17 Mkm2 for 1980-2010. When integrated with a wetland methane emission submodule, the uncertainty of global annual CH4 emissions from topography inputs is estimated to be 29.0 Tg yr-1. This study demonstrates the feasibility of TOPMODEL to capture spatial heterogeneity of inundation at a large scale and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
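
    To make the sub-grid TOPMODEL idea concrete, the sketch below computes a grid cell's flooded fraction as the share of its CTI histogram lying above a threshold that rises as the cell-mean water table deepens. The gamma-shaped CTI histogram, the scaling parameter m, and the linear deficit-to-CTI link are schematic assumptions, not the LPJ-wsl parameterization.

```python
# Schematic TOPMODEL-style sub-grid inundation: pixels whose CTI exceeds a
# water-table-dependent threshold are treated as flooded.
import numpy as np

def flooded_fraction(cti, water_table_depth_m, m=0.1):
    """cti: sub-grid compound topographic index values;
    m: assumed scaling (m) linking water table depth to the CTI deficit."""
    threshold = cti.mean() + water_table_depth_m / m
    return np.mean(cti >= threshold)

rng = np.random.default_rng(1)
cti = rng.gamma(shape=4.0, scale=2.0, size=10_000)  # synthetic CTI histogram
for zwt in (0.0, 0.2, 0.5):                          # water table depth (m)
    print(f"zwt={zwt:.1f} m -> flooded fraction {flooded_fraction(cti, zwt):.3f}")
```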

  19. Estimating Rhododendron maximum L. (Ericaceae) Canopy Cover Using GPS/GIS Technology

    Treesearch

    Tyler J. Tran; Katherine J. Elliott

    2012-01-01

    In the southern Appalachians, Rhododendron maximum L. (Ericaceae) is a key evergreen understory species, often forming a subcanopy in forest stands. Little is known about the significance of R. maximum cover in relation to other forest structural variables. Only recently have studies used Global Positioning System (GPS) technology...

  20. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbins-Monro estimation. With each…
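
    For orientation, the sketch below writes out the 3PL item response function and the marginal likelihood of one response pattern (the integral over ability that marginal maximum likelihood estimation maximizes), evaluated by Gauss-Hermite quadrature. All item parameters and responses are made-up values for illustration.

```python
# 3PL item response function and a marginal response-pattern likelihood.
import numpy as np

def p_3pl(theta, a, b, c):
    # P(correct | theta) = c + (1 - c) / (1 + exp(-a (theta - b)))
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def marginal_likelihood(x, a, b, c, n_quad=41):
    # Integrate the response-pattern likelihood over theta ~ N(0, 1).
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    theta = np.sqrt(2.0) * nodes                  # change of variables
    P = p_3pl(theta[:, None], a, b, c)            # (n_quad, n_items)
    lik = np.prod(P**x * (1.0 - P)**(1 - x), axis=1)
    return (weights / np.sqrt(np.pi)) @ lik

a = np.array([1.2, 0.8, 1.5])    # discrimination (illustrative)
b = np.array([-0.5, 0.0, 1.0])   # difficulty
c = np.array([0.2, 0.25, 0.2])   # guessing
x = np.array([1, 1, 0])          # observed responses
print("marginal likelihood of pattern:", marginal_likelihood(x, a, b, c))
```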

  1. Estimation and comparison of potential runoff-contributing areas in Kansas using topographic, soil, and land-use information

    USGS Publications Warehouse

    Juracek, Kyle E.

    2000-01-01

    Digital topographic, soil, and land-use information was used to estimate potential runoff-contributing areas in Kansas. The results were used to compare 91 selected subbasins representing slope, soil, land-use, and runoff variability across the State. Potential runoff-contributing areas were estimated collectively for the processes of infiltration-excess and saturation-excess overland flow using a set of environmental conditions that represented, in relative terms, very high, high, moderate, low, very low, and extremely low potential for runoff. Various rainfall-intensity and soil-permeability values were used to represent the threshold conditions at which infiltration-excess overland flow may occur. Antecedent soil-moisture conditions and a topographic wetness index (TWI) were used to represent the threshold conditions at which saturation-excess overland flow may occur. Land-use patterns were superimposed over the potential runoff-contributing areas for each set of environmental conditions. Results indicated that the very low potential-runoff conditions (soil permeability less than or equal to 1.14 inches per hour and TWI greater than or equal to 14.4) provided the best statewide ability to quantitatively distinguish subbasins as having relatively high, moderate, or low potential for runoff on the basis of the percentage of potential runoff-contributing areas within each subbasin. The very low and (or) extremely low potential-runoff conditions (soil permeability less than or equal to 0.57 inch per hour and TWI greater than or equal to 16.3) provided the best ability to qualitatively compare potential for runoff among areas within individual subbasins. The majority of subbasins with relatively high potential for runoff are located in the eastern half of the State where soil permeability is generally less and precipitation is typically greater. The ability to distinguish subbasins as having relatively high, moderate, or low potential for runoff was possible mostly due to the variability of soil permeability across the State. The spatial distribution of potential contributing areas, in combination with the superimposed land-use patterns, may be used to help identify and prioritize subbasin areas for the implementation of best-management practices to manage runoff and meet Federally mandated total maximum daily load requirements.
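
    The topographic wetness index used above as a saturation-excess threshold is commonly defined as TWI = ln(a / tan(beta)), with a the specific upslope contributing area and beta the local slope. The snippet below is a minimal sketch of that definition; the report's actual GIS workflow is not reproduced, and the example inputs are invented.

```python
# TWI = ln(a / tan(beta)); flat, high-accumulation cells score high.
import numpy as np

def twi(specific_catchment_area_m, slope_deg):
    beta = np.radians(slope_deg)
    return np.log(specific_catchment_area_m / np.tan(beta))

# A flat, high-accumulation cell clears the TWI >= 14.4 threshold of the
# "very low" runoff scenario; a steep, low-accumulation cell does not.
print(twi(2.0e5, 1.0))   # ~16.3
print(twi(5.0e2, 10.0))  # ~8.0
```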

  2. Assessing the variability of glacier lake bathymetries and potential peak discharge based on large-scale measurements in the Cordillera Blanca, Peru

    NASA Astrophysics Data System (ADS)

    Cochachin, Alejo; Huggel, Christian; Salazar, Cesar; Haeberli, Wilfried; Frey, Holger

    2015-04-01

    Over timescales of hundreds to thousands of years, mountain ice masses have eroded bedrock and subglacial sediment, forming overdeepenings and large moraine dams that now serve as basins for glacial lakes. Satellite-based studies found a total of 8355 glacial lakes in Peru, of which 830 lakes were observed in the Cordillera Blanca. Some of them have caused major disasters due to glacial lake outburst floods in the past decades. On the other hand, in view of shrinking glaciers, changing water resources, and formation of new lakes, glacial lakes could have a function as water reservoirs in the future. Here we present unprecedented bathymetric studies of 124 glacial lakes in the Cordillera Blanca, Huallanca, Huayhuash and Raura in the regions of Ancash, Huanuco and Lima. Measurements were carried out using a boat equipped with GPS, a total station and an echo sounder to measure the depth of the lakes. Autocad Civil 3D Land and ArcGIS were used to process the data and generate digital topographies of the lake bathymetries, and analyze parameters such as lake area, length and width, and depth and volume. Based on that, we calculated empirical equations for mean depth as related to (1) area, (2) maximum length, and (3) maximum width. We then applied these three equations to all 830 glacial lakes of the Cordillera Blanca to estimate their volumes. Eventually we used three relations from the literature to assess the peak discharge of potential lake outburst floods, based on lake volumes, resulting in 3 x 3 peak discharge estimates. In terms of lake topography and geomorphology, results indicate that the maximum depth is located in the center part for bedrock lakes, and in the back part for lakes in moraine material. The best correlations are found for mean depth and maximum width; however, all three empirical relations show a large spread, reflecting the wide range of natural lake bathymetries. Volumes of the 124 lakes with bathymetries amount to 0.9 km3, while the volume of all glacial lakes of the Cordillera Blanca ranges between 1.15 and 1.29 km3. The small difference in volume between the large lake sample and the smaller sample of bathymetrically surveyed lakes is due to the large size of the measured lakes. The different distributions for lake volume and peak discharge indicate the range of variability in such estimates, and provide valuable first-order information for management and adaptation efforts in the field of water resources and flood prevention.
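
    The scaling workflow above can be sketched in a few lines: fit a power law relating mean depth to maximum width on the surveyed lakes, predict volume as mean depth times area for the full inventory, then feed volume into a power-law outburst relation Q = k * V**n. All data below are synthetic and the regression and outburst coefficients are placeholders, not the relations actually fitted or cited in the study.

```python
# Depth-width power law -> volumes -> peak discharge, with placeholder values.
import numpy as np

rng = np.random.default_rng(2)
width = rng.uniform(50, 600, size=124)                  # max width (m), synthetic
depth = 0.2 * width**0.8 * rng.lognormal(0, 0.3, 124)   # synthetic mean depth (m)

# Log-log least squares: log(depth) = log(k) + b * log(width)
b, logk = np.polyfit(np.log(width), np.log(depth), 1)
k = np.exp(logk)

area = rng.uniform(1e4, 5e5, size=830)                  # lake areas (m^2), synthetic
w_all = rng.uniform(50, 600, size=830)
volume = (k * w_all**b) * area                          # V = mean depth * area

q_peak = 0.72 * volume**0.53  # placeholder power-law outburst relation (m^3/s)
print(f"total volume: {volume.sum()/1e9:.2f} km^3, max Qp: {q_peak.max():.0f} m^3/s")
```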

  3. Volatile organic compounds in pesticide formulations: Methods to estimate ozone formation potential

    NASA Astrophysics Data System (ADS)

    Zeinali, Mazyar; McConnell, Laura L.; Hapeman, Cathleen J.; Nguyen, Anh; Schmidt, Walter F.; Howard, Cody J.

    2011-05-01

    The environmental fate and toxicity of active ingredients in pesticide formulations have been investigated for many decades, but relatively little research has been conducted on the fate of pesticide co-formulants or inerts. Some co-formulants are volatile organic compounds (VOCs) and can contribute to ground-level ozone pollution. Effective product assessment methods are required to reduce emissions of the most reactive VOCs. Six emulsifiable concentrate pesticide products were characterized for percent VOC by thermogravimetric analysis (TGA) and gas chromatography-mass spectrometry (GC-MS). TGA estimates exceeded GC-MS by 10-50% in all but one product, indicating that for some products a fraction of active ingredient is released during TGA or that VOC contribution was underestimated by GC-MS. VOC profiles were examined using TGA-Fourier transform infrared (FTIR) evolved gas analysis and were compared to GC-MS results. The TGA-FTIR method worked best for products with the simplest and most volatile formulations, but could be developed into an effective product screening tool. An ozone formation potential (OFP) for each product was calculated using the chemical composition from GC-MS and published maximum incremental reactivity (MIR) values. OFP values ranged from 0.1 to 3.1 g ozone g-1 product. A 24-h VOC emission simulation was developed for each product assuming a constant emission rate calculated from an equation relating maximum flux rate to vapor pressure. Results indicate 100% VOC loss for some products within a few hours, while other products containing less volatile components will remain in the field for several days after application. An alternate method to calculate a product OFP was investigated utilizing the fraction of the total mass of each chemical emitted at the end of the 24-h simulation. The ideal assessment approach will include: 1) unambiguous chemical composition information; 2) flexible simulation models to estimate emissions under different management practices; and 3) accurate reactivity predictions.
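
    The product-level OFP described above is a mass-weighted sum, OFP = sum_i w_i * MIR_i, with w_i the mass fraction of VOC i in the product and MIR_i its maximum incremental reactivity (g ozone per g VOC). The composition and MIR values in the sketch below are illustrative stand-ins, not the paper's measured formulations or the published MIR tables.

```python
# Mass-weighted ozone formation potential for one product (illustrative values).
composition = {"xylene": 0.30, "naphtha": 0.25, "2-ethylhexanol": 0.05}  # mass fractions
mir = {"xylene": 7.8, "naphtha": 1.2, "2-ethylhexanol": 2.7}             # placeholder MIRs

ofp = sum(frac * mir[voc] for voc, frac in composition.items())
print(f"OFP = {ofp:.2f} g ozone per g product")
```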

  4. A double-gaussian, percentile-based method for estimating maximum blood flow velocity.

    PubMed

    Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D

    2013-11-01

    Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A Gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the Gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-Gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
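
    A minimal sketch of the double-Gaussian idea, assuming synthetic velocity samples at a single time instant: fit a two-component mixture, take the higher-mean component as signal, and read envelope values off its percentiles. The distributions and parameters below are invented for illustration.

```python
# Fit a two-component Gaussian mixture and report signal-component percentiles.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

rng = np.random.default_rng(3)
noise = rng.normal(20.0, 10.0, size=4000)     # synthetic noise floor (cm/s)
signal = rng.normal(90.0, 15.0, size=1000)    # synthetic flow velocities (cm/s)
samples = np.concatenate([noise, signal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
sig = int(np.argmax(gmm.means_.ravel()))      # higher-mean component = signal
mu = gmm.means_.ravel()[sig]
sd = np.sqrt(gmm.covariances_.ravel()[sig])

for p in (0.90, 0.95, 0.99):
    print(f"{int(p*100)}th percentile envelope: {norm.ppf(p, mu, sd):.1f} cm/s")
```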

  5. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    PubMed

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice.

  6. Robust estimation-free prescribed performance back-stepping control of air-breathing hypersonic vehicles without affine models

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi

    2016-11-01

    This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with a convergence rate no less than a prescribed constant and a maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need for an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drift. Owing to its independence of an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.

  7. Seasonal LAI in slash pine estimated with LANDSAT TM

    NASA Technical Reports Server (NTRS)

    Curran, Paul J.; Dungan, Jennifer L.; Gholz, Henry L.

    1990-01-01

    The leaf area index (LAI, total area of leaves per unit area of ground) of most forest canopies varies throughout the year, yet for logistical reasons it is difficult to estimate anything more detailed than a seasonal maximum LAI. To determine if remotely sensed data can be used to estimate LAI seasonally, field measurements of LAI were compared to normalized difference vegetation index (NDVI) values derived using LANDSAT Thematic Mapper (TM) data, for 16 fertilized and control slash pine plots on 3 dates. Linear relationships existed between NDVI and LAI with R² values of 0.35, 0.75, and 0.86 for February 1988, September 1988, and March 1989, respectively. This is the first reported study in which NDVI is related to forest LAI recorded during the month of sensor overpass. Predictive relationships based on data from eight of the plots were used to estimate the LAI of the other eight plots with a root-mean-square error of 0.74 LAI, which is 15.6 percent of the mean LAI. This demonstrates the potential use of LANDSAT TM data for studying seasonal dynamics in forest canopies.

  8. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification, it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with one another and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
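
    For readers unfamiliar with the instrumental-variable machinery named above, the sketch below shows generic two-stage least squares on synthetic data with an endogenous regressor: stage 1 projects the regressors onto the instrument space, stage 2 runs ordinary least squares on the projection. It is not the aquifer-specific design of the paper.

```python
# Generic 2SLS: y = X b + e with instruments Z; X correlated with e.
import numpy as np

def two_stage_least_squares(y, X, Z):
    # Stage 1: project the endogenous regressors onto the instrument space.
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ gamma
    # Stage 2: ordinary least squares of y on the projected regressors.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta

rng = np.random.default_rng(4)
n = 500
Z = rng.normal(size=(n, 3))                 # instruments
u = rng.normal(size=n)                      # shared disturbance -> endogeneity
X = (Z @ np.array([1.0, 0.5, -0.3]) + u).reshape(-1, 1)
y = 2.0 * X.ravel() + 0.8 * u + 0.5 * rng.normal(size=n)  # true coefficient = 2

ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS (biased):", ols, " 2SLS:", two_stage_least_squares(y, X, Z))
```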

  9. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  10. Field Flight Dynamics of Hummingbirds during Territory Encroachment and Defense

    PubMed Central

    Sholtis, Katherine M.; Shelton, Ryan M.; Hedrick, Tyson L.

    2015-01-01

    Hummingbirds are known to defend food resources such as nectar sources from encroachment by competitors (including conspecifics). These competitive intraspecific interactions provide an opportunity to quantify the biomechanics of hummingbird flight performance during ecologically relevant natural behavior. We recorded the three-dimensional flight trajectories of Ruby-throated Hummingbirds defending, being chased from and freely departing from a feeder. These trajectories allowed us to compare natural flight performance to earlier laboratory measurements of maximum flight speed, aerodynamic force generation and power estimates. During field observation, hummingbirds rarely approached the maximal flight speeds previously reported from wind tunnel tests and never did so during level flight. However, the accelerations and rates of change in kinetic and potential energy we recorded indicate that these hummingbirds likely operated near the maximum of their flight force and metabolic power capabilities during these competitive interactions. Furthermore, although birds departing from the feeder while chased did so faster than freely-departing birds, these speed gains were accomplished by modulating kinetic and potential energy gains (or losses) rather than increasing overall power output, essentially trading altitude for speed during their evasive maneuver. Finally, the trajectories of defending birds were directed toward the position of the encroaching bird rather than the feeder. PMID:26039101

  11. Lethal effect of electric fields on isolated ventricular myocytes.

    PubMed

    de Oliveira, Pedro Xavier; Bassani, Rosana Almada; Bassani, José Wilson Magalhães

    2008-11-01

    Defibrillator-type shocks may cause electric and contractile dysfunction. In this study, we determined the relationship between probability of lethal injury and electric field intensity (E) in isolated rat ventricular myocytes, with emphasis on field orientation and stimulus waveform. This relationship was sigmoidal, with irreversible injury for E > 50 V/cm. During both threshold and lethal stimulation, cells were twofold more sensitive to the field when it was applied longitudinally (versus transversally) to the cell major axis. For a given E, the estimated maximum variation of transmembrane potential (ΔVmax) was greater for longitudinal stimuli, which might account for the greater sensitivity to the field. Cell death, however, occurred at lower ΔVmax values for transversal shocks. This might be explained by a less steep spatial decay of transmembrane potential predicted for transversal stimulation, which would possibly result in occurrence of electroporation in a larger membrane area. For the same stimulus duration, cells were less sensitive to field-induced injury when shocks were biphasic (versus monophasic). Our results indicate that, although significant myocyte death may occur in the E range expected during clinical defibrillation, biphasic shocks are less likely to produce irreversible cell injury.
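
    For scale intuition only, the snippet below evaluates the classic steady-state result for a spherical cell in a uniform field, ΔVm = 1.5 * E * r * cos(theta). Ventricular myocytes are rod-shaped, and the orientation effects discussed above rely on an elongated-cell model, so this is a deliberately simplified stand-in that only shows how ΔVm scales with field strength and cell size.

```python
# Induced transmembrane potential for an idealized spherical cell.
import numpy as np

def delta_vm(E_v_per_cm, radius_um, theta_deg=0.0):
    E = E_v_per_cm * 100.0        # V/cm -> V/m
    r = radius_um * 1e-6          # um -> m
    return 1.5 * E * r * np.cos(np.radians(theta_deg))

# At the ~50 V/cm lethality threshold reported above, a 10-um-radius sphere:
print(f"dVm at the pole ~ {delta_vm(50.0, 10.0) * 1e3:.0f} mV")  # ~75 mV
```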

  12. Field Flight Dynamics of Hummingbirds during Territory Encroachment and Defense.

    PubMed

    Sholtis, Katherine M; Shelton, Ryan M; Hedrick, Tyson L

    2015-01-01

    Hummingbirds are known to defend food resources such as nectar sources from encroachment by competitors (including conspecifics). These competitive intraspecific interactions provide an opportunity to quantify the biomechanics of hummingbird flight performance during ecologically relevant natural behavior. We recorded the three-dimensional flight trajectories of Ruby-throated Hummingbirds defending, being chased from and freely departing from a feeder. These trajectories allowed us to compare natural flight performance to earlier laboratory measurements of maximum flight speed, aerodynamic force generation and power estimates. During field observation, hummingbirds rarely approached the maximal flight speeds previously reported from wind tunnel tests and never did so during level flight. However, the accelerations and rates of change in kinetic and potential energy we recorded indicate that these hummingbirds likely operated near the maximum of their flight force and metabolic power capabilities during these competitive interactions. Furthermore, although birds departing from the feeder while chased did so faster than freely-departing birds, these speed gains were accomplished by modulating kinetic and potential energy gains (or losses) rather than increasing overall power output, essentially trading altitude for speed during their evasive maneuver. Finally, the trajectories of defending birds were directed toward the position of the encroaching bird rather than the feeder.

  13. National and State Treatment Need and Capacity for Opioid Agonist Medication-Assisted Treatment

    PubMed Central

    Campopiano, Melinda; Baldwin, Grant; McCance-Katz, Elinore

    2015-01-01

    Objectives. We estimated national and state trends in opioid agonist medication-assisted treatment (OA-MAT) need and capacity to identify gaps and inform policy decisions. Methods. We generated national and state rates of past-year opioid abuse or dependence, maximum potential buprenorphine treatment capacity, number of patients receiving methadone from opioid treatment programs (OTPs), and the percentage of OTPs operating at 80% capacity or more using Substance Abuse and Mental Health Services Administration data. Results. Nationally, in 2012, the rate of opioid abuse or dependence was 891.8 per 100 000 people aged 12 years or older, compared with national rates of 420.3 for maximum potential buprenorphine treatment capacity and 119.9 for patients receiving methadone in OTPs. Among states and the District of Columbia, 96% had opioid abuse or dependence rates higher than their buprenorphine treatment capacity rates; 37% had a gap of at least 5 per 1000 people. Thirty-eight states (77.6%) reported at least 75% of their OTPs were operating at 80% capacity or more. Conclusions. Significant gaps between treatment need and capacity exist at the state and national levels. Strategies to increase the number of OA-MAT providers are needed. PMID:26066931

  14. Quantifying gas emissions from the 946 CE Millennium Eruption of Paektu volcano, Democratic People's Republic of Korea/China

    USGS Publications Warehouse

    Iacovino, Kayla; Ju-Song, Kim; Sisson, Thomas W.; Lowenstern, Jacob B.; Ku-Hun, Ri; Jong-Nam, Jang; Kun-Ho, Song; Song-Hwan, Ham; Clive Oppenheimer; James O.S. Hammond; Amy Donovan; Kosima Weber-Liu; Kum-Ran, Ryu

    2016-01-01

    Paektu volcano (Changbaishan) is a rhyolitic caldera that straddles the border between the Democratic People's Republic of Korea (DPRK) and China. Its most recent large eruption was the Millennium Eruption (ME; 23 km3 DRE) circa 946 CE, which resulted in the release of copious magmatic volatiles (H2O, CO2, sulfur, and halogens). Accurate quantification of volatile yield and composition is critical in assessing volcanogenic climate impacts but is elusive, particularly for pre-historic or unmonitored eruptions. Here we employ a geochemical technique to quantify volatile composition and yield from the ME by examining trends in incompatible trace and volatile element concentrations in crystal-hosted melt inclusions. We estimate a maximum of 45 Tg S was injected into the stratosphere during the ME. If true yields are close to this maximum, this equates to more than 1.5 times the S released during the 1815 eruption of Tambora, which contributed to the "Year Without a Summer". Our maximum gas yield estimates place the ME among the strongest emitters of climate forcing gases in recorded human history, in stark contrast to ice core records that indicate minimal atmospheric sulfate loading after the eruption. We conclude that the potential lack of strong climate forcing occurred in spite of the substantial S yield and suggest that other factors predominated in minimizing climatic effects. This paradoxical case in which high S emissions do not result in substantial climate forcing may present a way forward in building more generalized models for predicting which volcanic eruptions will produce large climate impacts.
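
    The petrologic method implied above reduces to simple mass balance: the S yield is the drop in dissolved S between melt inclusions and degassed matrix glass, scaled by the erupted melt mass. In the back-of-envelope sketch below, only the 23 km3 DRE erupted volume comes from the abstract; the density, melt fraction, and S concentrations are placeholder assumptions.

```python
# Back-of-envelope petrologic S yield (placeholder values except DRE volume).
DRE_KM3 = 23.0                   # erupted volume, dense rock equivalent (from abstract)
MELT_DENSITY = 2300.0            # kg/m^3, assumed rhyolite melt density
MELT_FRACTION = 0.9              # assumed melt (vs crystal) fraction
S_INCLUSION_PPM = 300.0          # assumed pre-eruptive dissolved S
S_MATRIX_PPM = 50.0              # assumed residual S in matrix glass

melt_mass_kg = DRE_KM3 * 1e9 * MELT_DENSITY * MELT_FRACTION
s_yield_tg = melt_mass_kg * (S_INCLUSION_PPM - S_MATRIX_PPM) * 1e-6 / 1e9
print(f"S yield ~ {s_yield_tg:.0f} Tg")   # ~12 Tg with these placeholder values
```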

  15. Computational wear assessment of hard on hard hip implants subject to physically demanding tasks.

    PubMed

    Nithyaprakash, R; Shankar, S; Uddin, M S

    2018-05-01

    Hip implants subject to gait loading due to occupational activities are potentially prone to failures such as osteolysis and aseptic loosening, causing painful revision surgeries. Highly risky gait activities such as carrying a load, stairs up or down and ladder up or down may cause excessive loading at the hip joint, resulting in generation of wear and related debris. Estimation of wear under the above gait activities is thus crucial to design and develop a new and improved implant component. With this motivation, this paper presents an assessment of wear generation of PCD-on-PCD (polycrystalline diamond) hip implants using finite element (FE) analysis. A three-dimensional (3D) FE model of the hip implant, along with the peak gait load and peak flexion angle for each activity, was used to estimate wear of PCD for 10 million cycles. The maximum and minimum initial contact pressures of 206.19 MPa and 151.89 MPa were obtained for carrying a load of 40 kg and for the sitting down or getting up activity. The simulation results obtained from the finite element model also revealed that the maximum linear wear of 0.585 μm occurred for patients frequently involved in the sitting down or getting up gait activity, and the maximum volumetric wear of 0.025 mm3 for the ladder up gait activity. The stair down activity showed the least linear and volumetric wear, 0.158 μm and 0.008 mm3, respectively, at the end of 10 million cycles.
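
    A closed-form feel for numbers of this magnitude can come from Archard's law, h = k * p * s (linear wear per cycle from a wear factor k, contact pressure p, and sliding distance s), accumulated over 10 million cycles. The study itself uses a 3D FE contact model, not this formula, and every value below is an assumed placeholder.

```python
# Archard-style linear wear accumulation over 10 million gait cycles.
K_WEAR = 1.0e-14      # mm^3/(N*mm), assumed ultra-low PCD-on-PCD wear factor
P_CONTACT = 200.0     # MPa, of the order of the contact pressures above
S_PER_CYCLE = 20.0    # mm sliding distance per gait cycle (assumed)
CYCLES = 10_000_000   # 10 million cycles, as in the simulation above

linear_wear_mm = K_WEAR * P_CONTACT * S_PER_CYCLE * CYCLES
print(f"linear wear after 10M cycles ~ {linear_wear_mm * 1000:.2f} um")  # ~0.40 um
```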

  16. Energy potential of the modified excess sludge

    NASA Astrophysics Data System (ADS)

    Zawieja, Iwona

    2017-11-01

    On the basis of the SCOD value of excess sludge, it is possible to estimate the amount of energy potentially obtained during the methane fermentation process. Based on a literature review, it has been estimated that 1 kg of SCOD can yield 3.48 kWh of energy. Taking into account the methane-to-energy ratio (i.e., 10 kWh per 1 Nm3 CH4), it is possible to determine the volume of methane obtainable from the tested sludge. Determining the potential energy of sludge is necessary for the use of biogas as a fuel for cogeneration power generators and for ensuring the stability of this type of system. Therefore, the aim of the study was to determine the energy potential of excess sludge subjected to thermal and chemical disintegration. In the case of thermal disintegration, the test was conducted at a low temperature of 80°C. The reagent used for the chemical modification was peracetic acid, which has strong oxidizing properties in an aqueous medium. The time of chemical modification was 6 hours, and the applied dose of the reagent was 1.0 ml CH3COOOH/L of sludge. Subjecting the sludge to disintegration by the tested methods produced an increase in the SCOD value of the modified sludge, indicating improved biodegradability along with a concomitant increase in its energy potential. The experimental production of biogas from disintegrated sludge confirmed that it is possible to estimate the potential intensity of its production. In the case of chemical disintegration, an SCOD value of 2576 mg O2/L was obtained for a dose of 1.0 ml CH3COOOH/L; for this dose the pH value was 6.85. In the case of thermal disintegration, the maximum SCOD value of 2246 mg O2/L was obtained at 80°C with a treatment time of 6 h. For these selected parameters, the potential energy for a model digester with an active volume of 5 L was estimated at 0.193 kWh for thermal disintegration and 0.118 kWh for chemical disintegration.
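
    The energy bookkeeping stated above can be written out directly: 1 kg SCOD ~ 3.48 kWh, and 10 kWh ~ 1 Nm3 CH4, applied to a 5 L model digester. The sketch uses only the ratios quoted in the abstract; any difference from the paper's kWh figures would stem from conversion steps the abstract does not spell out.

```python
# SCOD -> energy -> methane volume, using only the abstract's stated ratios.
ENERGY_PER_KG_SCOD = 3.48   # kWh per kg SCOD
ENERGY_PER_NM3_CH4 = 10.0   # kWh per Nm3 CH4
DIGESTER_L = 5.0            # active volume of the model digester

def energy_potential(scod_mg_per_l):
    scod_kg = scod_mg_per_l * DIGESTER_L / 1e6
    kwh = scod_kg * ENERGY_PER_KG_SCOD
    return kwh, kwh / ENERGY_PER_NM3_CH4

for label, scod in (("chemical", 2576.0), ("thermal", 2246.0)):
    kwh, ch4 = energy_potential(scod)
    print(f"{label}: {kwh:.3f} kWh, {ch4 * 1000:.1f} L CH4")
```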

  17. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  18. Methodology and implications of maximum paleodischarge estimates for mountain channels, upper Animas River basin, Colorado, U.S.A.

    USGS Publications Warehouse

    Pruess, J.; Wohl, E.E.; Jarrett, R.D.

    1998-01-01

    Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m3 s-1 km-2 around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent.

  19. A Macroecological Analysis of SERA Derived Forest Heights and Implications for Forest Volume Remote Sensing

    PubMed Central

    Brolly, Matthew; Woodhouse, Iain H.; Niklas, Karl J.; Hammond, Sean T.

    2012-01-01

    Individual trees have been shown to exhibit strong relationships between DBH, height and volume. Often such studies are cited as justification for forest volume or standing biomass estimation through remote sensing. With resolution of common satellite remote sensing systems generally too low to resolve individuals, and a need for larger coverage, these systems rely on descriptive heights, which account for tree collections in forests. For remote sensing and allometric applications, this height is not entirely understood in terms of its location. Here, a forest growth model (SERA) analyzes forest canopy height relationships with forest wood volume. Maximum height, mean, H100, and Lorey's height are examined for variability under plant number density, resource and species. Our findings, shown to be allometrically consistent with empirical measurements for forested communities world-wide, are analyzed for implications to forest remote sensing techniques such as LiDAR and RADAR. Traditional forestry measures of maximum height, and to a lesser extent H100 and Lorey's, exhibit little consistent correlation with forest volume across modeled conditions. The implication is that using forest height to infer volume or biomass from remote sensing requires species and community behavioral information to infer accurate estimates using height alone. SERA predicts mean height to provide the most consistent relationship with volume of the height classifications studied and overall across forest variations. This prediction agrees with empirical data collected from conifer and angiosperm forests with plant densities ranging between 10²–10⁶ plants/hectare and heights 6–49 m. Height classifications investigated are potentially linked to radar scattering centers with implications for allometry. These findings may be used to advance forest biomass estimation accuracy through remote sensing. Furthermore, Lorey's height with its specific relationship to remote sensing physics is recommended as a more universal indicator of volume when using remote sensing than achieved using either maximum height or H100. PMID:22457800

  20. A macroecological analysis of SERA derived forest heights and implications for forest volume remote sensing.

    PubMed

    Brolly, Matthew; Woodhouse, Iain H; Niklas, Karl J; Hammond, Sean T

    2012-01-01

    Individual trees have been shown to exhibit strong relationships between DBH, height and volume. Often such studies are cited as justification for forest volume or standing biomass estimation through remote sensing. With resolution of common satellite remote sensing systems generally too low to resolve individuals, and a need for larger coverage, these systems rely on descriptive heights, which account for tree collections in forests. For remote sensing and allometric applications, this height is not entirely understood in terms of its location. Here, a forest growth model (SERA) analyzes forest canopy height relationships with forest wood volume. Maximum height, mean, H₁₀₀, and Lorey's height are examined for variability under plant number density, resource and species. Our findings, shown to be allometrically consistent with empirical measurements for forested communities world-wide, are analyzed for implications to forest remote sensing techniques such as LiDAR and RADAR. Traditional forestry measures of maximum height, and to a lesser extent H₁₀₀ and Lorey's, exhibit little consistent correlation with forest volume across modeled conditions. The implication is that using forest height to infer volume or biomass from remote sensing requires species and community behavioral information to infer accurate estimates using height alone. SERA predicts mean height to provide the most consistent relationship with volume of the height classifications studied and overall across forest variations. This prediction agrees with empirical data collected from conifer and angiosperm forests with plant densities ranging between 10²-10⁶ plants/hectare and heights 6-49 m. Height classifications investigated are potentially linked to radar scattering centers with implications for allometry. These findings may be used to advance forest biomass estimation accuracy through remote sensing. Furthermore, Lorey's height with its specific relationship to remote sensing physics is recommended as a more universal indicator of volume when using remote sensing than achieved using either maximum height or H₁₀₀.

  1. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
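
    The output-error flavor of this estimation can be sketched on a toy problem: simulate single-degree-of-freedom roll dynamics p_dot = Lp*p + Lda*da, then recover the two derivatives by minimizing the output error, which is the maximum likelihood solution under Gaussian measurement noise. The dynamics, values, and doublet input below are generic stand-ins, not the fighter's model.

```python
# Output-error (maximum likelihood) extraction of two stability derivatives.
import numpy as np
from scipy.optimize import minimize

DT, N = 0.02, 500
t = np.arange(N) * DT
da = np.where((t > 1.0) & (t < 1.5), 5.0, 0.0)        # aileron doublet (deg)

def simulate(Lp, Lda):
    p = np.zeros(N)
    for k in range(N - 1):                            # forward Euler integration
        p[k + 1] = p[k] + DT * (Lp * p[k] + Lda * da[k])
    return p

rng = np.random.default_rng(5)
p_meas = simulate(-2.0, 8.0) + rng.normal(0, 0.2, N)  # synthetic "flight data"

# Sum of squared output error = negative log-likelihood for Gaussian noise.
cost = lambda th: np.sum((p_meas - simulate(*th))**2)
res = minimize(cost, x0=[-1.0, 5.0], method="Nelder-Mead")
print("estimated Lp, Lda:", res.x)                    # ~[-2.0, 8.0]
```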

  2. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small-amplitude pulse maneuvers showed greater degradation of the derivative estimates than large-amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.

  3. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  4. Lahar hazard zones for eruption-generated lahars in the Lassen Volcanic Center, California

    USGS Publications Warehouse

    Robinson, Joel E.; Clynne, Michael A.

    2012-01-01

    Lahar deposits are found in drainages that head on or near Lassen Peak in northern California, demonstrating that these valleys are susceptible to future lahars. In general, lahars are uncommon in the Lassen region. Lassen Peak's lack of large perennial snowfields and glaciers limits its potential for lahar development, with the winter snowpack being the largest source of water for lahar generation. The most extensive lahar deposits are related to the May 1915 eruption of Lassen Peak, and evidence for pre-1915 lahars is sparse and spatially limited. The May 1915 eruption of Lassen Peak was a small-volume eruption that generated a snow and hot-rock avalanche, a pyroclastic flow, and two large and four smaller lahars. The two large lahars were generated on May 19 and 22 and inundated sections of Lost and Hat Creeks. We use 80 years of snow depth measurements from Lassen Peak to calculate average and maximum liquid water depths, 2.02 meters (m) and 3.90 m respectively, for the month of May as estimates of the water available for the 1915 lahars. These depths are multiplied by the areal extents of the eruptive deposits to calculate a water volume range, 7.05-13.6x106 cubic meters (m3). We assume the lahars were a 50/50 mix of water and sediment and double the water volumes to provide an estimate of the 1915 lahars, 13.2-19.8x106 m3. We use a representative volume of 15x106 m3 in the software program LAHARZ to calculate cross-sectional and planimetric areas for the 1915 lahars. The resultant lahar inundation zone reasonably portrays both of the May 1915 lahars. We use this same technique to calculate the potential for future lahars in basins that head on or near Lassen Peak. LAHARZ assumes that the total lahar volume does not change after leaving the potential energy, H/L, cone (the height of the edifice, H, down to the approximate break in slope at its base, L); therefore, all water available to initiate a lahar is contained inside this cone. Because snow is the primary source of water for lahar generation, we assume that the maximum historical water equivalent, 3.90 m, covers the entire basin area inside the H/L cone. The product of the planimetric area of each basin inside the H/L cone and the maximum historical water equivalent yields the maximum water volume available to generate a lahar. We then double the water volumes to approximate maximum lahar volumes. The maximum lahar volumes and an understanding of the statistical uncertainties inherent to the LAHARZ calculations guided our selection of six hypothetical volumes, 1, 3, 10, 30, 60, and 90x106 m3, to delineate concentric lahar inundation zones. The lahar inundation zones extend, in general, tens of kilometers away from Lassen Peak. The small, more-frequent lahar inundation zones (1 and 3x106 m3) are, on average, 10 km long. The exceptions are the zones in Warner Creek and Mill Creek, which extend much further. All but one of the small, more-frequent lahar inundation zones reach outside of the Lassen Volcanic National Park boundary, and the zone in Mill Creek extends well past the park boundary. All of the medium, moderately frequent lahar inundation zones (10 and 30x106 m3) extend past the park boundary and could potentially impact the communities of Viola and Old Station and State Highways 36 and 44, both north and west of Lassen Peak. The large, less-frequent lahar inundation zones (60 and 90x106 m3), approximately 27 km long on average, represent worst-case lahar scenarios that are unlikely to occur. Flood hazards continue downstream from the toes of the lahars, potentially affecting communities in the Sacramento River Valley.
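
    The volume bookkeeping above reduces to a few lines, shown in the sketch below together with the commonly published LAHARZ scaling of inundated areas (cross-sectional A = 0.05 V^(2/3), planimetric B = 200 V^(2/3)); those coefficients are an assumption here, not taken from this report, and the example basin area is hypothetical.

```python
# Snow-water -> lahar volume -> LAHARZ-style inundation areas.
WATER_EQUIV_M = 3.90                 # maximum historical May snow water equivalent (m)

def lahar_volume_m3(basin_area_km2):
    water = basin_area_km2 * 1e6 * WATER_EQUIV_M   # water volume inside the H/L cone
    return 2.0 * water                             # doubled for a 50/50 water/sediment mix

def laharz_areas(volume_m3):
    # Commonly published LAHARZ calibration (assumed here, not from this report).
    return 0.05 * volume_m3**(2 / 3), 200.0 * volume_m3**(2 / 3)

v = lahar_volume_m3(1.9)             # hypothetical 1.9 km^2 basin -> ~15e6 m^3
a_cross, a_plan = laharz_areas(v)
print(f"V = {v/1e6:.1f}e6 m^3, A = {a_cross:.0f} m^2, B = {a_plan/1e6:.1f} km^2")
```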

  5. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers.
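
    For reference alongside the Bayesian approaches discussed above, the frequentist DerSimonian and Laird moment estimator of the between-study variance tau^2 fits in a few lines; this is a generic implementation with made-up example effects, not the paper's code.

```python
# DerSimonian-Laird random-effects meta-analysis.
import numpy as np

def dersimonian_laird(effects, variances):
    w = 1.0 / np.asarray(variances)
    y = np.asarray(effects)
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed)**2)               # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)         # moment estimate, floored at 0
    w_star = 1.0 / (np.asarray(variances) + tau2)   # random-effects weights
    return tau2, np.sum(w_star * y) / np.sum(w_star)

tau2, mu = dersimonian_laird([0.10, 0.35, -0.05, 0.20], [0.04, 0.02, 0.05, 0.03])
print(f"tau^2 = {tau2:.3f}, pooled effect = {mu:.3f}")
```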

  6. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation

    PubMed Central

    Herrera, Ronald; Berger, Ursula; von Ehrenstein, Ondine S.; Díaz, Iván; Huber, Stella; Moraga Muñoz, Daniel; Radon, Katja

    2017-01-01

    In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children’s respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children’s home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: −4.7; 95% confidence interval (95% CI): −8.4; −0.11); and 4.2 percentage points (CAR: −4.2; 95% CI: −7.9;−0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies. PMID:29280971
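
    A minimal TMLE sketch for the average treatment effect of a binary exposure (here, a synthetic close-vs-far indicator) is shown below: an initial outcome regression, a propensity score, and the logistic fluctuation along the clever covariate. The published analysis is more elaborate, so treat this only as the bare targeting step on invented data.

```python
# Minimal TMLE for the ATE of a binary exposure on a binary outcome.
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 1000
W = rng.normal(size=(n, 2))                                  # confounders
A = rng.binomial(1, expit(0.5 * W[:, 0]))                    # exposure
Y = rng.binomial(1, expit(-1.0 + 0.8 * A + 0.5 * W[:, 1]))   # outcome

# Initial outcome regression Q(A, W) and propensity score g(W).
Qfit = LogisticRegression(C=1e6).fit(np.column_stack([A, W]), Y)
g = LogisticRegression(C=1e6).fit(W, A).predict_proba(W)[:, 1]
Q1 = Qfit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = Qfit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
QA = np.where(A == 1, Q1, Q0)

# Targeting step: fluctuate Q along the clever covariate H with offset logit(Q).
H = A / g - (1 - A) / (1 - g)
eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
             offset=logit(QA)).fit().params[0]
Q1_star = expit(logit(Q1) + eps / g)
Q0_star = expit(logit(Q0) - eps / (1 - g))
print("TMLE ATE estimate:", np.mean(Q1_star - Q0_star))
```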

  7. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation.

    PubMed

    Herrera, Ronald; Berger, Ursula; von Ehrenstein, Ondine S; Díaz, Iván; Huber, Stella; Moraga Muñoz, Daniel; Radon, Katja

    2017-12-27

    In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children's respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children's home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: −4.7; 95% confidence interval (95% CI): −8.4; −0.11); and 4.2 percentage points (CAR: −4.2; 95% CI: −7.9; −0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  8. Weather radar data correlate to hail-induced mortality in grassland birds

    USGS Publications Warehouse

    Carver, Amber; Ross, Jeremy D.; Augustine, David J.; Skagen, Susan K.; Dwyer, Angela M.; Tomback, Diana F.; Wunder, Michael B.

    2017-01-01

    Small-bodied terrestrial animals such as songbirds (Order Passeriformes) are especially vulnerable to hail-induced mortality; yet, hail events are challenging to predict, and they often occur in locations where populations are not being studied. Focusing on nesting grassland songbirds, we demonstrate a novel approach to estimate hail-induced mortality. We quantify the relationship between the probability of nests destroyed by hail and measured Level-III Next Generation Radar (NEXRAD) data, including atmospheric base reflectivity, maximum estimated size of hail and maximum estimated azimuthal wind shear. On 22 June 2014, a hailstorm in northern Colorado destroyed 102 out of 203 known nests within our research site. Lark bunting (Calamospiza melanocorys) nests comprised most of the sample (n = 186). Destroyed nests were more likely to be found in areas of higher storm intensity, and distributions of NEXRAD variables differed between failed and surviving nests. For 133 ground nests where nest-site vegetation was measured, we examined the ameliorative influence of woody vegetation, nest cover and vegetation density by comparing results for 13 different logistic regression models incorporating the independent and additive effects of weather and vegetation variables. The most parsimonious model used only the interactive effect of hail size and wind shear to predict the probability of nest survival, and the data provided no support for any of the models without this predictor. We conclude that vegetation structure may not mitigate mortality from severe hailstorms and that weather radar products can be used remotely to estimate potential for hail mortality of nesting grassland birds. These insights will improve the efficacy of grassland bird population models under predicted climate change scenarios.
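
    The shape of the selected model above, nest failure explained by the interaction of hail size and azimuthal wind shear alone, can be written as a single model formula. The data below are synthetic stand-ins and the variable names are descriptive choices, not the study's dataset.

```python
# Logistic regression with an interaction-only predictor, mirroring the
# most parsimonious model described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 203
df = pd.DataFrame({
    "hail_size": rng.uniform(0.0, 5.0, n),        # max estimated hail size (cm)
    "wind_shear": rng.uniform(0.0, 0.02, n),      # azimuthal wind shear (1/s)
})
linpred = -2.0 + 60.0 * df.hail_size * df.wind_shear
df["destroyed"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# "hail_size:wind_shear" fits only the interaction term, no main effects.
fit = smf.logit("destroyed ~ hail_size:wind_shear", data=df).fit(disp=0)
print(fit.summary().tables[1])
```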

  9. Modeling spatial-temporal dynamics of global wetlands: comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zimmermann, N. E.; Poulter, B.

    2015-11-01

    Simulations of the spatial-temporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models, such as TOPMODEL, are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus their performance against observations. This study describes new improvements to TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl dynamic global vegetation model (DGVM), and quantifies uncertainties by comparing the effects of three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS), which differ in spatial resolution and accuracy, on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland dataset can help to successfully delineate the seasonal and interannual variations of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatio-temporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ∼ 10.3 Mkm2 (106 km2), with a mean annual maximum of ∼ 5.17 Mkm2 for 1980-2010. This study demonstrates the feasibility of capturing spatial heterogeneity of inundation and estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.

  10. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    This study investigated the maximum depth at which juvenile Chinook salmon, Oncorhynchus tshawytscha, can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimates: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing the swim bladder and artificially increasing its volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon to be a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
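
    The depth numbers follow from Boyle's law: at roughly one additional atmosphere per 10 m, a bladder that can hold v_max of gas (surface-equivalent) while v_neutral is needed for neutral buoyancy supports acclimation to about d = 10 * (v_max / v_neutral - 1) metres. A worked sketch with illustrative volumes, not the study's measurements:

        def max_acclimation_depth_m(v_max_ml, v_neutral_ml):
            # Boyle's law with ~1 atm per 10 m of fresh water.
            return 10.0 * (v_max_ml / v_neutral_ml - 1.0)

        # A bladder expandable to 5 mL when 3 mL is needed at the surface
        # gives ~6.7 m, coincidentally the study's median estimate.
        print(round(max_acclimation_depth_m(5.0, 3.0), 1))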

  11. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
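
    Because the profile likelihood is quasi-concave in the antimode, the outer search can be any bracketing method for unimodal functions. A schematic outer loop (golden-section rather than plain bisection; the inner support-reduction maximisation is represented by a stand-in function):

        def maximise_profile(profile_loglik, lo, hi, tol=1e-6):
            """Locate the antimode maximising a quasi-concave profile."""
            golden = 0.5 * (5 ** 0.5 - 1)
            a, b = lo, hi
            while b - a > tol:
                x1 = b - golden * (b - a)
                x2 = a + golden * (b - a)
                # Keep the bracket containing the larger profile value.
                if profile_loglik(x1) < profile_loglik(x2):
                    a = x1
                else:
                    b = x2
            return 0.5 * (a + b)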

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, C.; Hanany, S.; Baccigalupi, C.

    We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency-dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor-to-scalar ratio of r = 0.05, and the instrumental configuration of the E and B Experiment balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less, and the polarization angle calibration has an uncertainty of up to 4°.

  13. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. This method shows great potential to be applied in practice, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
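
    The core of the pipeline is short: differentiate the charge curve to get the IC curve dQ/dV, then smooth it with a Gaussian filter before locating the features of interest. A minimal sketch with a toy charge curve (the sigma value and the curve itself are illustrative):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def incremental_capacity(voltage, capacity, sigma=3.0):
            dq_dv = np.gradient(capacity) / np.gradient(voltage)
            return gaussian_filter1d(dq_dv, sigma=sigma)

        v = np.linspace(3.0, 4.2, 500)
        q = 2.5 / (1.0 + np.exp(-(v - 3.7) / 0.05))   # toy charge curve (Ah)
        ic = incremental_capacity(v, q)
        print(v[ic.argmax()])   # position of the main FOI peak (~3.7 V)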

  14. Using multi-locus allelic sequence data to estimate genetic divergence among four Lilium (Liliaceae) cultivars

    PubMed Central

    Shahin, Arwa; Smulders, Marinus J. M.; van Tuyl, Jaap M.; Arens, Paul; Bakker, Freek T.

    2014-01-01

    Next Generation Sequencing (NGS) may enable estimating relationships among genotypes using allelic variation of multiple nuclear genes simultaneously. We explored the potential and caveats of this strategy in four genetically distant Lilium cultivars to estimate their genetic divergence from transcriptome sequences using three approaches: POFAD (Phylogeny of Organisms from Allelic Data, which uses allelic information of sequence data), RAxML (Randomized Accelerated Maximum Likelihood, tree building based on concatenated consensus sequences) and Consensus Network (constructing a network summarizing among-gene-tree conflicts). Twenty-six gene contigs were chosen based on the presence of orthologous sequences in all cultivars, seven of which also had an orthologous sequence in Tulipa, used as the outgroup. The three approaches generated the same topology. Although the resolution offered by these approaches is high, in this case there was no extra benefit in using allelic information. We conclude that these 26 genes can be widely applied to construct a species tree for the genus Lilium. PMID:25368628

  15. A strategy to facilitate cleanup at the Mare Island Naval Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, J.; Albert, D.

    1995-12-31

    A strategy based on an early realistic estimation of ecological risk was devised to facilitate cleanup of installation restoration units at the Mare Island Naval Station. The strategy uses the results of 100 years of soil-plant studies, which centered on maximizing the bioavailability of nutrients for crop growth. The screening strategy classifies sites according to whether they present (1) little or no ecological risk and require no further action, (2) an immediate and significant risk, or (3) an ecological risk that requires further quantification. The strategy assumes that the main focus of screening-level risk assessment is quantification of the potential for abiotic-to-biotic transfer (bioavailability) of contaminants, especially at lower trophic levels where exposure is likely to be at a maximum. Sediment screening criteria developed by the California Environmental Protection Agency are used as one regulatory endpoint for evaluating total chemical concentrations. A realistic estimation of risk is then determined by estimating the bioavailability of contaminants.

  16. Assessing the Variability of Heavy Metal Concentrations in Liquid-Solid Two-Phase and Related Environmental Risks in the Weihe River of Shaanxi Province, China

    PubMed Central

    Song, Jinxi; Yang, Xiaogang; Zhang, Junlong; Long, Yongqing; Zhang, Yan; Zhang, Taifan

    2015-01-01

    Accurate estimation of the variability of heavy metals in river water and the hyporheic zone is crucial for pollution control and environmental management. The biotoxicities and potential ecological risks of heavy metals (Cu, Zn, Pb, Cd) in a solid-liquid two-phase system were estimated using the Geo-accumulation Index, Potential Ecological Risk Assessment and Quality Standard Index methods in the Weihe River of Shaanxi Province, China. Water and sediment samples were collected from five study sites during spring, summer and winter, 2013. The dominant species in the streambed sediments were chironomids and flutter earthworm, whose bioturbation mainly ranged from 0 to 20 cm. The concentrations of heavy metals in surface water and pore water varied markedly in spring and summer. The concentrations of Cu and Cd in spring and summer were higher than the U.S. water quality Criteria Maximum Concentrations. Furthermore, the biotoxicities of Pb and Zn demonstrated seasonal-spatial variations. The concentrations of Cu, Zn, Pb and Cd in spring and winter were significantly higher than those in summer, and the pollution levels also varied markedly among different layers of the sediments. Moreover, the pollution level of Cd was the most serious, as estimated by all three assessment methods. PMID:26193293
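
    Of the three methods, the Geo-accumulation Index has the simplest closed form, conventionally Igeo = log2(Cn / (1.5 * Bn)), where Cn is the measured sediment concentration and Bn the geochemical background value (the factor 1.5 absorbs natural background variation). A one-line sketch with illustrative numbers:

        import math

        def igeo(c_measured, c_background):
            return math.log2(c_measured / (1.5 * c_background))

        print(igeo(c_measured=0.9, c_background=0.3))  # -> 1.0 (moderately polluted range)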

  17. Cancer risk from incidental ingestion exposures to PAHs associated with coal-tar-sealed pavement

    USGS Publications Warehouse

    Williams, E. Spencer; Mahler, Barbara J.; Van Metre, Peter C.

    2012-01-01

    Recent (2009-10) studies documented significantly higher concentrations of polycyclic aromatic hydrocarbons (PAHs) in settled house dust in living spaces and soil adjacent to parking lots sealed with coal-tar-based products. To date, no studies have examined the potential human health effects of PAHs from these products in dust and soil. Here we present the results of an analysis of potential cancer risk associated with incidental ingestion exposures to PAHs in settings near coal-tar-sealed pavement. Exposures to benzo[a]pyrene equivalents were characterized across five scenarios. The central tendency estimate of excess cancer risk resulting from lifetime exposures to soil and dust from nondietary ingestion in these settings exceeded 1 × 10^-4, as determined using deterministic and probabilistic methods. Soil was the primary driver of risk, but according to probabilistic calculations, reasonable maximum exposure to affected house dust in the first 6 years of life was sufficient to generate an estimated excess lifetime cancer risk of 6 × 10^-5. Our results indicate that the presence of coal-tar-based pavement sealants is associated with significant increases in estimated excess lifetime cancer risk for nearby residents. Much of this calculated excess risk arises from exposures to PAHs in early childhood (i.e., 0-6 years of age).
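
    The deterministic arithmetic behind such estimates is the standard chronic-daily-intake times slope-factor calculation. A hedged sketch; every number below is a placeholder, not a value from the study:

        def excess_lifetime_risk(conc_mg_kg, intake_kg_day, ef_d_yr,
                                 ed_yr, bw_kg, at_days, slope_factor):
            # Chronic daily intake (mg per kg body weight per day).
            cdi = (conc_mg_kg * intake_kg_day * ef_d_yr * ed_yr) / (bw_kg * at_days)
            return cdi * slope_factor

        # Illustrative: child ingesting 100 mg/day of soil at 5 mg/kg
        # BaP-equivalents, 350 d/yr for 6 yr, 15 kg body weight, 70-yr
        # averaging time, oral slope factor 7.3 (mg/kg-day)^-1.
        print(excess_lifetime_risk(5.0, 1e-4, 350, 6, 15, 70 * 365, 7.3))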

  18. Evaluation Seismicity west of Block-Lut for Deterministic Seismic Hazard Assessment of Shahdad, Iran

    NASA Astrophysics Data System (ADS)

    Ney, B.; Askari, M.

    2009-04-01

    Seismic hazard assessment has been carried out for the city of Shahdad in this study, and four map sheets (Kerman, Bam, Nakhil Ab, Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. Deterministic seismic hazard assessment has been performed for a region in eastern Iran (Shahdad) based on the available geological, seismological and geophysical information, and a seismic zoning map of the region has been constructed. First, a seismotectonic map of the study region within a radius of 100 km was prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it is used as the base map for delineation of potential seismic sources. After that, the minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude for each source were determined. According to the results, the peak ground acceleration at Shahdad, estimated using the Fukushima & Tanaka (1990) attenuation relationship, is 0.58 g; this value is associated with movement of the Nayband fault at a distance of 2.4 km from the site and a maximum magnitude Ms = 7.5.
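
    The reported value can be reproduced from the commonly quoted form of the Fukushima & Tanaka (1990) attenuation relation, log10 A = 0.41 Ms - log10(R + 0.032 * 10^(0.41 Ms)) - 0.0034 R + 1.30, with A in cm/s2 and R in km (this form is taken from the general literature, not from the paper itself):

        import math

        def pga_g(ms, r_km):
            log_a = (0.41 * ms
                     - math.log10(r_km + 0.032 * 10 ** (0.41 * ms))
                     - 0.0034 * r_km + 1.30)          # A in cm/s^2
            return 10 ** log_a / 981.0                # convert to g

        print(round(pga_g(7.5, 2.4), 2))  # ~0.59, the reported 0.58 g within rounding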

  19. Phylogenetic estimation and morphological evolution of Arundinarieae (Bambusoideae: Poaceae) based on plastome phylogenomic analysis.

    PubMed

    Attigala, Lakshmi; Wysocki, William P; Duvall, Melvin R; Clark, Lynn G

    2016-08-01

    We explored phylogenetic relationships among the twelve lineages of the temperate woody bamboo clade (tribe Arundinarieae) based on plastid genome (plastome) sequence data. A representative sample of 28 taxa was used, and maximum parsimony, maximum likelihood and Bayesian inference analyses were conducted to estimate the Arundinarieae phylogeny. All the previously recognized clades of Arundinarieae were supported, with Ampelocalamus calcareus (Clade XI) as sister to the rest of the temperate woody bamboos. Well supported sister relationships between Bergbambos tessellata (Clade I) and Thamnocalamus spathiflorus (Clade VII) and between Kuruna (Clade XII) and Chimonocalamus (Clade III) were revealed by the current study. The plastome topology was tested by taxon removal experiments and alternative hypothesis testing, and the results supported the current plastome phylogeny as robust. Neighbor-net analyses showed few phylogenetic signal conflicts, but suggested some potentially complex relationships among these taxa. Analyses of morphological character evolution of rhizomes and reproductive structures revealed that pachymorph rhizomes were most likely the ancestral state in Arundinarieae. In contrast, leptomorph rhizomes either evolved once with reversions to the pachymorph condition or multiple times in Arundinarieae. Further, pseudospikelets evolved independently at least twice in the Arundinarieae, but the ancestral state is ambiguous. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Dynamic Propagation Channel Characterization and Modeling for Human Body Communication

    PubMed Central

    Nie, Zedong; Ma, Jingjing; Li, Zhicheng; Chen, Hong; Wang, Lei

    2012-01-01

    This paper presents the first characterization and modeling of dynamic propagation channels for human body communication (HBC). In-situ experiments were performed using customized transceivers in an anechoic chamber. Three HBC propagation channels, i.e., from right leg to left leg, from right hand to left hand and from right hand to left leg, were investigated under thirty-three motion scenarios. Snapshots of data (2,800,000) were acquired from five volunteers. Various path gains caused by different locations and movements were quantified and the statistical distributions were estimated. In general, for a given reference threshold of −10 dB, the maximum average level crossing rate of the HBC was approximately 1.99 Hz, the maximum average fade time was 59.4 ms, and the percentage of bad channel duration time was less than 4.16%. The HBC exhibited a fade depth of −4 dB at 90% complementary cumulative probability. The statistical parameters were observed to be centered for each propagation channel. Subsequently, a Fritchman model was implemented to estimate the burst characteristics of the on-body fading. It was concluded that the HBC is motion-insensitive, which is sufficient for a reliable communication link during motion, and therefore it has great potential for body sensor/area networks. PMID:23250278

  1. Dynamic propagation channel characterization and modeling for human body communication.

    PubMed

    Nie, Zedong; Ma, Jingjing; Li, Zhicheng; Chen, Hong; Wang, Lei

    2012-12-18

    This paper presents the first characterization and modeling of dynamic propagation channels for human body communication (HBC). In-situ experiments were performed using customized transceivers in an anechoic chamber. Three HBC propagation channels, i.e., from right leg to left leg, from right hand to left hand and from right hand to left leg, were investigated under thirty-three motion scenarios. Snapshots of data (2,800,000) were acquired from five volunteers. Various path gains caused by different locations and movements were quantified and the statistical distributions were estimated. In general, for a given reference threshold of -10 dB, the maximum average level crossing rate of the HBC was approximately 1.99 Hz, the maximum average fade time was 59.4 ms, and the percentage of bad channel duration time was less than 4.16%. The HBC exhibited a fade depth of -4 dB at 90% complementary cumulative probability. The statistical parameters were observed to be centered for each propagation channel. Subsequently, a Fritchman model was implemented to estimate the burst characteristics of the on-body fading. It was concluded that the HBC is motion-insensitive, which is sufficient for a reliable communication link during motion, and therefore it has great potential for body sensor/area networks.
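
    The two headline fading statistics can be computed directly from a sampled path-gain trace: the level crossing rate counts entries into a fade per second, and the average fade duration is the total time below the threshold divided by the number of fades. A sketch on a synthetic Rayleigh-like trace (the trace and sampling rate are illustrative, not the paper's measurements):

        import numpy as np

        def lcr_afd(gain_db, thresh_db, fs_hz):
            below = gain_db < thresh_db
            fades = np.count_nonzero(~below[:-1] & below[1:])  # downward crossings
            lcr = fades * fs_hz / len(gain_db)                 # crossings per second
            afd = below.mean() / lcr if fades else float("inf")
            return lcr, afd

        gain_db = 10 * np.log10(np.random.rayleigh(1.0, 10_000) ** 2 / 2)
        print(lcr_afd(gain_db, thresh_db=-10.0, fs_hz=100.0))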

  2. Potential impact of predicted sea level rise on carbon sink function of mangrove ecosystems with special reference to Negombo estuary, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Perera, K. A. R. S.; De Silva, K. H. W. L.; Amarasinghe, M. D.

    2018-02-01

    Their unique location at the land-sea interface makes mangrove ecosystems highly vulnerable to the impacts of predicted sea level rise due to increasing anthropogenic CO2 emissions. Among others, the carbon sink function of these tropical ecosystems, which contributes to reducing rising atmospheric CO2 and temperature, could potentially be affected most. The present study was undertaken to explore the extent of the impact of the predicted sea level rise for the region on total organic carbon (TOC) pools of the mangrove ecosystems in Negombo estuary, located on the west coast of Sri Lanka. Extents of coastal inundation under the minimum (0.09 m) and maximum (0.88 m) IPCC sea level rise scenarios for 2100 and an intermediate level of 0.48 m were determined with GIS tools. The estimated total capacity of organic carbon retention by these mangrove areas was 499.45 Mg C ha-1, of which 84% (418.98 Mg C ha-1) is sequestered in the mangrove soil and 16% (80.56 Mg C ha-1) in the vegetation. The total extent of land area potentially affected by inundation under the lowest sea level rise scenario was 218.9 ha, while it was 476.2 ha under the intermediate rise and 696.0 ha with the predicted maximum sea level rise. The estimated rate of loss of carbon sink function due to inundation by a sea level rise of 0.09 m is 6.30 Mg C ha-1 y-1, while the intermediate sea level rise indicated a loss of 9.92 Mg C ha-1 y-1; under the maximum sea level rise scenario, this loss further increases up to 11.32 Mg C ha-1 y-1. Adaptation of mangrove plants to withstand inundation and landward migration, along with escalated photosynthetic rates augmented by changing rainfall patterns and availability of nutrients, may contribute to reducing the rate of loss of carbon sink function of these mangrove ecosystems. Predictions of change in the carbon sequestration function of mangroves in Negombo estuary reveal that it is affected not only by oceanographic and hydrological alterations associated with sea level rise but also by anthropogenic processes; the impacts are therefore site specific in terms of distribution and magnitude.

  3. Transportation Energy Futures Series: Effects of Travel Reduction and Efficient Driving on Transportation: Energy Use and Greenhouse Gas Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, C. D.; Brown, A.; DeFlorio, J.

    2013-03-01

    Since the 1970s, numerous transportation strategies have been formulated to change the behavior of drivers or travelers by reducing trips, shifting travel to more efficient modes, or improving the efficiency of existing modes. This report summarizes findings documented in existing literature to identify strategies with the greatest potential impact. The estimated effects of implementing the most significant and aggressive individual driver behavior modification strategies range from less than 1% to a few percent reduction in transportation energy use and GHG emissions. Combined strategies result in reductions of 7% to 15% by 2030. Pricing, ridesharing, eco-driving, and speed limit reduction/enforcement strategies are widely judged to have the greatest estimated potential effect, but lack the widespread public acceptance needed to accomplish maximum results. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  4. Transportation Energy Futures Series. Effects of Travel Reduction and Efficient Driving on Transportation. Energy Use and Greenhouse Gas Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, C. D.; Brown, A.; DeFlorio, J.

    2013-03-01

    Since the 1970s, numerous transportation strategies have been formulated to change the behavior of drivers or travelers by reducing trips, shifting travel to more efficient modes, or improving the efficiency of existing modes. This report summarizes findings documented in existing literature to identify strategies with the greatest potential impact. The estimated effects of implementing the most significant and aggressive individual driver behavior modification strategies range from less than 1% to a few percent reduction in transportation energy use and GHG emissions. Combined strategies result in reductions of 7% to 15% by 2030. Pricing, ridesharing, eco-driving, and speed limit reduction/enforcement strategies are widely judged to have the greatest estimated potential effect, but lack the widespread public acceptance needed to accomplish maximum results. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  5. Hurricane Properties for KSC and Mid-Florida Coastal Sites

    NASA Technical Reports Server (NTRS)

    Johnson, Dale L.; Rawlins, Michael A.; Kross, Dennis A.

    2000-01-01

    Hurricane information and climatologies are needed at Kennedy Space Center (KSC), Florida, for launch operational planning purposes during the late summer and early fall Atlantic hurricane season. These results are also needed for estimating the potential magnitudes of hurricane and tropical storm impacts on coastal Florida sites when storms pass within 50, 100 and 400 nm of a site. Roll-backs of the Space Shuttle and other launch vehicles on the pad are very costly when a tropical storm approaches, and a decision for the vehicle to roll back or ride out must be made. Therefore, the historical Atlantic basin hurricane climatological properties were generated for use in operational planning and in estimating potential damage to launch vehicles, supporting equipment, buildings, etc. The historical 1885-1998 Atlantic basin hurricane data were compiled and analyzed with respect to the coastal Florida site of KSC. Statistical information generated includes hurricane and tropical storm probabilities for path, maximum wind, and lowest pressure, presented for the areas within 50, 100 and 400 nm of KSC. These statistics are then compared to similar parametric statistics for the entire Atlantic basin.

  6. Assessment of elimination profile of albendazole residues in fish.

    PubMed

    Busatto, Zenaís; de França, Welliton Gonçalves; Cyrino, José Eurico Possebon; Paschoal, Jonas Augusto Rizzato

    2018-01-01

    Few drugs are specifically regulated for aquaculture. This study thus considered albendazole (ABZ) as a potential drug for use in fish, although it is not yet regulated for this application. ABZ is a broad-spectrum anthelmintic approved for farmed ruminants and recently considered for treatment of fish parasites. It is the subject of careful monitoring because of potential residues in animal products. This study evaluated the depletion of ABZ and its main known metabolites (albendazole sulfoxide, ABZSO; albendazole sulfone, ABZSO2; and albendazole amino sulfone, ABZ-2-NH2SO2) in the fillets of the Neotropical characin pacu, Piaractus mesopotamicus, fed diets containing 10 mg ABZ kg-1 body weight in a single dose. Fish were euthanised at 8, 12, 24, 48, 72, 96 and 120 hours after medication, and the depletion profiles of ABZ, each metabolite and the sum of all marker residues were assessed, taking into account methodological variations in the maximum residue limits adopted by different international regulating agencies for estimation of the withdrawal period (WP). The estimated WPs ranged from 2 to 7 days.

  7. Radar and Lidar Radar DEM

    NASA Technical Reports Server (NTRS)

    Liskovich, Diana; Simard, Marc

    2011-01-01

    Using radar and lidar data, we aim to improve 3D rendering of terrain, including digital elevation models (DEM) and estimates of vegetation height and biomass, in a variety of forest types and terrains. The 3D mapping of vegetation structure and its analysis are useful for determining the role of forests in climate change (carbon cycle), in providing habitat, and as providers of socio-economic services. This in turn will lead to potential for development of more effective land-use management. The first part of the project was to characterize the Shuttle Radar Topography Mission DEM error with respect to ICESat/GLAS point estimates of elevation. We investigated potential trends with latitude, canopy height, signal-to-noise ratio (SNR), number of lidar waveform peaks, and maximum peak width. Scatter plots were produced for each variable and were fitted with 1st- and 2nd-degree polynomials. Higher-order trends were visually inspected through filtering with mean and median filters. We also assessed trends in the DEM error variance. Finally, a map was created showing how DEM error is distributed geographically across the globe.

  8. Development and application of the maximum entropy method and other spectral estimation techniques

    NASA Astrophysics Data System (ADS)

    King, W. R.

    1980-09-01

    This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique. It also contains a description of two new, stable, high-resolution spectral estimation techniques, provided in the final report section. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final report section are named the Wiener-King and the Fourier spectral estimation techniques. The two new techniques have a similar derivation based upon the Wiener prediction filter, but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics is recommended for subsequent investigation.

  9. Hopping and band mobilities of pentacene, rubrene, and 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) from first principle calculations.

    PubMed

    Kobayashi, Hajime; Kobayashi, Norihito; Hosoi, Shizuka; Koshitani, Naoki; Murakami, Daisuke; Shirasawa, Raku; Kudo, Yoshihiro; Hobara, Daisuke; Tokita, Yuichi; Itabashi, Masao

    2013-07-07

    Hopping and band mobilities of holes in organic semiconductors at room temperature were estimated from first-principles calculations. Relaxation times of charge carriers were evaluated using the acoustic deformation potential model. It is found that van der Waals interactions play an important role in determining accurate relaxation times. The hopping mobilities of pentacene, rubrene, and 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) in bulk single-crystalline structures were found to be smaller than 4 cm2/Vs, whereas the band mobilities were estimated between 36 and 58 cm2/Vs, which are close to the maximum reported experimental values. This strongly suggests that band conductivity is dominant in these materials even at room temperature.
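
    Estimates of this kind typically rest on the Bardeen-Shockley acoustic deformation potential expression for the relaxation time and mobility; the abstract does not reproduce the paper's exact formulas, so the standard 3D form is quoted here as an assumption:

        \mu_{\mathrm{ADP}} = \frac{2\sqrt{2\pi}\, e\, \hbar^{4} C_{ii}}
                                  {3\, (m^{*})^{5/2} (k_{\mathrm{B}} T)^{3/2} E_{1}^{2}}

    where C_ii is the elastic constant along the transport direction, m* the carrier effective mass, T the temperature, and E_1 the acoustic deformation potential.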

  10. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2013-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month through its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.
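
    The L-moment step mentioned above starts from sample probability-weighted moments of the annual maximum series. A minimal sketch (the synthetic series stands in for a station record):

        import numpy as np

        def sample_l_moments(x):
            """Return l1 (mean), l2 (L-scale) and t3 (L-skewness)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            j = np.arange(n)
            b0 = x.mean()
            b1 = np.sum(j * x) / (n * (n - 1))
            b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
            l2 = 2 * b1 - b0
            t3 = (6 * b2 - 6 * b1 + b0) / l2
            return b0, l2, t3

        ams = np.random.gumbel(loc=50.0, scale=12.0, size=60)  # synthetic annual maxima (mm)
        print(sample_l_moments(ams))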

  11. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2011-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month through its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, selected statistics obtained in determining and parameterizing the probability distribution functions, and the potential implications for engineering design of recently published estimates.

  12. A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals

    NASA Technical Reports Server (NTRS)

    Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.

    1988-01-01

    This paper explores the applicability of the area-time-integral (ATI) technique for estimating the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, the ATIs were computed from digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) no less than 25 dBZ, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that correlations between total rain volume or maximum rain volume and ATIA were the strongest. The uncertainties obtained were comparable to those which typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcasted at its maximum stage of development.

  13. On the Mechanism for a Gravity Effect Using Type 2 Superconductors

    NASA Technical Reports Server (NTRS)

    Robertson, Glen A.

    1999-01-01

    In this paper, we formulate a percent mass change equation based on Woodward's transient mass shift and the Cavendish balance equations applied to superconductor Josephson junctions. A correction to the transient mass shift equation is presented due to the emission of the mass energy from the superconductor. The percentage of mass change predicted by the equation was estimated against the maximum percent mass change reported by Podkletnov in his gravity shielding experiments. An experiment is then discussed which could shed light on the transient mass shift near a superconductor and verify the corrected gravitational potential.

  14. Evolution of the climatic tolerance and postglacial range changes of the most primitive orchids (Apostasioideae) within Sundaland, Wallacea and Sahul

    PubMed Central

    Mystkowska, Katarzyna; Kras, Marta; Dudek, Magdalena

    2016-01-01

    The location of possible glacial refugia of six Apostasioideae representatives is estimated based on ecological niche modeling analysis. The distribution of their suitable niches during the last glacial maximum (LGM) is compared with their current potential and documented geographical ranges. The climatic factors limiting the studied species' occurrences are evaluated, and the niche overlap between the studied orchids is assessed and discussed. The predicted niche occupancy profiles and reconstruction of ancestral climatic tolerances suggest a high level of phylogenetic niche conservatism within Apostasioideae. PMID:27635348

  15. Sequence editing by Apolipoprotein B RNA-editing catalytic component-B and epidemiological surveillance of transmitted HIV-1 drug resistance

    PubMed Central

    Gifford, Robert J.; Rhee, Soo-Yon; Eriksson, Nicolas; Liu, Tommy F.; Kiuchi, Mark; Das, Amar K.; Shafer, Robert W.

    2008-01-01

    Design: Promiscuous guanine (G) to adenine (A) substitutions catalysed by apolipoprotein B RNA-editing catalytic component (APOBEC) enzymes are observed in a proportion of HIV-1 sequences in vivo and can introduce artifacts into some genetic analyses. The potential impact of undetected lethal editing on genotypic estimation of transmitted drug resistance was assessed. Methods: Classifiers of lethal, APOBEC-mediated editing were developed by analysis of lentiviral pol gene sequence variation and evaluated using control sets of HIV-1 sequences. The potential impact of sequence editing on genotypic estimation of drug resistance was assessed in sets of sequences obtained from 77 studies of 25 or more therapy-naive individuals, using mixture modelling approaches to determine the maximum likelihood classification of sequences as lethally edited as opposed to viable. Results: Analysis of 6437 protease and reverse transcriptase sequences from therapy-naive individuals using a novel classifier of lethal, APOBEC3G-mediated sequence editing, the APOBEC3G-mediated defectives (A3GD) index, detected lethal editing in association with spurious 'transmitted drug resistance' in nearly 3% of proviral sequences obtained from whole blood and 0.2% of samples obtained from plasma. Conclusion: Screening for lethally edited sequences in datasets containing a proportion of proviral DNA, such as those likely to be obtained for epidemiological surveillance of transmitted drug resistance in the developing world, can eliminate rare but potentially significant errors in genotypic estimation of transmitted drug resistance. PMID:18356601

  16. Climatic significance of the ostracode fauna from the Pliocene Kap Kobenhavn Formation, north Greenland

    USGS Publications Warehouse

    Brouwers, E.M.; Jorgensen, N.O.; Cronin, T. M.

    1991-01-01

    The Kap Kobenhavn Formation crops out in Greenland at 80°N latitude and marks the most northerly onshore Pliocene locality known. The sands and silts that comprise the formation were deposited in marginal marine and shallow marine environments. An abundant and diverse vertebrate and invertebrate fauna and plant megafossil flora provide age and paleoclimatic constraints. The age estimated for the Kap Kobenhavn ranges from 2.0 to 3.0 million years. Winter and summer bottom water paleotemperatures were estimated on the basis of the ostracode assemblages. The marine ostracode fauna in units B1 and B2 indicate a subfrigid to frigid marine climate, with estimated minimum sea bottom temperatures (SBT) of -2°C and estimated maximum SBT of 6-8°C. Sediments assigned to unit B2 at locality 72 contain a higher proportion of warm water genera, and the maximum SBT is estimated at 9-10°C. The marginal marine fauna in the uppermost unit B3 (locality 68) indicates a cold temperate to subfrigid marine climate, with an estimated minimum SBT of -2°C and an estimated maximum SBT ranging as high as 12-14°C. These temperatures indicate that, on average, the Kap Kobenhavn winters in the late Pliocene were similar to or perhaps 1-2°C warmer than winters today and that summer temperatures were 7-8°C warmer than today. -from Authors

  17. Conservation Tillage on the Loess Plateau, China: Food security, Yes; Carbon sequestration, No?

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus; Hu, Yaxian; Xiao, Liangang; Greenwood, Phil; Bloemertz, Lena

    2015-04-01

    Climate change is expected to affect food security globally and increase the variability in food supply. At the same time, agricultural practices offer great potential for mitigating and adapting to climate change. In China, food security has increased in recent decades, with the number of undernourished people declining from 21% in 1990 to 12% today. However, the limited amount of arable land and scarce water supplies will remain a challenge. The Loess Plateau of China, located in the mid-upper reaches of the Yellow River, covers some 630,000 km2 and has high agricultural potential. However, due to heavy summer rainstorms, steep slopes, low vegetation cover, and highly erodible soils, the Loess Plateau has become one of the most severely eroded areas in the world. Up to 70% of arable land is affected by an annual soil loss of 20-25 t ha-1, far exceeding the threshold for sustainable use (10 t ha-1). Rainfed farming systems are dominant on the Loess Plateau, and farmers in this area have been exposed to steadily increasing temperatures as well as erratic, slightly decreasing rainfall since 1970. Therefore, the regional agriculture must adapt to climate change and may even engage in mitigation. This study analyzed the potential contribution of conservation tillage to adaptation and mitigation of climate change on the Loess Plateau. In total, 15 papers published in English were reviewed, comparing two tillage practices: conventional tillage (CT) and conservation tillage, typically represented by no-tillage (NT). Soil organic carbon (SOC) stocks across soil depths, as well as yields and their inter-annual variations with respect to annual rainfall, were compared for NT and CT. Our results show that: 1) The benefit of NT compared to CT in terms of increasing total SOC stocks diminishes with soil depth, questioning the use of average SOC stocks observed in topsoil to estimate the potential of NT to increase SOC stocks and reduce net CO2 emissions. 2) In each soil layer, the total SOC stocks also declined over time. Such a decreasing trend suggests that the SOC sink was approaching its maximum capacity. This implies that the overall potential of NT to improve SOC stocks is apt to be over-estimated if annual increases derived from short-term observations are linearly extrapolated to a long-term estimation. 3) Yields under NT increased by 11.07% compared to CT. In particular, during years with precipitation <500 mm, NT yields were 18% higher than for conventional tillage. Such greater yields reduce the probability of food production falling below minimum thresholds to meet subsistence requirements, thereby increasing resilience to famine. Overall, conservation tillage (no-till) has great potential to stabilize crop yields and thus ensure local subsistence requirements on the China Loess Plateau. However, the potential of NT to sequester SOC is more limited than often reported and has a maximum capacity, and thus cannot be linearly extrapolated to estimate its effects on mitigating climate change.

  18. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  19. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of one modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested two additional methods. One of these two new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
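
    For contrast with the newer estimators, the classical Mayfield calculation referenced above fits in two lines: the daily survival rate is one minus losses per exposure-day, and interval survival is that rate raised to the interval length. Numbers are illustrative:

        def mayfield_dsr(losses, exposure_days):
            return 1.0 - losses / exposure_days

        dsr = mayfield_dsr(losses=12, exposure_days=480)
        print(dsr, dsr ** 28)   # daily survival and 28-day nest survival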

  20. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
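
    The Poisson special case is easy to demonstrate: fit a main-terms Poisson working model to randomized data generated from a different (misspecified) mechanism, and the treatment coefficient still recovers the marginal log rate ratio. A self-contained sketch under assumed toy parameters:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 20_000
        treat = rng.integers(0, 2, n)
        baseline = rng.normal(size=n)
        # True mechanism is nonlinear in the baseline variable, so the
        # main-terms working model below is deliberately misspecified.
        y = rng.poisson(np.exp(0.3 * treat + 0.5 * np.tanh(baseline)))

        X = sm.add_constant(np.column_stack([treat, baseline]))
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(fit.params[1])   # ~0.3, the marginal log rate ratio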

  1. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  2. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
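
    The classical iterative proportional fitting loop that such algorithms modify alternately rescales the table to match each set of marginal sums. A minimal sketch with made-up margins:

        import numpy as np

        def ipf(table, row_margins, col_margins, iters=100):
            t = table.astype(float).copy()
            for _ in range(iters):
                t *= (row_margins / t.sum(axis=1))[:, None]  # match row sums
                t *= col_margins / t.sum(axis=0)             # match column sums
            return t

        print(ipf(np.ones((2, 2)), np.array([30.0, 70.0]),
                  np.array([40.0, 60.0])))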

  3. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  4. Estimations of relative effort during sit-to-stand increase when accounting for variations in maximum voluntary torque with joint angle and angular velocity.

    PubMed

    Bieryla, Kathleen A; Anderson, Dennis E; Madigan, Michael L

    2009-02-01

    The main purpose of this study was to compare three methods of determining relative effort during sit-to-stand (STS). Fourteen young (mean ± SD, 19.6 ± 1.2 years old) and 17 older (61.7 ± 5.5 years old) adults completed six STS trials at three speeds: slow, normal, and fast. Sagittal plane joint torques at the hip, knee, and ankle were calculated through inverse dynamics. Isometric and isokinetic maximum voluntary contractions (MVC) for the hip, knee, and ankle were collected and used as model parameters to predict participant-specific maximum voluntary joint torques. Three different measures of relative effort were determined by normalizing STS joint torques to three different estimates of maximum voluntary torque. Relative efforts at the hip, knee, and ankle were higher when accounting for variations in maximum voluntary torque with joint angle and angular velocity (hip = 26.3 ± 13.5%, knee = 78.4 ± 32.2%, ankle = 27.9 ± 14.1%) compared to methods which do not account for these variations (hip = 23.5 ± 11.7%, knee = 51.7 ± 15.0%, ankle = 20.7 ± 10.4%). At higher velocities, the difference in calculating relative effort with respect to isometric MVC or incorporating joint angle and angular velocity became more evident. Estimates of relative effort that account for the variations in maximum voluntary torque with joint angle and angular velocity may provide higher levels of accuracy compared to methods based on measurements of maximal isometric torques.
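
    The comparison reduces to how the denominator of relative effort is built: a fixed isometric MVC, or an MVC scaled down for the instantaneous joint angle and angular velocity. A toy sketch (the scaling factors stand in for participant-specific torque-angle-velocity models):

        def relative_effort(task_torque, mvc_isometric,
                            angle_factor=1.0, velocity_factor=1.0):
            # Factors <= 1 shrink the available torque away from the
            # optimum angle and at higher shortening velocities.
            return task_torque / (mvc_isometric * angle_factor * velocity_factor)

        print(relative_effort(80.0, 160.0))            # 0.50 vs fixed isometric MVC
        print(relative_effort(80.0, 160.0, 0.9, 0.8))  # ~0.69 state-adjusted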

  5. Potential Population Consequences of Active Sonar Disturbance in Atlantic Herring: Estimating the Maximum Risk.

    PubMed

    Sivle, Lise Doksæter; Kvadsheim, Petter Helgevold; Ainslie, Michael

    2016-01-01

    Effects of noise on fish populations may be predicted by the population consequence of acoustic disturbance (PCAD) model. We have predicted the potential risk of population disturbance when the highest sound exposure level (SEL) at which adult herring do not respond to naval sonar (SEL0) is exceeded. When the population density is low (feeding), the risk is low even at high sonar source levels and long-duration exercises (>24 h). With densely packed populations (overwintering), a sonar exercise might expose the entire population to levels >SEL0 within a 24-h exercise period. However, the disturbance will be short, and the response threshold used here is highly conservative. It is therefore unlikely that naval sonar will significantly impact the herring population.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chicoine, T.K.; Fay, P.K.; Nielsen, G.A.

    Soil characteristics, elevation, annual precipitation, potential evapotranspiration, length of frost-free season, and mean maximum July temperature were estimated for 116 established infestations of spotted knapweed (Centaurea maculosa Lam., CENMA) in Montana using basic land resource maps. Areas potentially vulnerable to invasion by the plant were delineated on the basis of representative edaphic and climatic characteristics. No single environmental variable was an effective predictor of sites vulnerable to invasion by spotted knapweed. Only a combination of variables was effective, indicating that the factors that regulate adaptability of this plant are complex. This technique provides a first-approximation map of the regions most similar environmentally to infested sites and, therefore, most vulnerable to further invasion. This weed migration prediction technique shows promise for predicting suitable habitats of other invader species. 6 references, 4 figures, 1 table.

  7. Structural, petrophysical and geomechanical characterization of the Becancour CO2 storage pilot site (Quebec, Canada)

    NASA Astrophysics Data System (ADS)

    Konstantinovskaya, E.; Malo, M.; Claprood, M.; Tran-Ngoc, T. D.; Gloaguen, E.; Lefebvre, R.

    2012-04-01

    The Paleozoic sedimentary succession of the St. Lawrence Platform was characterized to estimate the CO2 storage capacity, the caprock integrity and the fracture/fault stability at the Becancour pilot site. Results are based on the structural interpretation of 25 seismic lines and analysis of 11 well logs and petrophysical data. The three potential storage units of the Potsdam, Beekmantown and Trenton saline aquifers are overlain by a multiple caprock system of Utica shales and Lorraine siltstones. NE-SW regional normal faults dipping to the SE affect the subhorizontal sedimentary succession. The Covey Hill (Lower Potsdam) was found to be the only unit with significant CO2 sequestration potential, since these coarse-grained, poorly sorted fluvial-deltaic quartz-feldspar sandstones are characterized by the highest porosity, matrix permeability and net pay thickness and have the lowest static Young's modulus, Poisson's ratio and compressive strength relative to other units. The Covey Hill is located at depths of 1145-1259 m, so injected CO2 would be in a supercritical state according to the observed salinity, temperature and fluid pressure. The calcareous Utica shale of the regional seal is more brittle and has a higher Young's modulus and lower Poisson's ratio than the overlying Lorraine shale. The 3D geological model is kriged using the tops of the geological formations recorded at wells and picked travel times as external drift. The CO2 storage capacity in the Covey Hill sandstones is estimated by the volumetric and compressibility methods as 0.22 tons/km2 with a storage efficiency factor (E) of 2.4% and 0.09 tons/km2 with E of 1%, respectively. A first set of numerical radial simulations of CO2 injection into the Covey Hill was carried out with TOUGH2/ECO2N. A geomechanical analysis of the St. Lawrence Platform sedimentary basin provides the maximum sustainable fluid pressures for CO2 injection that will not induce tensile fracturing and shear reactivation along pre-existing fractures and faults in the caprock. The regional stress/pressure gradients estimated for the Paleozoic sedimentary basin (depths < 4 km) indicate a strike-slip stress regime. The average maximum horizontal stress orientation (SHmax) is estimated at N62.8°E ± 4.0° in the Becancour-Notre Dame area. The high-angle NE-SW Yamaska normal fault is oriented at 16.7° to the SHmax orientation at the Becancour site. The slip tendency along the fault in this area is estimated to be 0.47 based on the stress magnitude and rock strength evaluations for the borehole breakout intervals in local wells. The regional pore pressure-stress coupling ratio under the assumed parameters is about 0.5-0.65 and may contribute to reducing the risk of shear reactivation of faults and fractures. The maximum sustainable fluid pressure that would not cause opening of vertical tensile fractures during CO2 operations is about 18.5-20 MPa at a depth of 1 km.

  8. Estimated prevalence of dengue viremia in Puerto Rican blood donations, 1995 through 2010.

    PubMed

    Petersen, Lyle R; Tomashek, Kay M; Biggerstaff, Brad J

    2012-08-01

    Dengue virus (DENV) nucleic acid amplification testing of blood donations during epidemics in endemic locations, including Puerto Rico, has suggested a possibly sizable transfusion transmission risk. Estimates of the long-term prevalence of DENV-viremic donations will help evaluate the potential magnitude of this risk in Puerto Rico. Estimates of the prevalence of DENV viremia in the Puerto Rican population at large from 1995 through 2010 were derived from dengue case reports and their onset dates obtained from islandwide surveillance, estimates of case underreporting, and extant data on the duration of DENV viremia and the inapparent-to-apparent dengue infection ratio. Under the assumptions that viremia prevalence in blood donors was similar to that of the population at large and that symptomatic persons do not donate, statistical resampling methods were used to estimate the prevalence of dengue viremia in blood donations. Over the 16-year period, the maximum and mean daily prevalences of dengue viremia (per 10,000) in blood donations in Puerto Rico were estimated at 45.0 (95% confidence interval [CI], 36.5-55.4) and 7.0 (95% CI, 3.9-10.1), respectively. Prevalence varied considerably by season and year. These data suggest a substantial prevalence of DENV viremia in Puerto Rican blood donations, particularly during outbreaks. © 2012 American Association of Blood Banks.
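
The resampling approach described here can be sketched along the following lines, assuming daily reported case counts plus plausible ranges for underreporting, the inapparent-to-apparent ratio, and viremia duration; all parameter ranges and case counts below are hypothetical, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
population = 3.7e6                         # Puerto Rico, order of magnitude
daily_cases = rng.poisson(40, size=365)    # hypothetical reported onsets

def daily_viremia_per_10k(cases, underreport, inapparent_ratio, viremia_days):
    # Infections viremic on a given day come from onsets in a
    # `viremia_days`-wide window, scaled for underreporting and for
    # inapparent infections that are never reported.
    infections = cases * underreport * (1 + inapparent_ratio)
    window = np.convolve(infections, np.ones(int(viremia_days)), mode="same")
    return 1e4 * window / population

# Resample the uncertain inputs to get a distribution of estimates.
draws = [daily_viremia_per_10k(daily_cases,
                               underreport=rng.uniform(5, 15),
                               inapparent_ratio=rng.uniform(1, 3),
                               viremia_days=rng.integers(4, 7)).mean()
         for _ in range(1000)]
print(np.percentile(draws, [2.5, 50, 97.5]))   # mean daily prevalence, CI
```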

  9. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimates from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model provides a convenient method of calculating UML estimates. (SLD)
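
For readers unfamiliar with UML (joint) estimation in the Rasch model, a minimal gradient-ascent sketch follows; it illustrates the general technique on simulated data and is not the Mellenbergh-Vijn formulation.

```python
import numpy as np

def rasch_uml(X, n_iter=300, lr=0.05):
    """Unconditional (joint) ML for the Rasch model via gradient ascent.
    X: persons x items binary matrix; extreme (all-0/all-1) scores are
    not handled here. Returns ability and difficulty estimates."""
    n, k = X.shape
    theta, beta = np.zeros(n), np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        theta += lr * (X - p).sum(axis=1)    # person score residuals
        beta -= lr * (X - p).sum(axis=0)     # item score residuals
        beta -= beta.mean()                  # identification constraint
    return theta, beta

rng = np.random.default_rng(1)
true_t, true_b = rng.normal(size=500), rng.normal(size=10)
p_true = 1.0 / (1.0 + np.exp(-(true_t[:, None] - true_b[None, :])))
X = (rng.random((500, 10)) < p_true).astype(float)
theta, beta = rasch_uml(X)
print(f"difficulty recovery r = {np.corrcoef(beta, true_b)[0, 1]:.2f}")
```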

  10. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found to be similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias falls close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates of the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
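
A small sketch of the ML side of such a comparison, using statsmodels' MixedLM on simulated single-case data with only three participants; the data-generating values and variable names are hypothetical, chosen only to mirror the article's small-sample condition.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate single-case data: a few participants, baseline vs treatment phases.
rng = np.random.default_rng(7)
rows = []
for case in range(3):                      # only 3 participants, the condition
    u0, u1 = rng.normal(0, 1, 2)           # under which variance estimates
    for t in range(20):                    # are expected to be unstable
        phase = int(t >= 10)               # treatment starts at session 10
        y = 2.0 + u0 + (1.5 + u1) * phase + rng.normal(0, 1)
        rows.append({"case": case, "phase": phase, "y": y})
df = pd.DataFrame(rows)

# Two-level model with a random treatment effect across cases; ML rather
# than REML, to mirror the article's maximum likelihood condition.
m = smf.mixedlm("y ~ phase", df, groups="case", re_formula="~phase").fit(reml=False)
print(m.summary())   # with 3 cases, expect noisy variance components
```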

  11. Pet snakes illegally marketed in Brazil: Climatic viability and establishment risk.

    PubMed

    Fonseca, Érica; Solé, Mirco; Rödder, Dennis; de Marco, Paulo

    2017-01-01

    Invasive species are among the many threats to biodiversity. Brazil has so far been largely spared from several destructive invasive species. Reports of established invasive snake populations are nonexistent, but the illegal pet trade might change this scenario. Although Brazilian law forbids the import of most animals, illegal trade is frequently observed and propagules are found in the wild. The high species richness within Brazilian biomes and the accelerated fragmentation of natural reserves are critical factors facilitating successful invasion. An efficient way to mitigate the damage caused by invasive species is to identify potential invaders and prevent their introduction. Identifying potential invaders requires that many factors be considered, including estimates of climate matching between areas (native vs. invaded). Ecological niche modelling has been widely used to predict potential areas for invasion and is an important tool for conservation biology. This study evaluates the potential geographical distribution and establishment risk of Lampropeltis getula (Linnaeus, 1766), Lampropeltis triangulum (Lacépède, 1789), Pantherophis guttatus (Linnaeus, 1766), Python bivittatus Kuhl, 1820 and Python regius (Shaw, 1802) using the Maximum Entropy modelling approach to estimate the potential distribution of the species within Brazil, combined with a qualitative evaluation of specific biological attributes. Our results suggest that the North and Midwest regions harbor the major suitable areas. Furthermore, P. bivittatus and P. guttatus were suggested to have the highest invasive potential among the analyzed species. Potentially suitable areas for these species were predicted within areas highly relevant for Brazilian biodiversity, including several conservation units. Therefore, these areas require special attention and preventive measures should be adopted.
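
MaxEnt itself is a dedicated package, but the presence-versus-background logic it rests on can be sketched with a logistic-regression stand-in; the climate values below are simulated, not real occurrence data, and the quadratic features are only loosely analogous to MaxEnt's feature classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Simulated climate values (mean temperature C, annual precipitation mm)
# at presence localities and at random background points.
presence = rng.normal([25.0, 1200.0], [2.0, 150.0], size=(200, 2))
background = rng.normal([20.0, 900.0], [6.0, 400.0], size=(2000, 2))

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

# Quadratic features let the linear model carve out a climate envelope,
# a rough analogue of MaxEnt's linear + quadratic feature classes.
Xq = np.hstack([X, X**2])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(Xq, y)

cell = np.array([[24.0, 1100.0]])           # one candidate map cell
suit = model.predict_proba(np.hstack([cell, cell**2]))[0, 1]
print(f"relative suitability: {suit:.2f}")
```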

  12. Pet snakes illegally marketed in Brazil: Climatic viability and establishment risk

    PubMed Central

    Rödder, Dennis; de Marco, Paulo

    2017-01-01

    Invasive species are among the many threats to biodiversity. Brazil has so far been largely spared from several destructive invasive species. Reports of established invasive snake populations are nonexistent, but the illegal pet trade might change this scenario. Although Brazilian law forbids the import of most animals, illegal trade is frequently observed and propagules are found in the wild. The high species richness within Brazilian biomes and the accelerated fragmentation of natural reserves are critical factors facilitating successful invasion. An efficient way to mitigate the damage caused by invasive species is to identify potential invaders and prevent their introduction. Identifying potential invaders requires that many factors be considered, including estimates of climate matching between areas (native vs. invaded). Ecological niche modelling has been widely used to predict potential areas for invasion and is an important tool for conservation biology. This study evaluates the potential geographical distribution and establishment risk of Lampropeltis getula (Linnaeus, 1766), Lampropeltis triangulum (Lacépède, 1789), Pantherophis guttatus (Linnaeus, 1766), Python bivittatus Kuhl, 1820 and Python regius (Shaw, 1802) using the Maximum Entropy modelling approach to estimate the potential distribution of the species within Brazil, combined with a qualitative evaluation of specific biological attributes. Our results suggest that the North and Midwest regions harbor the major suitable areas. Furthermore, P. bivittatus and P. guttatus were suggested to have the highest invasive potential among the analyzed species. Potentially suitable areas for these species were predicted within areas highly relevant for Brazilian biodiversity, including several conservation units. Therefore, these areas require special attention and preventive measures should be adopted. PMID:28817630

  13. Electrochemical double layers at the interface between glassy electrolytes and platinum: Differentiating between the anode and the cathode capacitance

    NASA Astrophysics Data System (ADS)

    Kruempelmann, J.; Mariappan, C. R.; Schober, C.; Roling, B.

    2010-12-01

    We have measured potential-dependent interfacial capacitances of two Na-Ca-phosphosilicate glasses and of an AgI-doped silver borate glass between ion-blocking Pt electrodes. An asymmetric electrode configuration with highly dissimilar electrode areas on both faces of the glass samples allowed us to determine the capacitance at the small-area electrode. Using equivalent circuit fitting we extract potential-dependent double-layer capacitances. The potential-dependent anodic capacitance exhibits a weak maximum and drops strongly at higher potentials. The cathodic capacitance exhibits a more pronounced maximum, this maximum being responsible for the maximum in the total capacitance observed in measurements in a symmetrical electrode configuration. The capacitance maxima of the Na-Ca phosphosilicate glasses show up at higher electrode potentials than the maxima of the AgI-doped silver borate glass. Remarkably, for both types of glasses, the potential of the cathodic capacitance maximum is closely related to the activation energy of the bulk ion transport. We compare our results to recent theoretical predictions by Shklovskii and co-workers.
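
The equivalent-circuit fitting step might look like the following sketch: a bulk resistance in parallel with a bulk capacitance, in series with a blocking double-layer capacitance, fit to synthetic impedance data with SciPy. The circuit choice and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(p, w):
    """Bulk R||C element in series with a blocking double-layer
    capacitance (an assumed, simplified cell for ion-blocking Pt)."""
    rb, cb, cdl = p
    return rb / (1 + 1j * w * rb * cb) + 1.0 / (1j * w * cdl)

w = 2 * np.pi * np.logspace(-2, 5, 60)          # angular frequency, rad/s
true = (1e6, 1e-11, 5e-6)                       # invented Rb, Cb, C_dl
rng = np.random.default_rng(5)
z = z_model(true, w) * (1 + 0.01 * rng.normal(size=w.size))

def resid(logp):
    d = (z_model(np.exp(logp), w) - z) / np.abs(z)   # relative residuals
    return np.r_[d.real, d.imag]                     # stack re/im parts

fit = least_squares(resid, x0=np.log([1e5, 1e-10, 1e-5]))
rb, cb, cdl = np.exp(fit.x)
print(f"fitted C_dl = {cdl * 1e6:.2f} uF")
```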

  14. Using population models to evaluate management alternatives for Gulf Striped Bass

    USGS Publications Warehouse

    Aspinwall, Alexander P.; Irwin, Elise R.; Lloyd, M. Clint

    2017-01-01

    Interstate management of Gulf Striped Bass Morone saxatilis has been a thirty-year cooperative effort involving Federal and State agencies in Georgia, Florida and Alabama (Apalachicola-Chattahoochee-Flint Gulf Striped Bass Technical Committee). The Committee has recently focused on developing an adaptive framework for conserving and restoring Gulf Striped Bass in the Apalachicola, Chattahoochee, and Flint River (ACF) system. To evaluate the consequences of and tradeoffs among management activities, population models were used to inform management decisions. Stochastic matrix models were constructed with varying recruitment and stocking rates to simulate effects of management alternatives on Gulf Striped Bass population objectives. An age-classified matrix model that incorporated stock fecundity and survival estimates was used to project population growth rate. In addition, combinations of management alternatives (stocking rates, Hydrilla control, harvest regulations) were evaluated with respect to how they influenced Gulf Striped Bass population growth. Annual survival and mortality rates were estimated from catch-curve analysis, while fecundity was estimated and predicted using a linear least-squares regression of fish length versus egg number from hatchery brood fish data. Stocking rates and stocked-fish survival rates were estimated from census data. Results indicated that management alternatives could be an effective approach to increasing the Gulf Striped Bass population. Population abundance was greatest under maximum stocking effort, maximum Hydrilla control and a moratorium. Conversely, population abundance was lowest under no stocking, no Hydrilla control and the current harvest regulation. Stocking proved to be an effective management strategy; however, low survival estimates of stocked fish (1%) limited the potential for population growth. Hydrilla control increased the survival rate of stocked fish and provided higher estimates of population abundance than maximizing the stocking rate. A change in the current harvest regulation (50% harvest regulation) was not an effective alternative for increasing the Gulf Striped Bass population size. Applying a moratorium to the Gulf Striped Bass fishery increased survival rates from 50% to 74% and resulted in the largest population growth of the individual management alternatives. These results could be used by the Committee to inform management decisions for other populations of Striped Bass in the Gulf Region.
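
A stripped-down version of such an age-classified (Leslie) matrix projection with an annual stocking pulse; the vital rates below are invented for illustration, not the committee's estimates.

```python
import numpy as np

# Hypothetical age-classified model: fecundities (top row) and survival
# probabilities (sub-diagonal) are illustrative placeholders.
fecundity = np.array([0.0, 0.0, 20.0, 60.0, 120.0])
survival = np.array([0.01, 0.50, 0.60, 0.70])   # age-0 survival bottleneck
A = np.zeros((5, 5))
A[0, :] = fecundity
A[np.arange(1, 5), np.arange(4)] = survival

# Asymptotic population growth rate = dominant eigenvalue of A.
lam = np.max(np.linalg.eigvals(A).real)
print(f"lambda = {lam:.3f}")

# Project abundance with an annual stocking pulse into age class 0.
n = np.full(5, 100.0)
for year in range(30):
    n = A @ n
    n[0] += 50_000 * 0.01     # stocked fish times stocked-fish survival
print(f"abundance after 30 years: {n.sum():,.0f}")
```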

  15. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials

    PubMed Central

    Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.

    2015-01-01

    The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not, and the level of variation in the estimated values of its relevant features, are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from the existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen's κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significant differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in selecting the detection/estimation approach, since factors like stimulation intensity and the expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
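
The two agreement analyses named above can be sketched in a few lines; the observer data here are simulated stand-ins, not the study's recordings.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(11)
# Hypothetical single-trial outcomes: presence/absence calls and peak
# amplitudes (uV) from a human observer and an automated detector.
human_det = rng.integers(0, 2, 200)
auto_det = np.where(rng.random(200) < 0.85, human_det, 1 - human_det)
human_amp = rng.normal(12, 4, 200)
auto_amp = human_amp + rng.normal(0.5, 2.0, 200)   # bias + random error

# Categorical agreement.
print(f"Cohen's kappa: {cohen_kappa_score(human_det, auto_det):.2f}")

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = auto_amp - human_amp
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.2f} uV, "
      f"LoA = [{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] uV")
```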

  16. A new method for evaluating impacts of data assimilation with respect to tropical cyclone intensity forecast problem

    NASA Astrophysics Data System (ADS)

    Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.

    2012-12-01

    A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner-core observations in high-resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by the inner-core data assimilation with respect to the intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further the progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: 1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field, and 2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates used in practice are well founded, a revised metric for the intensity is proposed for the purpose of numerical forecast evaluation and the impacts on the forecast. The metric should enable a robust, observation- and model-resolvable, phenomenologically based evaluation of the impacts. It is shown that the maximum intensity can be represented by decomposition into deterministic and stochastic components of the wind field. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of amplitudes of azimuthal wavenumbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF. This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Stepped Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best-track maximum intensity estimates, as well as simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real-data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and a variance comparable to the known best-track error estimates. The results of the decomposition were then used to evaluate the impact of the improved initial conditions on the forecast. It was shown that the errors in the deterministic component of the intensity had the dominant effect on the forecast skill for the studied cases. This result suggests that the data assimilation of inner-core observations could focus primarily on improving the analysis of the wavenumber 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
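
For a single ring of azimuthal samples at the radius of maximum wind, the proposed decomposition reduces to a Fourier decomposition over azimuth; a sketch with a synthetic wind profile follows (all values invented).

```python
import numpy as np

# Hypothetical tangential wind sampled around the radius of maximum wind
# in a vortex-centric cylindrical frame (one ring, n_az azimuth bins).
n_az = 72
phi = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
rng = np.random.default_rng(2)
wind = 45 + 8 * np.cos(phi - 0.6) + rng.normal(0, 3, n_az)

# FFT over azimuth: the wavenumber-0 amplitude is the azimuthal mean;
# the wavenumber-1 amplitude is twice the magnitude of coefficient 1.
c = np.fft.rfft(wind) / n_az
wn0 = c[0].real
wn1 = 2 * np.abs(c[1])

deterministic_max = wn0 + wn1            # deterministic part of Vmax
stochastic = wind.max() - deterministic_max
print(f"wn0 = {wn0:.1f}, wn1 = {wn1:.1f}, "
      f"deterministic max = {deterministic_max:.1f} m/s")
print(f"residual (stochastic) component: {stochastic:.1f} m/s")
```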

  17. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. Determining the daily mean temperature from 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the population parameters from all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.
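
A toy reproduction of the comparison, using a synthetic diurnal cycle in place of a SURFRAD record, shows how the (Tmax+Tmin)/2 estimate and the subsampled daily means are formed; the temperature model below is an invented stand-in.

```python
import numpy as np

# One day of hypothetical 1-min temperatures (1440 samples): a diurnal
# cycle plus weather noise, standing in for a SURFRAD record.
rng = np.random.default_rng(9)
minutes = np.arange(1440)
temp = (15 - 8 * np.cos(2 * np.pi * (minutes - 180) / 1440)
        + rng.normal(0, 0.4, 1440))

true_mean = temp.mean()                          # average of 1440 samples
tmax_tmin = 0.5 * (temp.max() + temp.min())      # the (Tmax+Tmin)/2 estimate

# Daily means from N evenly spaced observations.
for n_obs in (3, 4, 6, 12, 24):
    sub = temp[:: 1440 // n_obs].mean()
    print(f"{n_obs:>2} obs: error = {sub - true_mean:+.3f} C")
print(f"(Tmax+Tmin)/2 error = {tmax_tmin - true_mean:+.3f} C")
```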

  18. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. Determining the daily mean temperature from 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the population parameters from all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.

  19. Characterization of contact structures for the spread of infectious diseases in a pork supply chain in northern Germany by dynamic network analysis of yearly and monthly networks.

    PubMed

    Büttner, K; Krieter, J; Traulsen, I

    2015-04-01

    A major risk factor in the spread of diseases between holdings is the transport of live animals. This study analysed the animal movements of the pork supply chain of a producer group in Northern Germany. The parameters in-degree and out-degree, ingoing and outgoing infection chain, betweenness, and ingoing and outgoing closeness were measured using dynamic network analysis to identify holdings with central positions in the network and to characterize the overall network topology. The potential maximum epidemic size was also estimated. All parameters were calculated for three time periods: the 3-yearly network, the yearly networks and the monthly networks. The yearly and the monthly networks were more fragmented than the 3-yearly network. On average, one-third of the holdings were isolated in the yearly networks and almost three-quarters in the monthly networks, an immense reduction in the number of holdings participating in trade in the monthly networks. The overall network topology showed right-skewed distributions for all calculated centrality parameters, indicating that network resilience was high with respect to the random removal of holdings. However, for a targeted removal of holdings according to their centrality, a rapid fragmentation of the trade network could be expected. Furthermore, to capture the real importance of holdings for disease transmission, indirect trade contacts (infection chain) should be considered. In contrast to the parameters based on direct trade contacts (degree), the infection chain parameter did not underestimate the potential risk of disease transmission; this became more obvious the longer the observed time period. For all three time periods, the results for the estimation of the potential maximum epidemic size indicated that the outgoing infection chain should be chosen: it considers the chronological order and the directed nature of the contacts and has no restrictions such as the strongly connected components of a cyclic network. © 2013 Blackwell Verlag GmbH.
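
The distinction between direct contacts (out-degree) and the outgoing infection chain can be made concrete with a small sketch on invented movement data; the chain follows only time-respecting paths, which is why it bounds the potential epidemic size while the degree does not.

```python
import networkx as nx

# Invented movement data: (source holding, destination holding, month).
moves = [("A", "B", 1), ("B", "C", 2), ("C", "D", 3),
         ("B", "E", 1), ("E", "F", 4), ("D", "A", 1)]

G = nx.MultiDiGraph()
G.add_edges_from((s, d, {"t": t}) for s, d, t in moves)
print(dict(G.out_degree()))          # direct contacts only: A has just one

def outgoing_infection_chain(moves, start):
    """Holdings reachable from `start` along time-respecting paths: each
    onward movement must not precede the movement that brought infection."""
    reached = {start: 0}             # holding -> earliest possible arrival
    changed = True
    while changed:
        changed = False
        for src, dst, t in sorted(moves, key=lambda m: m[2]):
            if src in reached and reached[src] <= t < reached.get(dst, float("inf")):
                reached[dst] = t
                changed = True
    return set(reached) - {start}

# Upper bound on epidemic size if infection enters holding "A" at month 0.
print(outgoing_infection_chain(moves, "A"))   # captures indirect spread too
```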

  20. Estimation of proliferative potentiality of central neurocytoma: correlational analysis of minimum ADC and maximum SUV with MIB-1 labeling index.

    PubMed

    Sakamoto, Ryo; Okada, Tomohisa; Kanagaki, Mitsunori; Yamamoto, Akira; Fushimi, Yasutaka; Kakigi, Takahide; Arakawa, Yoshiki; Takahashi, Jun C; Mikami, Yoshiki; Togashi, Kaori

    2015-01-01

    Central neurocytoma was initially believed to be a benign tumor type, although atypical cases with more aggressive behavior have been reported. Preoperative estimation of the proliferative activity of central neurocytoma is one of the most important considerations in determining tumor management. To investigate the predictive value of image characteristics and of quantitative measurements of the minimum apparent diffusion coefficient (ADCmin) and maximum standardized uptake value (SUVmax) for the proliferative activity of central neurocytoma measured by the MIB-1 labeling index (LI). Twelve cases of central neurocytoma, including one recurrence, from January 2001 to December 2011 were included. Preoperative scans were conducted in 11, nine, and five patients for computed tomography (CT), diffusion-weighted imaging (DWI), and fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET), respectively, and the ADCmin and SUVmax of the tumors were measured. Image characteristics were investigated using CT, T2-weighted (T2W) imaging and contrast-enhanced T1-weighted (T1W) imaging, and their differences were examined using Fisher's exact test between cases with MIB-1 LI below and above 2%, the threshold separating typical from atypical central neurocytoma. Correlational analysis was conducted for ADCmin and SUVmax with MIB-1 LI. A P value <0.05 was considered significant. Morphological appearance varied widely, and no image characteristic correlated significantly with MIB-1 LI apart from a tendency for strong enhancement in central neurocytomas with higher MIB-1 LI (P = 0.061). High linearity with MIB-1 LI was observed for ADCmin and SUVmax (r = -0.91 and 0.74, respectively), but only ADCmin was statistically significant (P = 0.0006). Central neurocytomas showed a wide variety of imaging appearances, and assessment of proliferative potential from morphological aspects alone was considered difficult. ADCmin was recognized as a potential marker for differentiating atypical central neurocytomas from typical ones. © The Foundation Acta Radiologica 2014.
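
The correlational analysis reported here is a plain Pearson correlation with a significance test; a sketch with invented paired values (not the study's measurements) follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements: minimum ADC (x1e-3 mm^2/s) and
# MIB-1 labeling index (%) for a handful of central neurocytomas.
adc_min = np.array([0.55, 0.60, 0.72, 0.80, 0.85, 0.95, 1.00, 1.10, 1.20])
mib1_li = np.array([6.0, 5.1, 4.0, 3.2, 2.8, 2.0, 1.6, 1.0, 0.5])

r, p = pearsonr(adc_min, mib1_li)
print(f"r = {r:.2f}, P = {p:.4f}")   # a strong negative r, as in the study
```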
