DOE Office of Scientific and Technical Information (OSTI.GOV)
Benito-López, Bernardino; Moreno-Enguix, María del Rocio; Solana-Ibañez, José
2011-06-01
Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated regression, to estimate the effect of a group of environmental factors on the DEA estimates. The results show a significant relation between efficiency and all the variables analysed (per capita income, urban population density, and the comparative indices of the importance of tourism and of overall economic activity). We have also considered the influence of a dummy categorical variable (the political orientation of the governing party) on the efficient provision of the services under study. The results from the proposed methodology show that municipalities governed by progressive parties are more efficient. Copyright © 2011 Elsevier Ltd. All rights reserved.
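As a rough illustration of the two-stage approach described above, here is a minimal Python sketch: an input-oriented constant-returns DEA linear program for the first stage and a left-truncated-normal maximum-likelihood regression for the second. All data are synthetic placeholders, and the parametric bootstrap loop of Simar and Wilson's Algorithm #2 is omitted for brevity.

```python
# Hedged sketch of a Simar-Wilson-style two-stage analysis on synthetic data.
import numpy as np
from scipy.optimize import linprog, minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 60
X = rng.uniform(1, 10, size=(n, 2))       # inputs (e.g., spending, staff) - assumed
Y = (X @ np.array([0.5, 0.8]))[:, None] * rng.uniform(0.6, 1.0, size=(n, 1))  # one output

def dea_input_efficiency(X, Y, o):
    """Input-oriented CRS (CCR) efficiency of unit o:
    min theta  s.t.  Y'lam >= y_o,  X'lam <= theta * x_o,  lam >= 0."""
    n_units = X.shape[0]
    c = np.r_[1.0, np.zeros(n_units)]                      # minimize theta
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])   # -Y lam <= -y_o
    b_out = -Y[o]
    A_in = np.hstack([-X[[o]].T, X.T])                     # X lam - theta x_o <= 0
    b_in = np.zeros(X.shape[1])
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                  bounds=[(None, None)] + [(0, None)] * n_units)
    return res.x[0]

theta = np.array([dea_input_efficiency(X, Y, o) for o in range(n)])
delta = 1.0 / theta                                        # Farrell distances, >= 1

# Second stage: regress delta on environmental covariates Z with a
# left-truncated (at 1) normal likelihood, fitted by maximum likelihood.
Z = np.column_stack([np.ones(n), rng.normal(size=n)])      # constant + one factor

def negloglik(params):
    beta, log_s = params[:-1], params[-1]
    mu, s = Z @ beta, np.exp(log_s)
    return -np.sum(norm.logpdf(delta, mu, s) - norm.logsf(1.0, mu, s))

fit = minimize(negloglik, x0=np.r_[np.mean(delta), 0.0, 0.0], method="Nelder-Mead")
print("truncated-regression coefficients:", fit.x[:-1])
```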
NASA Astrophysics Data System (ADS)
Ogawa, Akira; Anzou, Hideki; Yamamoto, So; Shimagaki, Mituru
2015-11-01
To control the maximum tangential velocity Vθm (m/s) of the turbulent rotational air flow and the collection efficiency ηc (%), measured using fly ash with a mean diameter XR50 = 5.57 µm, two secondary jet nozzles were installed on the body of an axial-flow cyclone dust collector with a body diameter D1 = 99 mm. To estimate Vθm (m/s), the conservation of angular momentum flux was applied together with the Ogawa combined vortex model. The estimated values of Vθm (m/s) showed good agreement with measurements taken with a cylindrical Pitot tube. The collection efficiencies ηcth (%) estimated from the cut size Xc (µm), which was calculated using the estimated Vθm (m/s) and the particle size distribution R(Xp), were slightly higher than the experimental results due to re-entrainment of the collected dust. The best method for adjusting ηc (%) through the contribution of the secondary jet flow is principally to apply the centrifugal effect Φc (1). These results are described in detail.
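Ogawa's angular-momentum/combined-vortex estimate is not reproduced here; the sketch below instead uses the classical Lapple cut-size formula and grade-efficiency curve to show how a cut size and an overall collection efficiency follow from a particle size distribution. All geometry and dust values are illustrative assumptions, not the paper's.

```python
# Hedged sketch: textbook Lapple cyclone cut size and grade efficiency,
# NOT Ogawa's combined-vortex model. All numbers below are assumptions.
import numpy as np

mu  = 1.8e-5      # air viscosity, Pa*s
W   = 0.025       # inlet width, m (assumed)
Ne  = 5.0         # effective number of turns (assumed)
Vi  = 15.0        # inlet velocity, m/s (assumed)
rho = 2100.0      # fly-ash particle density, kg/m^3 (assumed)

d50 = np.sqrt(9 * mu * W / (2 * np.pi * Ne * Vi * rho))   # cut size, m
print(f"cut size d50 = {d50 * 1e6:.2f} um")

# Lapple grade efficiency applied to an assumed discrete size distribution
d_um   = np.array([1.0, 2.0, 5.57, 10.0, 20.0])    # particle sizes, um
massfr = np.array([0.10, 0.20, 0.40, 0.20, 0.10])  # mass fractions (assumed)
eta_d  = 1.0 / (1.0 + (d50 * 1e6 / d_um) ** 2)     # efficiency per size class
print(f"overall collection efficiency = {np.sum(eta_d * massfr):.1%}")
```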
Clack, Herek L
2012-07-03
The behavior of mercury sorbents within electrostatic precipitators (ESPs) is not well understood, despite a decade or more of full-scale testing. Recent laboratory results suggest that powdered activated carbon exhibits somewhat different collection behavior than fly ash in an ESP, and particulate filters located at the outlet of ESPs have shown evidence of powdered activated carbon penetration during full-scale tests of sorbent injection for mercury emissions control. The present analysis considers a range of assumed differential ESP collection efficiencies for powdered activated carbon as compared to fly ash. Estimated emission rates of submicrometer powdered activated carbon are compared to estimated emission rates of particulate carbon on submicrometer fly ash, each corresponding to its respective collection efficiency. To the extent that any emitted powdered activated carbon exhibits size and optical characteristics similar to black carbon, such emissions could effectively constitute an increase in black carbon emissions from coal-based stationary power generation. The results reveal that even for the low injection rates associated with chemically impregnated carbons, submicrometer particulate carbon emissions can easily double if the submicrometer fraction of the native fly ash has a low carbon content. Increasing sorbent injection rates, larger collection efficiency differentials as compared to fly ash, and decreasing sorbent particle size all lead to increases in the estimated submicrometer particulate carbon emissions.
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
75 FR 8722 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
... Efficient Health Care in Federal Government Administered or Sponsored Health Care Programs,'' performance... collection for the proper performance of the agency's functions; (2) the accuracy of the estimated burden; (3... of Information Collection: Part C and D Complaints Resolution Performance Measures: Use: Part C...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-28
... could change significantly based on the collection method ultimately used in the research. Estimated..., Comment Request: Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... qualitative consumer and stakeholder feedback in an efficient, timely manner to facilitate service delivery...
Estimating means and variances: The comparative efficiency of composite and grab samples.
Brumelle, S; Nemetz, P; Casey, D
1984-03-01
This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than grab sampling in the context of estimating population means.
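A quick Monte Carlo check of the kurtosis condition described above, assuming unit-variance uniform (platykurtic) and Laplace (leptokurtic) parent distributions; the period count, pool size, and replicate count are arbitrary choices.

```python
# Hedged sketch: MSE of the variance estimator under grab vs composite
# sampling, for platykurtic and leptokurtic parents of unit variance.
import numpy as np

rng = np.random.default_rng(1)
T, n, reps = 50, 10, 5000   # periods, samples pooled per composite, replicates

def mse_of_variance_estimators(draw):
    grab_err, comp_err = [], []
    for _ in range(reps):
        x = draw((T, n))
        sigma2 = 1.0                      # both parents scaled to unit variance
        grab = x[:, 0]                    # one sample per period
        comp = x.mean(axis=1)             # pooled (composited) sample per period
        grab_err.append((grab.var(ddof=1) - sigma2) ** 2)
        comp_err.append((n * comp.var(ddof=1) - sigma2) ** 2)
    return np.mean(grab_err), np.mean(comp_err)

uniform = lambda s: rng.uniform(-np.sqrt(3), np.sqrt(3), s)   # platykurtic
laplace = lambda s: rng.laplace(0, 1 / np.sqrt(2), s)         # leptokurtic

for name, draw in [("uniform (platykurtic)", uniform),
                   ("laplace (leptokurtic)", laplace)]:
    g, c = mse_of_variance_estimators(draw)
    print(f"{name}: MSE grab={g:.4f}  MSE composite={c:.4f}")
```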
Agemura, Toshihide; Sekiguchi, Takashi
2018-02-01
Collection efficiency and acceptance maps of typical detectors in modern scanning electron microscopes (SEMs) were investigated. Secondary and backscattered electron trajectories from a specimen to through-the-lens and under-the-lens detectors placed on the electron optical axis, and to an Everhart-Thornley detector mounted on the specimen chamber, were simulated three-dimensionally. The acceptance maps were drawn as the relationship between the energy and angle of collected electrons under different working distances. The collection efficiency, taking detector sensitivity into account, was also estimated for the various working distances. These data indicate that acceptance maps and collection efficiency are key to understanding the detection mechanism and image contrast of each detector in modern SEMs. Furthermore, the working distance is the dominant parameter, because electron trajectories change drastically with the working distance.
Efficacy of using data from angler-caught Burbot to estimate population rate functions
Brauer, Tucker A.; Rhea, Darren T.; Walrath, John D.; Quist, Michael C.
2018-01-01
The effective management of a fish population depends on the collection of accurate demographic data from that population. Since demographic data are often expensive and difficult to obtain, developing cost‐effective and efficient collection methods is a high priority. This research evaluates the efficacy of using angler‐supplied data to monitor a nonnative population of Burbot Lota lota. Age and growth estimates were compared between Burbot collected by anglers and those collected in trammel nets from two Wyoming reservoirs. Collection methods produced different length‐frequency distributions, but no difference was observed in age‐frequency distributions. Mean back‐calculated lengths at age revealed that netted Burbot grew faster than angled Burbot in Fontenelle Reservoir. In contrast, angled Burbot grew slightly faster than netted Burbot in Flaming Gorge Reservoir. Von Bertalanffy growth models differed between collection methods, but differences in parameter estimates were minor. Estimates of total annual mortality (A) of Burbot in Fontenelle Reservoir were comparable between angled (A = 35.4%) and netted fish (33.9%); similar results were observed in Flaming Gorge Reservoir for angled (29.3%) and netted fish (30.5%). Beverton–Holt yield‐per‐recruit models were fit using data from both collection methods. Estimated yield differed by less than 15% between data sources and reservoir. Spawning potential ratios indicated that an exploitation rate of 20% would be required to induce recruitment overfishing in either reservoir, regardless of data source. Results of this study suggest that angler‐supplied data are useful for monitoring Burbot population dynamics in Wyoming and may be an option to efficiently monitor other fish populations in North America.
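For readers unfamiliar with the growth model referenced above, a minimal sketch of fitting a von Bertalanffy curve to age-length data with scipy; the age-length pairs below are fabricated placeholders, not the Burbot data.

```python
# Hedged sketch: von Bertalanffy growth fit on invented age-length data.
import numpy as np
from scipy.optimize import curve_fit

def vbgf(age, Linf, K, t0):
    """von Bertalanffy: L(t) = Linf * (1 - exp(-K * (t - t0)))."""
    return Linf * (1.0 - np.exp(-K * (age - t0)))

age    = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)                   # years
length = np.array([210, 340, 430, 500, 550, 585, 610, 630], dtype=float)   # mm

params, cov = curve_fit(vbgf, age, length, p0=[700.0, 0.2, 0.0])
Linf, K, t0 = params
print(f"Linf={Linf:.0f} mm  K={K:.3f}/yr  t0={t0:.2f} yr")
```

Fitting the model separately to angler- and net-collected fish and comparing the parameter estimates (with their covariances from `cov`) mirrors the comparison described in the abstract.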
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... provides a consistent time series according to which groundfish resources may be managed more efficiently...: Business or other for-profit organizations. Estimated Number of Respondents: 166. Estimated Time per...
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data...collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes
Profile soil property estimation using a VIS-NIR-EC-force probe
USDA-ARS?s Scientific Manuscript database
Combining data collected in-field from multiple soil sensors has the potential to improve the efficiency and accuracy of soil property estimates. Optical diffuse reflectance spectroscopy (DRS) has been used to estimate many important soil properties, such as soil carbon, water content, and texture. ...
Collecting various sustainability metrics of observatory operations on Maunakea
NASA Astrophysics Data System (ADS)
Kuo Tiong, Blaise C.; Bauman, Steven E.; Benedict, Romilly; Draughn, John Wesley; Probasco, Quinn
2016-07-01
By collecting metrics on fleet operations, data center usage, employee air travel, and facilities consumption at the Canada France Hawaii Telescope, the collective impact of CFHT and other observatories on the Maunakea Astronomy Precinct can be estimated. An audit of carbon emissions in these areas, together with specific efficiency metrics such as data center Power Use Efficiency, gives a general sense of the scale of environmental and social impacts. The audit could be applied, for example, to crafting sustainability strategies.
Comparison of field and laboratory VNIR spectroscopy for profile soil property estimation
USDA-ARS?s Scientific Manuscript database
In-field, in-situ data collection with soil sensors has potential to improve the efficiency and accuracy of soil property estimates. Optical diffuse reflectance spectroscopy (DRS) has been used to estimate important soil properties, such as soil carbon, nitrogen, water content, and texture. Most pre...
Estimation of soil profile physical and chemical properties using a VIS-NIR-EC-force probe
USDA-ARS?s Scientific Manuscript database
Combining data collected in-field from multiple soil sensors has the potential to improve the efficiency and accuracy of soil property estimates. Optical diffuse reflectance spectroscopy (DRS) has been used to estimate many important soil properties, such as soil carbon, water content, and texture. ...
Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee
2016-04-01
In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
...'s Center for Devices and Radiological Health (CDRH) to easily and efficiently elicit and review information from students and health care professionals who are interested in becoming involved in CDRH... expertise with CDRH. FDA estimates the burden of this collection of information as follows: Table 1...
State road fund revenue collection processes : differences and opportunities of improved efficiency
DOT National Transportation Integrated Search
2001-07-01
Research regarding the administration and collection of road fund revenues has focused on gaining an understanding of the motivations for tax evasion, methods of evasion, and estimates of the magnitude of evasion for individual states. To our knowled...
CTER-rapid estimation of CTF parameters with error assessment.
Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new-generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
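The bootstrap step that yields the reported standard deviations can be sketched generically as follows; `estimate_defocus` is a hypothetical stand-in for the actual CTF fit over micrograph segments, and the per-segment values are synthetic.

```python
# Hedged sketch: bootstrap standard deviation of a fitted parameter
# (here defocus), resampling per-segment estimates with replacement.
import numpy as np

rng = np.random.default_rng(2)
per_segment_defocus = rng.normal(2.0, 0.05, size=200)   # um, synthetic

def estimate_defocus(segments):
    return segments.mean()          # placeholder for the real CTF fit

B = 1000
boot = np.array([
    estimate_defocus(rng.choice(per_segment_defocus,
                                size=per_segment_defocus.size, replace=True))
    for _ in range(B)
])
print(f"defocus = {estimate_defocus(per_segment_defocus):.4f} um "
      f"+/- {boot.std(ddof=1):.4f} (bootstrap SD, B={B})")
```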
Simulation of Fluid Flow and Collection Efficiency for an SEA Multi-element Probe
NASA Technical Reports Server (NTRS)
Rigby, David L.; Struk, Peter M.; Bidwell, Colin
2014-01-01
Numerical simulations of fluid flow and collection efficiency for a Science Engineering Associates (SEA) multi-element probe are presented. Simulation of the flow field was produced using the Glenn-HT Navier-Stokes solver. Three-dimensional unsteady results were produced and then time averaged for the heat transfer and collection efficiency results. Three grid densities were investigated to enable an assessment of grid dependence. Simulations were completed for free stream velocities ranging from 85-135 meters per second, and free stream total pressures of 44.8 and 93.1 kilopascals (6.5 and 13.5 pounds per square inch absolute). In addition, the effects of angle of attack and yaw were investigated by including 5-degree deviations from straight for one of the flow conditions. All but one of the cases simulated a probe in isolation (i.e., in a very large domain without any support strut); one case represents a probe mounted on a support strut within a finite-sized wind tunnel. Collection efficiencies were generated, using the LEWICE3D code, for four spherical particle sizes: 100, 50, 20, and 5 microns in diameter. It was observed that a reduction in velocity of about 20% occurred, for all cases, as the flow entered the shroud of the probe. The reduction in velocity within the shroud is not indicative of any error in the probe measurement accuracy. Heat transfer results are presented which agree quite well with a correlation for the circular cross section heated elements. Collection efficiency results indicate a reduction in collection efficiency as particle size is reduced. The reduction with particle size is expected; however, the results tended to be lower than previous results generated for isolated two-dimensional elements. The deviation from the two-dimensional results is more pronounced for the smaller particles and is likely due to the reduced flow within the protective shroud. As particle size increases, differences between the two-dimensional and three-dimensional results become negligible. Taken as a group, the total collection efficiency of the elements including the effects of the shroud has been shown to be in the range of 0.93 to 0.99 for particles above 20 microns. The 3D model has improved the estimated collection efficiency for smaller particles, where errors in previous estimates were more significant.
NASA Astrophysics Data System (ADS)
Sparks, L. E.; Ramsey, G. H.; Daniel, B. E.
The results of pilot plant experiments of particulate collection by a venturi scrubber downstream from an electrostatic precipitator (ESP) are presented. The data, which cover a range of scrubber operating conditions and ESP efficiencies, show that particle collection by the venturi scrubber is not affected by the upstream ESP; i.e., for a given scrubber pressure drop, particle collection efficiency as a function of particle diameter is the same for both ESP on and ESP off. The experimental results are in excellent agreement with theoretical predictions. Order-of-magnitude cost estimates indicate that particle collection by ESP-scrubber systems may be economically attractive when scrubbers must be used for SOx control.
Automated Assessment of Child Vocalization Development Using LENA.
Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-07-12
To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
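A minimal sketch of the modeling pipeline as described (phone/biphone frequencies reduced to principal components, then an age-based linear regression predicting criterion scores); all feature and score arrays are synthetic stand-ins for LENA-derived data.

```python
# Hedged sketch of the AVA-style pipeline: PCA on vocalization-sound
# frequency vectors, then a linear model with an age term. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_children, n_features = 300, 120
phone_freqs = rng.poisson(5.0, size=(n_children, n_features)).astype(float)
age_months  = rng.uniform(6, 48, size=(n_children, 1))
criterion   = 50 + 1.2 * age_months[:, 0] + rng.normal(0, 8, n_children)

pcs = PCA(n_components=10).fit_transform(phone_freqs)
X = np.hstack([pcs, age_months])            # principal components plus age
model = LinearRegression().fit(X, criterion)
print("in-sample R^2:", model.score(X, criterion))
```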
Engineering evaluation of the use of the Timberline condensing economizer for particulate collection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butcher, T.; Serry, H.
1980-12-01
The possible use of the Timberline Industries condensing economizer as a particulate collection device on commercial sector boilers being converted to coal-oil mixture (COM) firing has been considered. The saturation temperature of the water vapor in the flue gas has been estimated as a function of excess air and ambient relative humidity. Boiler stack losses have also been estimated for a variety of operating conditions, including stack temperatures below the dew point. The condensing economizer concept will be limited to applications that can use the low temperature heat, including water heating and forced-air space heating. The potential particulate collection efficiency, water disposal, and similar heat recovery devices are discussed. A cost analysis is presented which indicates that the economizer system is not competitive with a cyclone but is competitive with a baghouse. The use of the cyclone is limited by collection efficiency. The measurement of COM flyash particle size distribution is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Baumgartner, Robert
This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter, so for each topic covered below, readers are encouraged to consult the articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.
Nest Mosquito Trap quantifies contact rates between nesting birds and mosquitoes.
Caillouët, Kevin A; Riggan, Anna E; Rider, Mark; Bulluck, Lesley P
2012-06-01
Accurate estimates of host-vector contact rates are required for precise determination of arbovirus transmission intensity. We designed and tested a novel mosquito collection device, the Nest Mosquito Trap (NMT), to collect mosquitoes as they attempt to feed on unrestrained nesting birds in artificial nest boxes. In the laboratory, the NMT collected nearly one-third of the mosquitoes introduced to the nest boxes. We then used these laboratory data to estimate our capture efficiency of field-collected bird-seeking mosquitoes collected over 66 trap nights. We estimated that 7.5 mosquitoes per trap night attempted to feed on nesting birds in artificial nest boxes. Presence of the NMT did not have a negative effect on avian nest success when compared to occupied nest boxes that were not sampled with the trap. Future studies using the NMT may elucidate the role of nestlings in arbovirus transmission and further refine estimates of nesting bird and vector contact rates. © 2012 The Society for Vector Ecology.
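The efficiency correction implied by the abstract can be sketched in a few lines; the laboratory totals and field count below are assumptions chosen to reproduce the reported roughly one-third capture efficiency and 7.5 attempts per trap night.

```python
# Hedged sketch: correct field counts by laboratory capture efficiency to
# estimate feeding attempts per trap night. All counts are assumed values.
lab_released, lab_caught = 300, 100          # assumed lab trial totals (~1/3)
trap_nights, field_caught = 66, 165          # field count chosen for illustration

efficiency = lab_caught / lab_released                      # ~0.33
attempts_per_night = field_caught / (efficiency * trap_nights)
print(f"estimated feeding attempts per trap night: {attempts_per_night:.1f}")
```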
NASA Astrophysics Data System (ADS)
Cena, Lorenzo
2011-12-01
The overall goals of this doctoral dissertation are to provide knowledge of workers' exposure to nanomaterials and to assist in the development of standard methods to measure personal exposure to nanomaterials in workplace environments. To achieve the first goal, a field study investigated airborne particles generated from the weighing of bulk carbon nanotubes (CNTs) and the manual sanding of epoxy test samples reinforced with CNTs. This study also evaluated the effectiveness of three local exhaust ventilation (LEV) conditions (no LEV, custom fume hood, and biosafety cabinet) for control of exposure to particles generated during sanding of CNT-epoxy nanocomposites. Particle number and respirable mass concentrations were measured with direct-read instruments, and particle morphology was determined by electron microscopy. Sanding of CNT-epoxy nanocomposites released respirable-size airborne particles with protruding CNTs, very different in morphology from bulk CNTs, which tended to remain in clusters (>1 µm). Respirable mass concentrations in the operator's breathing zone were significantly greater when sanding took place in the custom hood (p < 0.0001) compared to the other LEV conditions. This study found that workers' exposure was to particles containing protruding CNTs rather than to bulk CNT particles. Particular attention should be paid to the design and selection of hoods to minimize exposure. Two laboratory studies were conducted to realize the second goal. Collection efficiency of submicrometer particles was evaluated for nylon mesh screens with three pore sizes (60, 100, and 180 µm) at three flow rates (2.5, 4, and 6 Lpm). Single-fiber efficiency of nylon mesh screens was then calculated and compared to a theoretical estimation expression. The effects of particle morphology on collection efficiency were also experimentally measured. The collection efficiency of the screens was found to vary by less than 4% regardless of particle morphology. Single-fiber efficiency of the screens calculated from experimental data was in good agreement with that estimated from theory for particles between 40 and 150 nm but deviated from theory for particles outside of this range. New coefficients for the single-fiber efficiency model were identified that minimized the sum of squared errors (SSE) between the experimental values and those estimated with the model. Compared to the original theory, the SSE calculated using the modified theory was at least threefold lower for all screens and flow rates. Since nylon fibers produce no significant spectral interference when ashed for spectrometric examination, the ability to accurately estimate collection efficiency of submicrometer particles makes nylon mesh screens an attractive collection substrate for nanoparticles. In the third study, laboratory experiments were conducted to develop a novel nanoparticle respiratory deposition (NRD) sampler that selectively collects nanoparticles in a worker's breathing zone apart from larger particles. The NRD sampler consists of a respirable cyclone fitted with an impactor and a diffusion stage containing eight nylon-mesh screens. A sampling criterion for nano-particulate matter (NPM) was developed and set as the target for the collection efficiency of the NRD sampler. The sampler operates at 2.5 Lpm and fits on a worker's lapel. The cut-off diameter of the impactor was experimentally measured to be 300 nm with a sharpness of 1.53.
Loading at typical workplace levels was found to have no significant effect (2-way ANOVA, p=0.257) on the performance of the impactor. The effective deposition of particles onto the diffusion stage was found to match the NPM criterion, showing that a sample collected with the NRD sampler represents the concentration of nanoparticles deposited in the human respiratory system.
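The coefficient refit described above can be sketched as a least-squares problem. The power-law form below is the classical diffusion-regime single-fiber approximation (roughly η = 2.7·Pe^(-2/3)), used here as a plausible stand-in rather than the dissertation's exact model, and the "measurements" are synthetic.

```python
# Hedged sketch: refit the coefficients of a single-fiber efficiency law
# eta = a * Pe**(-b) to (synthetic) measurements by least squares.
import numpy as np
from scipy.optimize import curve_fit

def single_fiber_eta(Pe, a, b):
    return a * Pe ** (-b)

Pe = np.logspace(1, 4, 25)                           # Peclet numbers
rng = np.random.default_rng(4)
eta_meas = 2.9 * Pe ** (-0.62) * rng.lognormal(0, 0.05, Pe.size)

(a, b), _ = curve_fit(single_fiber_eta, Pe, eta_meas, p0=[2.7, 2 / 3])
sse = np.sum((single_fiber_eta(Pe, a, b) - eta_meas) ** 2)
print(f"fitted a={a:.2f}, b={b:.2f}, SSE={sse:.2e}")
```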
A prototype national cattle evaluation for feed intake and efficiency of Angus cattle
USDA-ARS?s Scientific Manuscript database
Recent development of technologies for measuring individual feed intake has made possible the collection of data suitable for breed-wide genetic evaluation. Goals of this research were to estimate genetic parameters for components of feed efficiency and develop a prototype system for conducting a ge...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prevatte, Scott A.
2006-03-01
In the fall of 2004, as one part of a Basin-Wide Monitoring Program developed by the Upper Columbia Regional Technical Team and Upper Columbia Salmon Recovery Board, the Yakama Nation Fisheries Resource Management program began monitoring downstream migration of ESA-listed Upper Columbia River spring chinook salmon and Upper Columbia River steelhead in Nason Creek, a tributary to the Wenatchee River. This report summarizes juvenile spring chinook salmon and steelhead trout migration data collected in Nason Creek during 2005 and also incorporates data from 2004. We used species enumeration at the trap and efficiency trials to describe emigration timing and to estimate population size. Data collection was divided into spring/early summer and fall periods, with a break during the summer months due to low stream flow. Trapping began on March 1st and was suspended on July 29th when stream flow dropped below the minimum (30 cfs) required to rotate the trap cone. The fall period began on September 28th with increased stream flow and ended on November 23rd when snow and ice began to accumulate on the trap. During the spring and early summer we collected 311 yearling (2003 brood) spring chinook salmon, 86 wild steelhead smolts and 453 steelhead parr. Spring chinook (2004 brood) outgrew the fry stage of fork length < 60 mm during June and July; 224 were collected at the trap. Mark-recapture trap efficiency trials were performed over a range of stream discharge stages whenever ample numbers of fish were being collected. A total of 247 spring chinook yearlings, 54 steelhead smolts, and 178 steelhead parr were used during efficiency trials. A statistically significant relationship between stream discharge and trap efficiency has not been identified in Nason Creek; therefore, a pooled trap efficiency was used to estimate the population size of both spring chinook (14.98%) and steelhead smolts (12.96%). We estimate that 2,076 (± 119, 95% CI) yearling spring chinook and 688 (± 140, 95% CI) steelhead smolts emigrated past the trap during the spring/early summer sample period, along with 10,721 (± 1,220, 95% CI) steelhead parr. During the fall we collected 924 subyearling (2004 brood) spring chinook salmon and 1,008 steelhead parr of various size and age classes. A total of 732 spring chinook subyearlings and 602 steelhead parr were used during 13 mark-recapture trap efficiency trials. A pooled trap efficiency of 24.59% was used to calculate the emigration of spring chinook, and 17.11% was used for steelhead parr, during the period from September 28th through November 23rd. We estimate that 3,758 (± 92, 95% CI) subyearling spring chinook and 5,666 (± 414, 95% CI) steelhead parr migrated downstream past the trap, along with 516 (± 42, 95% CI) larger steelhead pre-smolts, during the 2005 fall sample period.
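A sketch of the pooled-efficiency expansion used above. The recapture count is an assumption chosen to reproduce the reported 14.98% efficiency, and the rough delta-method interval shown is only illustrative and will not match the report's confidence bounds.

```python
# Hedged sketch: pooled mark-recapture trap efficiency and abundance
# expansion, with an approximate delta-method interval. Counts assumed.
import numpy as np

marked = 247                       # fish released in efficiency trials
recaptured = 37                    # assumed; gives ~14.98% pooled efficiency
caught = 311                       # yearling chinook collected at the trap

e_hat = recaptured / marked
N_hat = caught / e_hat
se_e = np.sqrt(e_hat * (1 - e_hat) / marked)   # binomial SE of efficiency
# First-order (delta-method) 95% interval on abundance via the efficiency SE;
# a rough approximation, not the estimator used in the report.
ci = 1.96 * caught * se_e / e_hat ** 2
print(f"efficiency={e_hat:.2%}, N_hat={N_hat:.0f} +/- {ci:.0f} (approx.)")
```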
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... for Devices and Radiological Health (CDRH) to easily and efficiently elicit and review information from students and health care professionals who are interested in becoming involved in CDRH activities... expertise with CDRH. FDA based these estimates on the number of inquiries that have been received concerning...
Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks
Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh
2017-01-01
In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of the data were transmitted to a Fusion Center (FC), the sensor nodes' power would be depleted rapidly; the data also need filtering to remove noise. An efficient fusion estimation model is therefore needed that saves sensor node energy while maintaining high accuracy. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters in the quantization method are discussed and confirmed by an optimization method with some prior knowledge. In addition, calculation methods for important parameters are investigated that make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In simulation, the impacts of some pivotal parameters are discussed, and compared with other related models, MHEEFE shows better performance in accuracy, energy efficiency, and fault tolerance. PMID:29280950
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
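A two-level toy version of the MLMC idea under simple assumptions: a cheap coarse model and an expensive fine model, with the telescoping estimator spending most samples on the cheap level and only a few on the coupled correction term.

```python
# Hedged sketch: two-level multilevel Monte Carlo estimate of E[Q]. The toy
# "models" stand in for low- and high-fidelity subsurface simulations.
import numpy as np

rng = np.random.default_rng(5)

def fine(theta):   return np.sin(theta) + 0.05 * theta ** 2   # "expensive" model
def coarse(theta): return np.sin(theta)                       # cheap surrogate

N0, N1 = 20000, 200                      # many coarse runs, few coupled runs
th0 = rng.normal(0.5, 0.2, N0)
th1 = rng.normal(0.5, 0.2, N1)

# MLMC telescoping: E[Q_fine] ~= E[Q_coarse] + E[Q_fine - Q_coarse],
# each term estimated with the sample size its variance and cost warrant.
Q_mlmc = coarse(th0).mean() + (fine(th1) - coarse(th1)).mean()
Q_mc   = fine(rng.normal(0.5, 0.2, N1)).mean()   # plain MC at same fine cost
print(f"MLMC estimate: {Q_mlmc:.4f}   plain fine-level MC: {Q_mc:.4f}")
```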
ERIC Educational Resources Information Center
Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.
2012-01-01
Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
ERIC Educational Resources Information Center
Chawla, Deepika; Forbes, Phyllis
2010-01-01
Increasing accountability and efficiency in the use of public and out-of-pocket financing in education are critical to realizing the maximum impact of the meager allocations to education in most developing countries. While broad estimates and numbers are routinely collected by most national ministries and state departments of education, the lack…
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
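A sketch of the design idea under simplified assumptions: phase-two selection probabilities shaped like sd(Y | W)/sqrt(cost) (costs taken constant here, and the conditional standard deviation treated as known from a pilot), with the mean of Y recovered by inverse-probability weighting. This is illustrative of the allocation principle, not the paper's estimator.

```python
# Hedged sketch: two-phase sampling with variance-proportional selection
# probabilities and a Horvitz-Thompson mean estimate. Synthetic data.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
W = rng.uniform(0, 1, n)                       # cheap auxiliary, measured on all
sd_given_w = 0.5 + 2.0 * W                     # assumed known from a pilot
Y = 2.0 * W + rng.normal(0, sd_given_w)        # expensive endpoint

budget_frac = 0.2
pi = sd_given_w / sd_given_w.mean() * budget_frac   # optimal-shape probabilities
pi = np.clip(pi, 1e-3, 1.0)
sampled = rng.uniform(size=n) < pi

# Horvitz-Thompson estimator of E[Y] using only the phase-two subsample
mu_ht = np.sum(Y[sampled] / pi[sampled]) / n
print(f"sampled {sampled.sum()} of {n}; HT mean = {mu_ht:.3f} (truth = 1.000)")
```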
ERIC Educational Resources Information Center
Lemaire, Patrick; Lecacheur, Mireille
2011-01-01
Third, fifth, and seventh graders selected the best strategy (rounding up or rounding down) for estimating answers to two-digit addition problems. Executive function measures were collected for each individual. Data showed that (a) children's skill at both strategy selection and execution improved with age and (b) increased efficiency in executive…
Applying spectral data analysis techniques to aquifer monitoring data in Belvoir Ranch, Wyoming
NASA Astrophysics Data System (ADS)
Gao, F.; He, S.; Zhang, Y.
2017-12-01
This study uses spectral data analysis techniques to estimate hydraulic parameters from water level fluctuations driven by tidal and barometric effects. All water level data used in this study were collected at Belvoir Ranch, Wyoming. Tidal effects are observed not only in coastal areas but also in inland confined aquifers: the changing positions of the sun and moon deform the solid earth as well as the ocean, subjecting the aquifer to an oscillatory pumping-and-injection sequence that can be observed with sufficiently dense water level monitoring. The Belvoir Ranch data are collected once per hour, which is dense enough to capture the tidal signal. First, the detrended data are transformed from the time domain to the frequency domain with the Fourier transform. The storage coefficient is then estimated using the Bredehoeft-Jacob model. Next, the gain function, which expresses the amplification and attenuation of the output signal, is analyzed to derive the barometric efficiency, and the effective porosity is found from the storage coefficient and barometric efficiency with Jacob's model. Finally, aquifer transmissivity and hydraulic conductivity are estimated using Paul Hsieh's method. The estimated hydraulic parameters are compared with those from traditional pumping-test estimation. This study shows that hydraulic parameters can be estimated by analyzing water level data in the frequency domain alone. The approach is low cost and environmentally friendly, and should be considered for future hydraulic parameter estimation.
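The first step of this workflow, locating tidal constituents by FFT in a detrended hourly record, can be sketched as follows; the record is synthetic (an M2-like tide plus a slow barometric swing and noise), and the amplitudes are illustrative.

```python
# Hedged sketch: FFT of a detrended synthetic hourly water-level record to
# locate the dominant tidal frequency.
import numpy as np

dt_hours = 1.0                                # hourly sampling, as in the study
t = np.arange(0, 90 * 24, dt_hours)           # 90 days
m2_freq = 1 / 12.42                           # cycles/hour, principal lunar tide
level = (0.02 * np.sin(2 * np.pi * m2_freq * t)
         + 0.10 * np.sin(2 * np.pi * t / 300.0)   # slow barometric swing
         + 0.005 * np.random.default_rng(7).normal(size=t.size))

detrended = level - np.polyval(np.polyfit(t, level, 1), t)
spec = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, d=dt_hours)   # cycles/hour

band = freqs > 0.05                           # look above the slow barometric band
peak = freqs[band][np.argmax(spec[band])]
print(f"dominant tidal frequency: {peak:.5f} cyc/hr (M2 is {m2_freq:.5f})")
```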
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
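A simulation sketch of the special case described above, using a deliberately misspecified main-terms Poisson working model; with randomized treatment, the fitted treatment coefficient tracks the marginal log rate ratio even though the true outcome model contains an interaction. Data are simulated.

```python
# Hedged sketch: misspecified main-terms Poisson working model still
# recovers the marginal log rate ratio under randomization.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 20000
A = rng.integers(0, 2, n)                       # randomized treatment
W = rng.normal(size=n)                          # baseline covariate
rate = np.exp(0.3 * A + 0.8 * W + 0.4 * A * W)  # truth has an interaction
Y = rng.poisson(rate)

X = sm.add_constant(np.column_stack([A, W]))    # misspecified: main terms only
fit = sm.GLM(Y, X, family=sm.families.Poisson()).fit()

marginal = np.log(Y[A == 1].mean() / Y[A == 0].mean())
print(f"working-model coef for A: {fit.params[1]:.3f}  "
      f"unadjusted marginal log rate ratio: {marginal:.3f}")
```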
Nanohole Structuring for Improved Performance of Hydrogenated Amorphous Silicon Photovoltaics.
Johlin, Eric; Al-Obeidi, Ahmed; Nogay, Gizem; Stuckelberger, Michael; Buonassisi, Tonio; Grossman, Jeffrey C
2016-06-22
While low hole mobilities limit the current collection and efficiency of hydrogenated amorphous silicon (a-Si:H) photovoltaic devices, attempts to improve mobility of the material directly have stagnated. Herein, we explore a method of utilizing nanostructuring of a-Si:H devices to allow for improved hole collection in thick absorber layers. This is achieved by etching an array of 150 nm diameter holes into intrinsic a-Si:H and then coating the structured material with p-type a-Si:H and a conformal zinc oxide transparent conducting layer. The inclusion of these nanoholes yields relative power conversion efficiency (PCE) increases of ∼45%, from 7.2 to 10.4% PCE for small area devices. Comparisons of optical properties, time-of-flight mobility measurements, and internal quantum efficiency spectra indicate this efficiency is indeed likely occurring from an improved collection pathway provided by the nanostructuring of the devices. Finally, we estimate that through modest optimizations of the design and fabrication, PCEs of beyond 13% should be obtainable for similar devices.
Improvement of automatic control system for high-speed current collectors
NASA Astrophysics Data System (ADS)
Sidorov, O. A.; Goryunov, V. N.; Golubkov, A. S.
2018-01-01
The article considers ways of regulating pantographs to provide quality and reliability of current collection at high speeds. To assess the impact of regulation, an integral criterion of current collection quality was proposed, taking into account the efficiency and reliability of pantograph operation. The study was carried out using a mathematical model of the interaction between the pantograph and the catenary system, which allows the contact force and the intensity of arcing in the contact zone to be assessed at different speeds. The simulation results allowed us to estimate the efficiency of different methods of pantograph regulation and to determine the best option.
Effectiveness of Condition-Based Maintenance in Army Aviation
2009-06-12
...increase in efficiency in dollars spent per operational flight hour, the data set was too small to draw major conclusions. Recommendations for...
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which are more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of the adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
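As a simpler stand-in for the trigonometric-spline estimator, here is a one-dimensional circular kernel density estimate with a von Mises kernel (phi angle only, for brevity); the angles are synthetic and the kernel concentration is an arbitrary choice.

```python
# Hedged sketch: circular KDE for backbone dihedral angles using a
# von Mises kernel; a simple alternative, not the paper's spline method.
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(9)
phi = np.concatenate([rng.vonmises(-2.0, 8, 300),    # beta-like cluster
                      rng.vonmises(-1.0, 6, 200)])   # alpha-like cluster

def circular_kde(theta_grid, data, kappa=25.0):
    """Average a von Mises kernel centered at each observation."""
    return np.mean([vonmises.pdf(theta_grid, kappa, loc=d) for d in data], axis=0)

grid = np.linspace(-np.pi, np.pi, 360)
density = circular_kde(grid, phi)
print("mode at phi =", grid[np.argmax(density)])
```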
Noninvasive methods for monitoring bear population trends
Kendall, Katherine
2010-01-01
The U.S. Geological Survey began a grizzly bear research project in 2009 in the Northern Continental Divide Ecosystem (NCDE) of northwestern Montana. This work uses hair collection and DNA analysis methods similar to those used in the 2004 Northern Divide Grizzly Bear Project. However, instead of producing a snapshot of population size, the objectives of this new work are to estimate population growth rates by collecting hair at natural bear rubs along trails, roads, and fence and power lines. This approach holds promise of providing reliable estimates of population trends in an efficient, cost-effective, and unobtrusive way.
Highly Efficient Nd:yag Lasers for Free-space Optical Communications
NASA Technical Reports Server (NTRS)
Sipes, D. L., Jr.
1985-01-01
A highly efficient Nd:YAG laser end-pumped by semiconductor lasers as a possible free-space optical communications source is discussed. Because this concept affords high pumping densities, a long absorption length, and excellent mode-matching characteristics, it is estimated that electrical-to-optical efficiencies greater than 5% could be achieved. Several engineering aspects such as resonator size and configuration, pump collecting optics, and thermal effects are also discussed. Finally, possible methods for combining laser-diode pumps to achieve higher output powers are illustrated.
2014-09-01
Deep natural language understanding, efficient inference, pragmatics, background knowledge... an effective and efficient way to marshal inferences from background knowledge. (N00014-13-1-0228, Dr. David McDonald, Smart Information Flow Technologies.)
ERIC Educational Resources Information Center
Ford, Norman C.; Kane, Joseph W.
1971-01-01
Proposes a method of collecting solar energy by using available plastics for Fresnel lenses to focus heat onto a converter where thermal dissociation of water would produce hydrogen. The hydrogen would be used as an efficient non-polluting fuel. Cost estimates are included. (AL)
Optimal nonimaging integrated evacuated solar collector
NASA Astrophysics Data System (ADS)
Garrison, John D.; Duff, W. S.; O'Gallagher, Joseph J.; Winston, Roland
1993-11-01
A nonimaging integrated evacuated solar collector for solar thermal energy collection is discussed in which the lower portion of the tubular glass vacuum envelope is shaped, and its inside surface mirrored, to optimally concentrate sunlight onto an absorber tube in the vacuum. This design uses vacuum to eliminate heat loss from the absorber surface by conduction and convection of air; soda-lime glass for the vacuum envelope material to lower cost; optimal nonimaging concentration integrated with the glass vacuum envelope to lower cost and improve solar energy collection; and a selective absorber surface with high absorptance and low emittance to reduce radiative heat loss and improve energy collection efficiency. This leads to a very low heat loss collector with high optical collection efficiency, which can operate at temperatures up to the order of 250 °C with good efficiency while being lower in cost than current evacuated solar collectors. Cost estimates are presented which indicate that this solar collector system can be competitive with fossil fuel heat sources when produced in sufficient volume. Nonimaging concentration, which reduces cost while improving performance and allows efficient solar energy collection without tracking the sun, is a key element of this design.
Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L
2017-11-01
When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results while increasing the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined whether a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute to the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
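A sketch of the validation procedure on simulated bout-structured behavior, applying the three regression criteria; the bout lengths, animal count, and observation window are arbitrary assumptions rather than the study's recordings.

```python
# Hedged sketch: subsample a simulated continuously observed behavior at
# fixed scan intervals and apply the R^2 / slope / intercept criteria.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
minutes = 14 * 60
animals = 18

def simulate_animal():
    """1-min resolution indicator of a behavior with long bouts."""
    state, out = rng.integers(0, 2), []
    while len(out) < minutes:
        bout = rng.geometric(1 / 40)           # ~40-min mean bout length (assumed)
        out.extend([state] * bout)
        state = 1 - state
    return np.array(out[:minutes])

data = np.array([simulate_animal() for _ in range(animals)])
true_totals = data.sum(axis=1)                 # continuous-sampling minutes

for interval in (5, 10, 15, 20):
    scans = data[:, ::interval]
    est = scans.mean(axis=1) * minutes         # scale scan hits to minutes
    slope, intercept, r, *_ = stats.linregress(true_totals, est)
    print(f"{interval:>2}-min scans: R^2={r**2:.3f} slope={slope:.2f} "
          f"intercept={intercept:.1f}")
```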
Kock, Tobias J.; Liedtke, Theresa L.; Ekstrom, Brian K.; Tomka, Ryan G.; Rondorf, Dennis W.
2014-01-01
Collection of juvenile salmonids at Cowlitz Falls Dam is a critical part of the effort to restore salmon in the upper Cowlitz River because the majority of fish that are not collected at the dam pass downstream and enter a large reservoir where they become landlocked and lost to the anadromous fish population. However, the juvenile fish collection system at Cowlitz Falls Dam has failed to achieve annual collection goals since it first began operating in 1996. Since that time, numerous modifications to the fish collection system have been made and several prototype collection structures have been developed and tested, but these efforts have not substantially increased juvenile fish collection. Studies have shown that juvenile steelhead (Oncorhynchus mykiss), coho salmon (Oncorhynchus kisutch), and Chinook salmon (Oncorhynchus tshawytscha) tend to locate the collection entrances effectively, but many of these fish are not collected and eventually pass the dam through turbines or spillways. Tacoma Power developed a prototype weir box in 2009 to increase capture rates of juvenile salmonids at the collection entrances, and this device proved to be successful at retaining those fish that entered the weir. However, because of safety concerns at the dam, the weir box could not be deployed near a spillway gate where the prototype was tested, so the device was altered and re-deployed at a different location, where it was evaluated during 2013. The U.S. Geological Survey conducted an evaluation using radiotelemetry to monitor fish behavior near the weir box and collection flumes. The evaluation was conducted during April–June 2013. Juvenile steelhead and coho salmon (45 per species) were tagged with a radio transmitter and passive integrated transponder (PIT) tag, and released upstream of the dam. All tagged fish moved downstream and entered the forebay of Cowlitz Falls Dam. Median travel times from the release site to the forebay were 0.8 d for steelhead and 1.2 d for coho salmon. Most fish spent several days in the dam forebay; median forebay residence times were 4.4 d for juvenile steelhead and 5.7 d for juvenile coho salmon. A new radio transmitter model was used during the study period. The transmitter had low detection probabilities on underwater antennas located within the collection system, which prevented us from reporting performance metrics (discovery efficiency, entrance efficiency, retention efficiency) that are traditionally used to evaluate fish collection systems. Most tagged steelhead (98 percent) and coho salmon (84 percent) were detected near the weir box or collection flume entrances during the study period; 39 percent of tagged steelhead and 55 percent of tagged coho salmon were detected at both entrances. Sixty-three percent of the tagged steelhead that were detected at both entrances were first detected at the weir box, compared to 52 percent of the coho salmon. Twelve steelhead and 15 coho salmon detected inside the weir box eventually left the device and were collected in collection flumes or passed the dam. Overall, collection rates were relatively high during the study period. Sixty-five percent of the steelhead and 80 percent of the coho salmon were collected during the study, and most of the remaining fish passed the dam and entered the tailrace (24 percent of steelhead; 13 percent of coho salmon). The remaining 11 percent of steelhead and 7 percent of coho salmon did not pass the dam while their transmitters were operating. 
We were able to confirm collection of tagged fish at the fish facility using three approaches: (1) detection of radio transmitters in study fish; (2) detection of PIT-tags in study fish; and (3) observation of study fish by staff at the fish facility. Data from all three methods were used to develop a multistate mark-recapture model that estimated detection probabilities for the various monitoring methods. These estimates then were used to describe the percentage of tagged fish that were collected through the weir box and collection flumes. Detection probabilities of PIT-tag antennas in the collection flumes were 0.895 for juvenile steelhead and 0.881 for juvenile coho salmon, whereas radiotelemetry detection probabilities were 0.654 and 0.646 for the two species, respectively. The multistate model estimates showed that all steelhead and most coho salmon (94.5 percent) that were collected at the dam entered the collection system through the flumes rather than through the weir box. None of the tagged steelhead and only 5.5 percent of the tagged coho salmon were collected through the weir box. These data show that juvenile steelhead and coho salmon collection rates were much higher through the collection flumes than through the weir box. Low detection probabilities of tagged fish in the fish collection system resulted in uncertainty for some aspects of our evaluation. Missing detection records within the collection system for fish that were known to have been collected resulted in four tagged steelhead and seven tagged coho salmon being removed from the dataset used to assess discovery rates of the weir box and collection flumes. However, the multistate model allowed us to provide unbiased estimates of the percentage of tagged fish that were collected through each route, and these data showed that few fish were collected through the weir box. Overall, the fish collection system performed reasonably well in collecting juvenile steelhead and coho salmon during the 2013 collection season. Fish collection efficiency estimates from the Washington Department of Fish and Wildlife showed that steelhead collection efficiency was slightly higher than the 10-year average (46 percent compared to 42 percent), whereas coho salmon collection efficiency was more than twice as high as the 10-year average (63 percent compared to 30 percent). However, the performance of the weir box was poor because most fish were collected through the collection flumes.
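Assuming the three detection methods operate independently, their probabilities combine multiplicatively; a minimal sketch of the combined detection chance, where the PIT (0.895) and radio (0.654) values come from the abstract and the visual-observation probability is a made-up placeholder:

```python
# Combined detection probability across independent methods: a collected
# fish is missed only if every method misses it. PIT (0.895) and radio
# (0.654) are the steelhead values reported above; the visual-observation
# probability (0.5) is an invented placeholder.
def combined_detection(probs):
    miss = 1.0
    for p in probs:
        miss *= 1.0 - p
    return 1.0 - miss

print(round(combined_detection([0.895, 0.654, 0.5]), 3))  # -> 0.982
```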
Measurement and modeling of intrinsic transcription terminators
Cambray, Guillaume; Guimaraes, Joao C.; Mutalik, Vivek K.; Lam, Colin; Mai, Quynh-Anh; Thimmaiah, Tim; Carothers, James M.; Arkin, Adam P.; Endy, Drew
2013-01-01
The reliable forward engineering of genetic systems remains limited by the ad hoc reuse of many types of basic genetic elements. Although a few intrinsic prokaryotic transcription terminators are used routinely, termination efficiencies have not been studied systematically. Here, we developed and validated a genetic architecture that enables reliable measurement of termination efficiencies. We then assembled a collection of 61 natural and synthetic terminators that collectively encode termination efficiencies across an ∼800-fold dynamic range within Escherichia coli. We simulated co-transcriptional RNA folding dynamics to identify competing secondary structures that might interfere with terminator folding kinetics or impact termination activity. We found that structures extending beyond the core terminator stem are likely to increase terminator activity. By excluding terminators encoding such context-confounding elements, we were able to develop a linear sequence-function model that can be used to estimate termination efficiencies (r = 0.9, n = 31) better than models trained on all terminators (r = 0.67, n = 54). The resulting systematically measured collection of terminators should improve the engineering of synthetic genetic systems and also advance quantitative modeling of transcription termination. PMID:23511967
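A minimal sketch of a linear sequence-function model of the kind described, with synthetic data; the three features (hairpin folding energy, U-tract length, loop size) and all values are illustrative assumptions, not the authors' data or exact feature set:

```python
# Sketch of a linear sequence-function model for termination efficiency,
# fitted and scored by Pearson correlation as in the study above.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 31                                    # terminators without confounding structures
X = np.column_stack([
    rng.uniform(-30.0, -5.0, n),          # hairpin folding energy (kcal/mol), assumed
    rng.integers(4, 10, n),               # U-tract length (nt), assumed
    rng.integers(3, 9, n),                # hairpin loop size (nt), assumed
])
log_te = -0.08 * X[:, 0] + 0.3 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 0.4, n)

model = LinearRegression().fit(X, log_te)
r, _ = pearsonr(model.predict(X), log_te)
print(f"Pearson r = {r:.2f}")             # the paper reports r = 0.9 for n = 31
```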
A Case Study in Transnational Crime: Ukraine and Modern Slavery
2007-06-01
remained unable to appropriate resources or plan efficiently. The full extent of the decline remains unknown, because statistics were manipulated to hide...
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
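A minimal sketch of the constrained least squares idea using cvxpy, assuming Euler's rigid-body equation and synthetic test data; the 0.1*I lower bound stands in for the paper's LMI bounds, which are not reproduced here:

```python
# Constrained least squares inertia estimation as a semidefinite program:
# minimize the residual of J*dw + w x (J*w) = tau over symmetric J, subject
# to an (illustrative) positive-definiteness bound.
import numpy as np
import cvxpy as cp

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(1)
J_true = np.diag([2.0, 3.0, 4.0])
omegas = rng.normal(size=(50, 3))       # angular velocities (rad/s)
domegas = rng.normal(size=(50, 3))      # angular accelerations (rad/s^2)
taus = np.array([J_true @ dw + np.cross(w, J_true @ w)
                 for w, dw in zip(omegas, domegas)])
taus += 0.01 * rng.normal(size=taus.shape)   # measurement noise

J = cp.Variable((3, 3), symmetric=True)
cost = sum(cp.sum_squares(J @ dw + skew(w) @ (J @ w) - tau)
           for w, dw, tau in zip(omegas, domegas, taus))
cp.Problem(cp.Minimize(cost), [J >> 0.1 * np.eye(3)]).solve()
print(np.round(J.value, 2))             # close to diag(2, 3, 4)
```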
Sawant, Pramilla D; Kumar, Suja Arun; Wankhede, Sonal; Rao, D D
2018-06-01
In-vitro bioassay monitoring generally involves analysis of overnight urine samples (~12 h) collected from radiation workers to estimate the excretion rate of radionuclides from the body. The unknown duration of sample collection (10-16 h) adds to the overall uncertainty in the computation of internal dose. In order to minimize this, the IAEA recommends measurement of specific gravity or the creatinine excretion rate in urine. Creatinine is excreted at a steady rate by normally functioning kidneys and can therefore be used as a normalization factor to infer the duration of collection and/or dilution of the sample, if any. The present study reports the standardized chemical procedure and its application to the estimation of creatinine as well as the creatinine coefficient in normal healthy individuals. Observations indicate high inter-subject variability and low constancy in the daily excretion of creatinine for the same subject. Thus, the creatinine excretion rate may not be a useful indicator for extrapolating to a 24 h sample collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
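A toy illustration of creatinine normalization under the very assumption the study questions, a constant daily creatinine output; the 1500 mg/day figure and sample values are placeholders:

```python
# Toy creatinine normalization: infer collection duration and scale a
# measured activity to a 24-h equivalent, assuming constant daily output.
DAILY_CREATININE_MG = 1500.0   # assumed 24-h creatinine excretion (placeholder)

def inferred_duration_h(sample_creatinine_mg):
    """Infer the collection duration from the creatinine in the sample."""
    return 24.0 * sample_creatinine_mg / DAILY_CREATININE_MG

def daily_excretion(activity_bq, sample_creatinine_mg):
    """Scale a measured radionuclide activity to a 24-h equivalent."""
    return activity_bq * DAILY_CREATININE_MG / sample_creatinine_mg

print(inferred_duration_h(750.0))   # -> 12.0 h, half the daily creatinine
print(daily_excretion(4.0, 750.0))  # -> 8.0 Bq/day
```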
Pak, S I; Chang, K S
2006-12-01
A Venturi scrubber involves a dispersed three-phase flow of gas, dust, and liquid. Atomization of the liquid jet and interaction between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interactive three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. Gas flow is solved using the Eulerian approach with the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved using the Lagrangian approach. This model includes interaction between gas and droplets, atomization of the liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency and showed good agreement.
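The Lagrangian side of such a scheme can be hinted at with a toy droplet integrator; this sketch uses Stokes drag only and omits the full B-B-O forces, atomization, breakup, and dust-capture models, with all values illustrative:

```python
# Toy Lagrangian droplet step with Stokes drag in a uniform gas stream.
import numpy as np

def step(x, v, u_gas, tau_p, dt):
    """Advance droplet position and velocity; tau_p is the particle response time."""
    a = (u_gas - v) / tau_p        # Stokes drag acceleration
    return x + v * dt, v + a * dt

x, v = np.zeros(3), np.zeros(3)
u_gas = np.array([60.0, 0.0, 0.0])  # throat gas velocity (m/s), assumed
for _ in range(1000):
    x, v = step(x, v, u_gas, tau_p=1e-3, dt=1e-5)
print(v)                             # droplet velocity approaches the gas velocity
```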
Energy use in the New Zealand food system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patterson, M.G.; Earle, M.D.
1985-03-01
The study covered the total energy requirements of the production, processing, wholesale distribution, retailing, shopping and household sectors of the food system in New Zealand. This included the direct energy requirements, and the indirect energy requirements in supplying materials, buildings and equipment. Data were collected from a wide range of literature sources, and converted into forms required for this research project. Also, data were collected in supplementary sample surveys at the wholesale distribution, retailing and shopping sectors. The details of these supplementary surveys are outlined in detailed survey reports fully referenced in the text. From these base data, the total energy requirements per unit product (MJ/kg) were estimated for a wide range of food chain steps. Some clear alternatives in terms of energy efficiency emerged from a comparison of these estimates. For example, it was found that it was most energy efficient to use dehydrated vegetables, followed by fresh vegetables, freeze dried vegetables, canned vegetables and then finally frozen vegetables.
Energy Savings Lifetimes and Persistence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Ian M.; Schiller, Steven R.; Todd, Annika
2016-02-01
This technical brief explains the concepts of energy savings lifetimes and savings persistence and discusses how program administrators use these factors to calculate savings for efficiency measures, programs and portfolios. Savings lifetime is the length of time that one or more energy efficiency measures or activities save energy, and savings persistence is the change in savings throughout the functional life of a given efficiency measure or activity. Savings lifetimes are essential for assessing the lifecycle benefits and cost effectiveness of efficiency activities and for forecasting loads in resource planning. The brief also provides estimates of savings lifetimes derived from a national collection of costs and savings for electric efficiency programs and portfolios.
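A back-of-the-envelope sketch of how lifetime and persistence combine into lifecycle savings; all figures are illustrative, not from the brief's data collection:

```python
# Lifecycle savings from first-year savings, a lifetime, and a constant
# per-year persistence factor (invented numbers).
first_year_mwh = 1200.0
lifetime_years = 12
persistence = 0.97            # fraction of savings retained each year

lifecycle_mwh = sum(first_year_mwh * persistence**t for t in range(lifetime_years))
print(round(lifecycle_mwh))   # lifecycle savings in MWh
```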
Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki
2010-10-15
Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they remain uncertain because they depend sensitively on many factors, such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. In total, 534 measurements of mercury removal efficiency for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. Data series of all APCD types had Gaussian log-normality. The average removal efficiency with a 95% confidence interval for each APCD was estimated. The FA, WS, and FF with carbon and/or dry sorbent injection systems had average removal efficiencies of 75% to 82%. On the other hand, the ESP with or without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, made less than a 1% difference to the removal efficiency. The injection of activated carbon versus carbon-containing fly ash in the FF system made less than a 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003. This resulted in an additional emission reduction of about 0.86 Mg in 2003. Applying the methodology of this study to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories. Copyright © 2010 Elsevier B.V. All rights reserved.
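A sketch of the statistical treatment described: log-transform the efficiencies, form a 95% confidence interval on the mean of the logs, and back-transform (a geometric-mean interval). The sample values are invented, not the 534 collected measurements:

```python
# Geometric-mean 95% CI for a removal-efficiency series with
# Gaussian log-normality.
import numpy as np
from scipy import stats

eff = np.array([70.0, 85.0, 78.0, 92.0, 60.0, 81.0, 88.0, 75.0])  # % removal (invented)
logs = np.log(eff)
m, se = logs.mean(), stats.sem(logs)
lo, hi = stats.t.interval(0.95, len(logs) - 1, loc=m, scale=se)
print(f"mean {np.exp(m):.1f}%, 95% CI ({np.exp(lo):.1f}%, {np.exp(hi):.1f}%)")
```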
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task were examined and a detailed analysis report produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
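A minimal segmented ("two-phase") regression sketch in the spirit of TPR: fit two linear pieces of task progress against time, choosing the breakpoint with the lowest total squared error, then extrapolate the finishing time. This illustrates the idea only and is not the paper's exact formulation:

```python
# Two-phase fit of task progress (%) vs. time, with breakpoint search.
import numpy as np

def two_phase_fit(t, y):
    best = (np.inf, None)
    for k in range(2, len(t) - 2):                 # candidate breakpoints
        p1 = np.polyfit(t[:k], y[:k], 1)
        p2 = np.polyfit(t[k:], y[k:], 1)
        sse = (np.sum((np.polyval(p1, t[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, t[k:]) - y[k:]) ** 2))
        if sse < best[0]:
            best = (sse, (k, p1, p2))
    return best[1]

rng = np.random.default_rng(2)
t = np.arange(20.0)                                # seconds since task start
y = np.where(t < 8, 5.0 * t, 40.0 + 1.5 * (t - 8)) + rng.normal(0, 0.3, 20)

k, p1, p2 = two_phase_fit(t, y)
finish = t[-1] + (100.0 - y[-1]) / p2[0]           # extrapolate the slow phase
print(f"breakpoint near t={t[k]:.0f}s, predicted finish at t={finish:.1f}s")
```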
Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi
2011-08-01
Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%, whereas the efficiency using practical resource capacity was 96%. According to these results, the proportion of non-value-added time was estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing. Time-driven activity-based costing is therefore recommended as a performance evaluation framework for nursing departments based on cost management.
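Time-driven ABC reduces to a simple product, capacity cost rate times time consumed; a sketch with invented figures, not the unit's data:

```python
# Time-driven ABC in one step: activity cost = capacity cost rate
# (cost per available staff minute) x minutes consumed.
cost_per_minute = 500.0   # won per staff minute (assumed)
minutes = {"assessment": 3200.0, "medication": 5400.0, "documentation": 4100.0}
costs = {k: v * cost_per_minute for k, v in minutes.items()}
print(costs, sum(costs.values()))
```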
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
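A sketch of the Gaussian-likelihood idea: treat each pool average as approximately normal with the lognormal mean and variance divided by the pool size, then maximize the likelihood over (mu, sigma). Data are synthetic, not the NHANES analysis:

```python
# Gaussian likelihood for pooled lognormal data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
mu_true, sigma_true, pool_size, n_pools = 1.0, 0.5, 8, 60
pools = rng.lognormal(mu_true, sigma_true, (n_pools, pool_size)).mean(axis=1)

def neg_loglik(theta):
    m, s = theta
    mean = np.exp(m + s**2 / 2)                               # lognormal mean
    var = (np.exp(s**2) - 1.0) * np.exp(2 * m + s**2) / pool_size
    return -norm.logpdf(pools, mean, np.sqrt(var)).sum()

fit = minimize(neg_loglik, x0=[0.0, 1.0], method="L-BFGS-B",
               bounds=[(-5.0, 5.0), (1e-3, 5.0)])
print(fit.x)   # estimates of (mu, sigma), close to (1.0, 0.5)
```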
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble
2016-06-17
Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd
2007-01-01
Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390
Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Mathew; Wilson, Mollye C; Rudolph, Todd
2007-02-01
Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of +/-0.12, and for painted wallboard it was 0.29 with a standard deviation of +/-0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of +/-0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis.
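One plausible use of the reported efficiencies, assuming the wipe-recovery and extraction losses act sequentially and independently, is to back-correct an observed plate count; whether the two factors should be compounded this way is our assumption, not stated in the abstract:

```python
# Back-correcting an observed plate count for recovery and extraction losses.
recovery, extraction = 0.35, 0.93    # reported means (stainless steel)
observed_cfu = 200.0                 # invented observation
estimated_surface_cfu = observed_cfu / (recovery * extraction)
print(round(estimated_surface_cfu))  # -> 614
```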
2013-01-01
Background In recent years, there has been growing interest in measuring the efficiency of hospitals in Iran and several studies have been conducted on the topic. The main objective of this paper was to review studies in the field of hospital efficiency and examine the estimated technical efficiency (TE) of Iranian hospitals. Methods Persian and English databases were searched for studies related to measuring hospital efficiency in Iran. Ordinary least squares (OLS) regression models were applied for statistical analysis. The PRISMA guidelines were followed in the search process. Results A total of 43 efficiency scores from 29 studies were retrieved and used to approach the research question. Data envelopment analysis was the principal frontier efficiency method in the estimation of efficiency scores. The pooled estimate of mean TE was 0.846 (±0.134). There was a considerable variation in the efficiency scores between the different studies performed in Iran. There were no differences in efficiency scores between data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques. The reviewed studies are generally similar and suffer from similar methodological deficiencies, such as no adjustment for case mix and quality of care differences. The results of OLS regression revealed that studies that included more variables and more heterogeneous hospitals generally reported higher TE. Larger sample size was associated with reporting lower TE. Conclusions The features of frontier-based techniques had a profound impact on the efficiency scores among Iranian hospital studies. These studies suffer from major methodological deficiencies and were of sub-optimal quality, limiting their validity and reliability. It is suggested that improving data collection and processing in Iranian hospital databases may have a substantial impact on promoting the quality of research in this field. PMID:23945011
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
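The three-parameter psychometric function being estimated can be written compactly; this sketch uses a logistic core and a 2AFC guess rate of 0.5, which may differ from the study's exact parameterization:

```python
# Three-parameter psychometric function: threshold alpha, slope beta,
# lapse rate lam, with floor at the guess rate and ceiling at 1 - lam.
import numpy as np

def psychometric(x, alpha, beta, lam, guess=0.5):
    core = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
    return guess + (1.0 - guess - lam) * core

x = np.linspace(-3, 3, 7)   # gap durations, arbitrary units
print(np.round(psychometric(x, alpha=0.0, beta=2.0, lam=0.05), 3))
```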
How Does EIA Estimate Energy Consumption and End Uses in U.S. Homes?
2011-01-01
The Energy Information Administration (EIA) administers the Residential Energy Consumption Survey (RECS) to a nationally representative sample of housing units. Specially trained interviewers collect energy characteristics of the housing unit, usage patterns, and household demographics. This information is combined with data from energy suppliers to these homes to estimate energy costs and usage for heating, cooling, appliances, and other end uses: information critical to meeting future energy demand and improving efficiency and building design.
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
Using an electronic compass to determine telemetry azimuths
Cox, R.R.; Scalf, J.D.; Jamison, B.E.; Lutz, R.S.
2002-01-01
Researchers typically collect azimuths from known locations to estimate locations of radiomarked animals. Mobile, vehicle-mounted telemetry receiving systems frequently are used to gather azimuth data. Use of mobile systems typically involves estimating the vehicle's orientation to grid north (vehicle azimuth), recording an azimuth to the transmitter relative to the vehicle azimuth from a fixed rosette around the antenna mast (relative azimuth), and subsequently calculating an azimuth to the transmitter (animal azimuth). We incorporated electronic compasses into standard null-peak antenna systems by mounting the compass sensors atop the antenna masts and evaluated the precision of this configuration. This system increased efficiency by eliminating vehicle orientation and calculations to determine animal azimuths and produced estimates of precision (azimuth SD=2.6 deg., SE=0.16 deg.) similar to systems that required orienting the mobile system to grid north. Using an electronic compass increased efficiency without sacrificing precision and should produce more accurate estimates of locations when marked animals are moving or when vehicle orientation is problematic.
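The azimuth arithmetic that the electronic compass eliminates is a one-line modular sum:

```python
# Animal azimuth = vehicle azimuth + relative (rosette) azimuth, mod 360.
def animal_azimuth(vehicle_az_deg, relative_az_deg):
    return (vehicle_az_deg + relative_az_deg) % 360.0

print(animal_azimuth(350.0, 45.0))  # -> 35.0
```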
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
NASA Astrophysics Data System (ADS)
Samiksha, S.; Raman, R. S.; Singh, A.
2016-12-01
It is now well recognized that black carbon (a component of aerosols that is similar but not identical to elemental carbon) is an important contributor to global warming, second only to CO2. However, the most popular methods for estimation of black carbon rely on accurate estimates of its mass absorption efficiency (MAE) to convert optical attenuation measurements to black carbon concentrations. Often a constant, manufacturer-specified MAE is used for this purpose. Recent literature has unequivocally established that MAE shows large spatio-temporal heterogeneities. This is so because MAE depends on emission sources, chemical composition, and mixing state of aerosols. In this study, ambient PM2.5 samples were collected over an ecologically sensitive zone (Van Vihar National Park) in Bhopal, Central India for two years (01 January, 2012 to 31 December, 2013). Samples were collected on Teflon, Nylon, and Tissue quartz filter substrates. Punches of quartz fibre filter were analysed for organic and elemental carbon (OC/EC) by a thermal-optical-transmittance/reflectance (TOT-TOR) analyser operating with a 632 nm laser diode. Teflon filters were also used to independently measure PM2.5 attenuation (at 370 nm and 800 nm) by transmissometry. Site-specific mass absorption efficiency (MAE) for elemental carbon over the study site will be derived using a combination of measurements from the TOT/TOR analyser and transmissometer. An assessment of site-specific MAE values, their temporal variability, and implications for black carbon radiative forcing will be discussed.
2012-01-01
Background Data collection for economic evaluation alongside clinical trials is burdensome and cost-intensive. Limiting both the frequency of data collection and recall periods can solve the problem. As a consequence, gaps in survey periods arise and must be filled appropriately. The aims of our study are to assess the validity of incomplete cost data collection and define suitable resource categories. Methods In the randomised KORINNA study, cost data from 234 elderly patients were collected quarterly over a 1-year period. Different strategies for incomplete data collection were compared with complete data collection. The sample size calculation was modified in response to the elasticity of variance. Results Resource categories suitable for incomplete data collection were physiotherapy, ambulatory clinic in hospital, medication, consultations, outpatient nursing service and paid household help. Cost estimation from complete and incomplete data collection showed no difference when omitting information from one quarter. When omitting information from two quarters, costs were underestimated by 3.9% to 4.6%. Because of the increased standard deviation, the required sample size would increase by 3%. Nevertheless, more time was saved than would be required for the additional patients. Conclusion Cost data can be collected efficiently by reducing the frequency of data collection. This can be achieved by incomplete data collection for shortened periods or by complete data collection with extended recall windows. In our analysis, cost estimates per year for ambulatory healthcare and non-healthcare services based on three data collections were as valid and accurate as those from four complete data collections. In contrast, data on hospitalisation, rehabilitation stays and care insurance benefits should be collected for the entire target period, using extended recall windows. When applying the method of incomplete data collection, the sample size calculation has to be modified because of the increased standard deviation. This approach is suitable for enabling economic evaluation at lower cost to both study participants and investigators. Trial registration The trial registration number is ISRCTN02893746 PMID:22978572
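The simplest version of the gap-filling implied above is proportional scaling of the observed quarters to a full year; the study's comparisons are richer, and the costs below are invented:

```python
# Scale three observed quarters to an annual estimate.
quarterly_costs = [310.0, 280.0, 295.0]   # three of four quarters observed (invented)
annual_estimate = sum(quarterly_costs) * 4 / len(quarterly_costs)
print(annual_estimate)                     # -> 1180.0
```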
Optimal estimates of free energies from multistate nonequilibrium work data.
Maragakis, Paul; Spichty, Martin; Karplus, Martin
2006-03-17
We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
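For two states, the maximum-likelihood estimator from bidirectional work data reduces to the Bennett acceptance ratio; a sketch solving its self-consistency condition on synthetic Gaussian work values (in kT units) that satisfy the fluctuation theorem with a true free energy difference of 1.5 kT:

```python
# Two-state maximum-likelihood free energy from forward/reverse work data
# (Bennett acceptance ratio), solved by root finding.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
w_f = rng.normal(2.0, 1.0, 500)    # forward work:  dF + dissipation
w_r = rng.normal(-1.0, 1.0, 500)   # reverse work: -dF + dissipation

def bar_residual(df):
    m = np.log(len(w_f) / len(w_r))
    return (np.sum(1.0 / (1.0 + np.exp(m + w_f - df)))
            - np.sum(1.0 / (1.0 + np.exp(-m + w_r + df))))

print(f"dF = {brentq(bar_residual, -10.0, 10.0):.3f} kT")  # close to 1.5
```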
Journal: Efficient Hydrologic Tracer-Test Design for Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for the determination of basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed to facilitate the design of tracer tests by root determination of the one-dimensional advection-dispersion equation (ADE) using a preset average tracer concentration which provides a theoretical basis for an estimate of necessary tracer mass. The method uses basic measured field parameters (e.g., discharge, distance, cross-sectional area) that are combined in functional relationships that describe solute-transport processes related to flow velocity and time of travel. These initial estimates for time of travel and velocity are then applied to a hypothetical continuous stirred tank reactor (CSTR) as an analog for the hydrological-flow system to develop initial estimates for tracer concentration, tracer mass, and axial dispersion. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be necessary for descri
Tracer-Test Planning Using the Efficient Hydrologic Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be
EFFICIENT HYDROLOGICAL TRACER-TEST DESIGN (EHTD ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to
Integrating remote sensing, geographic information system and modeling for estimating crop yield
NASA Astrophysics Data System (ADS)
Salazar, Luis Alonso
This thesis explores various aspects of the use of remote sensing, geographic information system and digital signal processing technologies for broad-scale estimation of crop yield in Kansas. Recent dry and drought years in the Great Plains have emphasized the need for new sources of timely, objective and quantitative information on crop conditions. Crop growth monitoring and yield estimation can provide important information for government agencies, commodity traders and producers in planning harvest, storage, transportation and marketing activities. The sooner this information is available, the lower the economic risk, which translates into greater efficiency and increased return on investments. Weather data are normally used when forecasting crop yield, but collecting such data at the level of detail needed for effective predictions is typically feasible only at small research sites, because collection is expensive and time-consuming. In order for crop assessment systems to be economical, more efficient methods for data collection and analysis are necessary. The purpose of this research is to use satellite data, which provide 50 times more spatial information about the environment than the weather station network, in a short amount of time and at relatively low cost. Specifically, we use Advanced Very High Resolution Radiometer (AVHRR) based vegetation health (VH) indices as proxies for characterizing weather conditions.
Estimating Arrhenius parameters using temperature programmed molecular dynamics.
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
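A sketch of the estimation chain: the maximum-likelihood rate for exponential waiting times at each temperature is n divided by the summed waiting time, and a line through ln(k) versus 1/T yields the Arrhenius parameters. Waiting times are synthetic:

```python
# MLE rates from waiting times at several temperatures, then an Arrhenius fit.
import numpy as np

R = 8.314e-3                        # kJ/mol/K
A_true, Ea_true = 1e12, 45.0        # prefactor (1/s) and barrier (kJ/mol), assumed
rng = np.random.default_rng(5)

temps = np.array([500.0, 600.0, 700.0, 800.0])
ln_k = []
for T in temps:
    k = A_true * np.exp(-Ea_true / (R * T))
    waits = rng.exponential(1.0 / k, 400)           # waiting times at this T
    ln_k.append(np.log(len(waits) / waits.sum()))   # MLE: k_hat = n / sum(t)

slope, intercept = np.polyfit(1.0 / temps, ln_k, 1)
print(f"Ea ~ {-slope * R:.1f} kJ/mol, A ~ {np.exp(intercept):.2e} 1/s")
```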
Mathiassen, Svend Erik; Liv, Per; Wahlström, Jens
2012-01-01
In ergonomics, assessing the working postures of an individual by observation is a very common practice. The present study investigated whether monetary resources devoted to an observational study should preferably be invested in collecting many video recordings of the work, or in having several observers estimate postures from the available videos multiple times. On the basis of a data set of observed working postures among hairdressers, the necessary information in terms of posture variability, observer variability, and costs for recording and observing videos was entered into equations providing the total cost of data collection and the precision (informative value) of the resulting estimates of two variables: the percentages of time with the arm elevated <15 degrees and >90 degrees. In all, 160 data collection strategies, differing with respect to the number of video recordings and the number of repeated observations of each recording, were simulated and compared for cost and precision. For both posture variables, the most cost-efficient strategy for a given budget was to engage 4 observers to look at the available video recordings, rather than to have one observer look at more recordings. Since the latter strategy is the more common in ergonomics practice, we recommend reconsidering standard practice in observational posture assessment.
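The bookkeeping behind such simulations can be sketched with a two-component variance model and a linear cost model; all numbers are illustrative, not the hairdresser data:

```python
# Cost vs. precision: with n videos each observed m times, the estimator
# variance is s2_video/n + s2_obs/(n*m) and the cost is n*c_video + n*m*c_obs.
import itertools

s2_video, s2_obs = 40.0, 25.0      # variance components, (%time)^2 (assumed)
c_video, c_obs = 100.0, 20.0       # cost per recording / per observation (assumed)
budget = 4000.0

best = None
for n, m in itertools.product(range(5, 40), range(1, 5)):
    cost = n * c_video + n * m * c_obs
    if cost > budget:
        continue
    var = s2_video / n + s2_obs / (n * m)
    if best is None or var < best[0]:
        best = (var, n, m, cost)
print(best)   # (variance, n videos, m observations each, cost)
```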
Age and gender estimation using Region-SIFT and multi-layered SVM
NASA Astrophysics Data System (ADS)
Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun
2018-04-01
In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce the region-SIFT feature extraction method based on facial landmarks. First, we define sub-regions of the face. We then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was performed with the FERET database, the CACD database, and the DGIST_C database. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
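A minimal sketch of the multi-layered idea: a first SVM predicts gender, then a gender-specific SVM predicts the age group. Random vectors stand in for the region-SIFT + PCA/LDA features, and all labels are synthetic:

```python
# Layered SVMs: gender first, then age conditioned on the predicted gender.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 64))            # stand-in feature vectors
gender = rng.integers(0, 2, 300)
age_group = rng.integers(0, 4, 300)

gender_clf = SVC().fit(X, gender)
age_clf = {g: SVC().fit(X[gender == g], age_group[gender == g]) for g in (0, 1)}

def predict(x):
    g = int(gender_clf.predict(x.reshape(1, -1))[0])   # layer 1: gender
    a = int(age_clf[g].predict(x.reshape(1, -1))[0])   # layer 2: age given gender
    return g, a

print(predict(X[0]))
```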
Tracer constraints on organic particle transfer efficiency to the deep ocean
NASA Astrophysics Data System (ADS)
Weber, T. S.; Cram, J. A.; Deutsch, C. A.
2016-02-01
The "transfer efficiency" of sinking organic particles through the mesopelagic zone is a critical determinant of ocean carbon sequestration timescales, and the atmosphere-ocean partition of CO2. Our ability to detect large-scale variations in transfer efficiency is limited by the paucity of particle flux data from the deep ocean, and the potential biases of bottom-moored sediment traps used to collect it. Here we show that deep-ocean particle fluxes can be reconstructed by diagnosing the rate of phosphate accumulation and oxygen disappearance along deep circulation pathways in an observationally constrained Ocean General Circulation Model (OGCM). Combined with satellite and model estimates of carbon export from the surface ocean, these diagnosed fluxes reveal a global pattern of transfer efficiency to 1000m and 2000m that is high ( 20%) at high latitudes and negligible (<5%) throughout subtropical gyres, with intermediate values in the tropics. This pattern is at odds with previous estimates of deep transfer efficiency derived from bottom-moored sediment traps, but is consistent with upper-ocean flux profiles measured by neutrally buoyant sediment traps, which show strong attenuation of low latitude particle fluxes over the top 500m. Mechanistically, the pattern can be explained by spatial variations in particle size distributions, and the temperature-dependence of remineralization. We demonstrate the biogeochemical significance of our findings by comparing estimates of deep-ocean carbon sequestration in a scenario with spatially varying transfer efficiency to one with a globally uniform "Martin-curve" particle flux profile.
Is bigger better? An empirical analysis of waste management in New South Wales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carvalho, Pedro, E-mail: pedrotcc@gmail.com; CESUR – Center for Urban and Regional Systems, Instituto Superior Técnico, University of Lisboa, Av. Rovisco Pais, 1049-001 Lisbon; Marques, Rui Cunha, E-mail: rui.marques@tecnico.ulisboa.pt
Highlights: • We search for the most efficient cost structure for NSW household waste services. • We found that larger services are no longer efficient. • We found an optimal size in the range of 12,000–20,000 inhabitants. • We found significant economies of output density for household waste collection. • We found economies of scope in the joint provision of unsorted and recycling services. - Abstract: Across the world, rising demand for municipal solid waste services has seen an ongoing increase in the costs of providing these services. Moreover, municipal waste services have typically been provided through natural or legal monopolies, where few incentives exist to reduce costs. It is thus vital to examine empirically the cost structure of these services in order to develop effective public policies which can make these services more cost efficient. Accordingly, this paper considers economies of size and economies of output density in the municipal waste collection sector in the New South Wales (NSW) local government system in an effort to identify the optimal size of utilities from the perspective of cost efficiency. Our results show that, as presently constituted, NSW municipal waste services are not efficient in terms of costs, thereby demonstrating that 'bigger is not better.' The optimal size of waste utilities is estimated to fall in the range of 12,000–20,000 inhabitants. However, significant economies of output density were found for unsorted (residual) municipal waste collection and recycling waste collection, which means it is advantageous to increase the amount of waste collected while holding constant the number of customers and the intervention area.
76 FR 33768 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... inability to pay energy bills; (3) increase the efficiency of energy usage by low-income families, helping... Estimated reporting burden: REACH Model Plan, 51 respondents, 1 response per respondent, 72 burden hours per response, 3,672 total burden hours. Estimated Total Annual Burden Hours: 3,672. In compliance with the requirements of Section 3506(c)(2)(A) of the...
Worku, Yohannes; Muchie, Mammo
2012-01-01
Objective. The objective was to investigate factors that affect the efficient management of solid waste produced by commercial businesses operating in the city of Pretoria, South Africa. Methods. Data were gathered from 1,034 businesses. Efficiency in solid waste management was assessed using a structural time-based model designed to evaluate efficiency as a function of the length of time required to manage waste. Data analysis was performed using statistical procedures such as frequency tables, Pearson's chi-square tests of association, and binary logistic regression analysis. Odds ratios estimated from logistic regression analysis were used to identify key factors that affect efficiency in the proper disposal of waste. Results. The study showed that 857 of the 1,034 businesses selected for the study (83%) were sufficiently efficient with regard to the proper collection and disposal of solid waste. Based on odds ratios estimated from binary logistic regression analysis, efficiency in the proper management of solid waste was significantly influenced by 4 predictor variables. These 4 influential predictor variables are lack of adherence to waste management regulations, wrong perception, failure to provide customers with enough trash cans, and operation of businesses by employed managers, in decreasing order of importance. PMID:23209483
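A sketch of how such odds ratios are obtained: fit a binary logistic regression of the efficiency indicator on the predictors and exponentiate the coefficients. The data below are synthetic placeholders, not the Pretoria survey:

```python
# Odds ratios from a binary logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1034
X = rng.integers(0, 2, (n, 4))                 # four binary predictors (invented)
logit = -1.0 + X @ np.array([1.6, 1.1, 0.8, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(fit.params[1:]))                  # odds ratios for the predictors
```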
Laforest, Brandon J; Winegardner, Amanda K; Zaheer, Omar A; Jeffery, Nicholas W; Boyle, Elizabeth E; Adamowicz, Sarah J
2013-04-04
Biodiversity surveys have long depended on traditional methods of taxonomy to inform sampling protocols and to determine when a representative sample of a given species pool of interest has been obtained. Questions remain as to how to design appropriate sampling efforts to accurately estimate total biodiversity. Here we consider the biodiversity of freshwater ostracods (crustacean class Ostracoda) from the region of Churchill, Manitoba, Canada. Through an analysis of observed species richness and complementarity, accumulation curves, and richness estimators, we conduct an a posteriori analysis of five bioblitz-style collection strategies that differed in terms of total duration, number of sites, protocol flexibility to heterogeneous habitats, sorting of specimens for analysis, and primary purpose of collection. We used DNA barcoding to group specimens into molecular operational taxonomic units for comparison. Forty-eight provisional species were identified through genetic divergences, up from the 30 species previously known and documented in literature from the Churchill region. We found differential sampling efficiency among the five strategies, with liberal sorting of specimens for molecular analysis, protocol flexibility (and particularly a focus on covering diverse microhabitats), and a taxon-specific focus to collection having strong influences on garnering more accurate species richness estimates. Our findings have implications for the successful design of future biodiversity surveys and citizen-science collection projects, which are becoming increasingly popular and have been shown to produce reliable results for a variety of taxa despite relying on largely untrained collectors. We propose that efficiency of biodiversity surveys can be increased by non-experts deliberately selecting diverse microhabitats; by conducting two rounds of molecular analysis, with the numbers of samples processed during round two informed by the singleton prevalence during round one; and by having sub-teams (even if all non-experts) focus on select taxa. Our study also provides new insights into subarctic diversity of freshwater Ostracoda and contributes to the broader "Barcoding Biotas" campaign at Churchill. Finally, we comment on the associated implications and future research directions for community ecology analyses and biodiversity surveys through DNA barcoding, which we show here to be an efficient technique enabling rapid biodiversity quantification in understudied taxa.
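A standard nonparametric richness estimator of the kind used alongside accumulation curves is Chao1; a sketch using the bias-corrected form, with invented abundance counts per provisional species:

```python
# Bias-corrected Chao1: S_obs + f1*(f1-1) / (2*(f2+1)), where f1 and f2 are
# the numbers of species observed exactly once and exactly twice.
from collections import Counter

def chao1(abundances):
    f = Counter(abundances)
    return len(abundances) + f[1] * (f[1] - 1) / (2 * (f[2] + 1))

otu_counts = [12, 7, 1, 1, 2, 1, 5, 2, 1, 3]   # specimens per species (invented)
print(chao1(otu_counts))                        # -> 12.0
```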
An efficiency-decay model for Lumen maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobashev, Georgiy; Baldasaro, Nicholas G.; Mills, Karmann C.
Proposed is a multicomponent model for the estimation of light-emitting diode (LED) lumen maintenance using test data that were acquired in accordance with the test standards of the Illumination Engineering Society of North America, i.e., LM-80-08. Lumen maintenance data acquired with this test do not always follow exponential decay, particularly data collected in the first 1000 h or under low-stress (e.g., low temperature) conditions. This deviation from true exponential behavior makes it difficult to use the full data set in models for the estimation of the lumen maintenance decay coefficient. As a result, critical information that is relevant to the early life or low-stress operation of LED light sources may be missed. We present an efficiency-decay model approach, where all lumen maintenance data can be used to provide an alternative estimate of the decay rate constant. The approach considers a combined model wherein one part describes an initial “break-in” period and another part describes the decay in lumen maintenance. During the break-in period, several mechanisms within the LED can act to produce a small (typically <10%) increase in luminous flux. The effect of the break-in period and its longevity is more likely to be present at low ambient temperatures and currents, where the discrepancy between a standard TM-21 approach and our proposed model is the largest. For high temperatures and currents, the difference between the estimates becomes nonsubstantial. Finally, our approach makes use of all the collected data and avoids producing unrealistic estimates of the decay coefficient.
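To make the combined model concrete, here is a minimal curve-fitting sketch on synthetic data. The break-in term (1 - beta*exp(-t/tau)) multiplying an exponential decay is one plausible functional form consistent with the abstract, not the authors' published equations; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def lumen_maintenance(t, alpha, beta, tau):
    """Combined model: a 'break-in' term (1 - beta*exp(-t/tau)) that lets
    luminous flux rise slightly in early hours, times an exponential decay."""
    return (1.0 - beta * np.exp(-t / tau)) * np.exp(-alpha * t)

# Synthetic LM-80-style readings out to 10,000 h (illustrative values)
rng = np.random.default_rng(0)
t = np.linspace(0, 10000, 40)
y = lumen_maintenance(t, alpha=2e-5, beta=0.05, tau=800.0) \
    + rng.normal(0, 0.002, t.size)

# Fit all data, including the first 1000 h that a pure-exponential
# (TM-21-style) fit would have to discard
popt, _ = curve_fit(lumen_maintenance, t, y, p0=[1e-5, 0.02, 500.0])
alpha_hat, beta_hat, tau_hat = popt
print(f"decay rate {alpha_hat:.2e}/h, break-in size {beta_hat:.3f}, "
      f"break-in time {tau_hat:.0f} h")
```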
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were analyzed and reported. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
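As an illustration of the idea, the sketch below fits a piecewise-linear, two-phase model of task progress versus time and extrapolates the second phase to predict the finishing time. This is one plausible reading of a two-phase regression, not the paper's exact TPR formulation; the data are invented.

```python
import numpy as np

def two_phase_predict(t, progress):
    """Fit progress-vs-time with two linear phases (breakpoint chosen by
    least squares) and extrapolate the second phase to progress = 1.0."""
    best = (np.inf, None)
    for k in range(2, len(t) - 2):            # candidate breakpoints
        p1 = np.polyfit(t[:k], progress[:k], 1)
        p2 = np.polyfit(t[k:], progress[k:], 1)
        sse = (np.sum((np.polyval(p1, t[:k]) - progress[:k]) ** 2)
               + np.sum((np.polyval(p2, t[k:]) - progress[k:]) ** 2))
        if sse < best[0]:
            best = (sse, p2)
    slope, intercept = best[1]
    return (1.0 - intercept) / slope          # time at which progress hits 1.0

# Example: a task that is slow during an input-reading phase, then speeds up
t = np.array([0, 10, 20, 30, 40, 50, 60, 70.0])
progress = np.array([0, .05, .10, .15, .30, .45, .60, .75])
print(f"predicted finish at ~{two_phase_predict(t, progress):.0f} s")  # ~87 s
```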
Lietz, Henrike; Lingani, Moustapha; Sié, Ali; Sauerborn, Rainer; Souares, Aurelia; Tozan, Yesim
2015-01-01
Background: There are more than 40 Health and Demographic Surveillance System (HDSS) sites in 19 different countries. The running costs of HDSS sites are high. The financing of HDSS activities is of major importance, and adding external health surveys to the HDSS is challenging. To investigate the ways of improving data quality and collection efficiency in the Nouna HDSS in Burkina Faso, the stand-alone data collection activities of the HDSS and the Household Morbidity Survey (HMS) were integrated, and the paper-based questionnaires were consolidated into a single tablet-based questionnaire, the Comprehensive Disease Assessment (CDA). Objective: The aims of this study are to estimate and compare the implementation costs of the two different survey approaches for measuring population health. Design: All financial costs of stand-alone (HDSS and HMS) and integrated (CDA) surveys were estimated from the perspective of the implementing agency. Fixed and variable costs of survey implementation and key cost drivers were identified. The costs per household visit were calculated for both survey approaches. Results: While fixed costs of survey implementation were similar for the two survey approaches, there were considerable variations in variable costs, resulting in an estimated annual cost saving of about US$45,000 under the integrated survey approach. This was primarily because the costs of data management for the tablet-based CDA survey were considerably lower than for the paper-based stand-alone surveys. The cost per household visit from the integrated survey approach was US$21 compared with US$25 from the stand-alone surveys for collecting the same amount of information from 10,000 HDSS households. Conclusions: The CDA tablet-based survey method appears to be feasible and efficient for collecting health and demographic data in the Nouna HDSS in rural Burkina Faso. The possibility of using the tablet-based data collection platform to improve the quality of population health data requires further exploration. PMID:26257048
Efficiency analysis of betavoltaic elements
NASA Astrophysics Data System (ADS)
Sachenko, A. V.; Shkrebtii, A. I.; Korkishko, R. M.; Kostylyov, V. P.; Kulish, M. R.; Sokolovskyi, I. O.
2015-09-01
The conversion of energy of electrons produced by a radioactive β-source into electricity in Si and SiC p-n junctions is modeled. The features of the generation function that describes the electron-hole pair production by an electron flux and the emergence of a "dead layer" are discussed. The collection efficiency Q, which describes the rate of electron-hole pair production by incident beta particles, is calculated taking into account the presence of the dead layer. It is shown that in the case of high-grade Si p-n junctions, the collection efficiency of electron-hole pairs created by a high-energy electron flux (such as, e.g., a Pm-147 beta flux) is close to or equal to unity in a wide range of electron energies. For SiC p-n junctions, Q is near unity only for electrons with relatively low energies of about 5 keV (produced, e.g., by a tritium source) and decreases rapidly with further increase of electron energy. The conditions under which the influence of the dead layer on the collection efficiency is negligible are determined. The open-circuit voltage is calculated for realistic values of the minority carriers' diffusion coefficients and lifetimes in Si and SiC p-n junctions irradiated by a high-energy electron flux. Our calculations allow us to estimate the attainable efficiency of betavoltaic elements.
Control function assisted IPW estimation with a secondary outcome in case-control studies.
Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J
2017-04-01
Case-control studies are designed to study associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
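For reference, a minimal sketch of the standard IPW baseline that the authors improve upon: sampled cases and controls are re-weighted to their population shares (using the known disease prevalence) before regressing the secondary outcome on the risk factor. The function and data layout are illustrative assumptions, not the authors' control-function-augmented estimator.

```python
import numpy as np

def ipw_secondary_effect(x, y, d, prevalence):
    """Standard IPW regression of a secondary outcome y on risk factor x
    under case-control sampling on disease d (0/1): cases, over-represented
    in the sample, are weighted back to their population share."""
    frac_case = d.mean()                          # sample fraction of cases
    w = np.where(d == 1, prevalence / frac_case,
                 (1 - prevalence) / (1 - frac_case))
    X = np.column_stack([np.ones_like(x), x])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)    # weighted least squares
    return beta[1]                                # effect of x on mean of y
```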
NASA Astrophysics Data System (ADS)
Neher, Christopher; Duffield, John; Patterson, David
2013-09-01
The National Park Service (NPS) currently manages a large and diverse system of park units nationwide which received an estimated 279 million recreational visits in 2011. This article uses park visitor data collected by the NPS Visitor Services Project to estimate a consistent set of count data travel cost models of park visitor willingness to pay (WTP). Models were estimated using 58 different park unit survey datasets. WTP estimates for these 58 park surveys were used within a meta-regression analysis model to predict average and total WTP for NPS recreational visitation system-wide. Estimated WTP per NPS visit in 2011 averaged $102 system-wide, and ranged across park units from $67 to $288. Total 2011 visitor WTP for the NPS system is estimated at $28.5 billion with a 95% confidence interval of $19.7-$43.1 billion. The estimation of a meta-regression model using consistently collected data and identical specification of visitor WTP models greatly reduces problems common to meta-regression models, including sample selection bias, primary data heterogeneity, and heteroskedasticity, as well as some aspects of panel effects. The article provides the first estimate of total annual NPS visitor WTP within the literature directly based on NPS visitor survey data.
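In count-data travel cost models of this kind, consumer surplus per trip is commonly computed as -1/beta_cost from a Poisson (or negative binomial) regression of trips on travel cost. A minimal sketch on synthetic data follows; the dataset and coefficient values are invented, not the NPS survey data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical visitor data: annual trips vs round-trip travel cost ($)
rng = np.random.default_rng(1)
cost = rng.uniform(10, 300, 500)
trips = rng.poisson(np.exp(1.5 - 0.01 * cost))   # true beta_cost = -0.01

X = sm.add_constant(cost)
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
beta_cost = fit.params[1]
print(f"consumer surplus per trip: ${-1 / beta_cost:.0f}")   # ~ $100
```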
Linna, Miika; Häkkinen, Unto; Peltola, Mikko; Magnussen, Jon; Anthun, Kjartan S; Kittelsen, Sverre; Roed, Annette; Olsen, Kim; Medin, Emma; Rehnberg, Clas
2010-12-01
The aim of this study was to compare the performance of hospital care in four Nordic countries: Norway, Finland, Sweden and Denmark. Using national discharge registries and cost data from hospitals, cost efficiency in the production of somatic hospital care was calculated for public hospitals. Data were collected using harmonized definitions of inputs and outputs for 184 hospitals and data envelopment analysis was used to calculate Farrell efficiency estimates for the year 2002. Results suggest that there were marked differences in the average hospital efficiency between Nordic countries. In 2002, average efficiency was markedly higher in Finland compared to Norway and Sweden. This study found differences in cost efficiency that cannot be explained by input prices or differences in coding practices. More analysis is needed to reveal the causes of large efficiency disparities between Nordic hospitals.
Technical efficiency of public district hospitals and health centres in Ghana: a pilot study
Osei, Daniel; d'Almeida, Selassi; George, Melvill O; Kirigia, Joses M; Mensah, Ayayi Omar; Kainyu, Lenity H
2005-01-01
Background: The Government of Ghana has been implementing various health sector reforms (e.g. user fees in public health facilities, decentralization, sector-wide approaches to donor coordination) in a bid to improve efficiency in health care. However, to date, except for the pilot study reported in this paper, no attempt has been made to make an estimate of the efficiency of hospitals and/or health centres in Ghana. The objectives of this study, based on data collected in 2000, were: (i) to estimate the relative technical efficiency (TE) and scale efficiency (SE) of a sample of public hospitals and health centres in Ghana; and (ii) to demonstrate policy implications for health sector policy-makers. Methods: The Data Envelopment Analysis (DEA) approach was used to estimate the efficiency of 17 district hospitals and 17 health centres. This was an exploratory study. Results: Eight (47%) hospitals were technically inefficient, with an average TE score of 61% and a standard deviation (STD) of 12%. Ten (59%) hospitals were scale inefficient, manifesting an average SE of 81% (STD = 25%). Out of the 17 health centres, 3 (18%) were technically inefficient, with a mean TE score of 49% (STD = 27%). Eight health centres (47%) were scale inefficient, with an average SE score of 84% (STD = 16%). Conclusion: This pilot study demonstrated to policy-makers the versatility of DEA in measuring inefficiencies among individual facilities and inputs. There is a need for the Planning and Budgeting Unit of the Ghana Health Services to continually monitor the productivity growth, allocative efficiency and technical efficiency of all its health facilities (hospitals and health centres) in the course of the implementation of health sector reforms. PMID:16188021
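Because several entries in this listing rest on DEA, a minimal sketch of the input-oriented, constant-returns (CCR) efficiency score as a linear program is given below; the 17 facilities and their inputs/outputs are randomly generated placeholders, not the Ghana data. A variable-returns (BCC) variant would add the convexity constraint sum(lambda) = 1, with scale efficiency the ratio of the CCR score to the BCC score.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA: for each DMU o, minimize theta subject to
    sum_j lam_j * x_ij <= theta * x_io (inputs) and
    sum_j lam_j * y_rj >= y_ro (outputs), with lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(n + 1)
        c[0] = 1.0                 # decision vars: [theta, lam_1..lam_n]
        A_ub = [np.r_[-X[o, i], X[:, i]] for i in range(m)]
        b_ub = [0.0] * m
        A_ub += [np.r_[0.0, -Y[:, r]] for r in range(s)]
        b_ub += [-Y[o, r] for r in range(s)]
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[o] = res.fun        # 1.0 means technically efficient
    return scores

rng = np.random.default_rng(2)
X = rng.uniform(1, 10, (17, 2))    # inputs, e.g., beds and staff
Y = rng.uniform(1, 10, (17, 2))    # outputs, e.g., visits and admissions
print(np.round(dea_ccr_input(X, Y), 2))
```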
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. Recently, however, Huggins et al. (2010) presented a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. In terms of precision, gains are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
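A Horvitz–Thompson-type abundance estimator of the kind referenced above simply inflates each occasion's catch by its estimated capture probability. The numbers below are illustrative, not fitted values from either likelihood.

```python
import numpy as np

# Horvitz-Thompson-type abundance: each occasion's catch divided by its
# estimated capture probability (illustrative values)
counts = np.array([120, 95, 140])        # animals caught per sample occasion
p_hat = np.array([0.25, 0.20, 0.30])     # estimated capture probabilities
print(counts / p_hat)                    # abundance estimates: [480. 475. ~466.7]
```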
Hagler, James R; Thompson, Alison L; Stefanek, Melissa A; Machtley, Scott A
2018-03-01
A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the arthropod collections. The videos produced by the B-MACs were then analyzed with behavioral event recording software to quantify various aspects of the sampling process. The sampler's speed and the number of sampling sweeps or vacuum suctions taken over a fixed distance (12.2 m) of cotton were two of the more significant sampling characteristics quantified for each method. The arthropod counts obtained, combined with the analyses of the videos, enabled us to estimate arthropod sampling efficiency for each technique based on fixed distance, time, and sample unit measurements. The data revealed that vacuuming was the more precise method for collecting arthropods in the relatively small cotton research plots. However, the data also indicate that the sweepnet method would be more efficient for collecting most of the cotton-dwelling arthropod taxa, especially if the sampler could continuously sweep for at least 1 min or ≥80 m (e.g., in larger research plots). The B-MACs are inexpensive and non-cumbersome, the video images generated are outstanding, and they can be archived to provide permanent documentation of a research project. The methods described here could be useful for other types of field-based research to enhance data collection efficiency.
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.
Lawson, L G; Bruun, J; Coelli, T; Agger, J F; Lund, M
2004-01-01
Relationships of various reproductive disorders and milk production performance of Danish dairy farms were investigated. A stochastic frontier production function was estimated using data collected in 1998 from 514 Danish dairy farms. Measures of farm-level milk production efficiency relative to this production frontier were obtained, and relationships between milk production efficiency and the incidence risk of reproductive disorders were examined. There were moderate positive relationships between milk production efficiency and retained placenta, induction of estrus, uterine infections, ovarian cysts, and induction of birth. Inclusion of reproductive management variables showed that these moderate relationships disappeared, but directions of coefficients for almost all those variables remained the same. Dystocia showed a weak negative correlation with milk production efficiency. Farms that were mainly managed by young farmers had the highest average efficiency scores. The estimated milk losses due to inefficiency averaged 1142, 488, and 256 kg of energy-corrected milk per cow, respectively, for low-, medium-, and high-efficiency herds. It is concluded that the availability of younger cows, which enabled farmers to replace cows with reproductive disorders, contributed to high cow productivity in efficient farms. Thus, a high replacement rate more than compensates for the possible negative effect of reproductive disorders. The use of frontier production and efficiency/inefficiency functions to analyze herd data may enable dairy advisors to identify inefficient herds and to simulate the effect of alternative management procedures on the individual herd's efficiency.
Develop Efficient Leak Proof M1 Abrams Plenum Seal
2014-05-07
Genetic background in partitioning of metabolizable energy efficiency in dairy cows.
Mehtiö, T; Negussie, E; Mäntysaari, P; Mäntysaari, E A; Lidauer, M H
2018-05-01
The main objective of this study was to assess the genetic differences in metabolizable energy efficiency and efficiency in partitioning metabolizable energy in different pathways: maintenance, milk production, and growth in primiparous dairy cows. Repeatability models for residual energy intake (REI) and metabolizable energy intake (MEI) were compared and the genetic and permanent environmental variations in MEI were partitioned into its energy sinks using random regression models. We proposed 2 new feed efficiency traits: metabolizable energy efficiency (MEE), which is formed by modeling MEI fitting regressions on energy sinks [metabolic body weight (BW^0.75), energy-corrected milk, body weight gain, and body weight loss] directly; and partial MEE (pMEE), where the model for MEE is extended with regressions on energy sinks nested within additive genetic and permanent environmental effects. The data used were collected from Luke's experimental farms Rehtijärvi and Minkiö between 1998 and 2014. There were altogether 12,350 weekly MEI records on 495 primiparous Nordic Red dairy cows from wk 2 to 40 of lactation. Heritability estimates for REI and MEE were moderate, 0.33 and 0.26, respectively. The estimate of the residual variance was smaller for MEE than for REI, indicating that analyzing weekly MEI observations simultaneously with energy sinks is preferable. Model validation based on Akaike's information criterion showed that pMEE models fitted the data even better and also resulted in smaller residual variance estimates. However, models that included random regression on BW^0.75 converged slowly. The resulting genetic standard deviation estimate from the pMEE coefficient for milk production was 0.75 MJ of MEI/kg of energy-corrected milk. The derived partial heritabilities for energy efficiency in maintenance, milk production, and growth were 0.02, 0.06, and 0.04, respectively, indicating that some genetic variation may exist in the efficiency of using metabolizable energy for different pathways in dairy cows. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jen, M; Yan, F; Tseng, Y
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties of tagging efficiency in pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare the resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Setting deltaM calculated with the maximum number of averages (50 pairs) as the reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in a FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained with a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
Kwon, Hyok Chon; Na, Doosu; Ko, Byung Geun; Lee, Songjun
2008-01-01
Wireless sensor networks have been studied in the areas of intelligent transportation systems, disaster perception, environment monitoring, ubiquitous healthcare, home networks, and so on. For ubiquitous healthcare, previous systems collect the sensed health-related data at portable devices without regard to the correlations among the various biological signals used to determine health conditions. Gathering a large amount of information into a specific node to decide the health condition is not energy efficient. Since the biological signals are related to each other in estimating a given body condition, they should be collected selectively according to their relationships to preserve the energy efficiency of the networked nodes. One line of research on low power consumption is reducing the amount of packet transmission. In this paper, a health monitoring system is proposed that reduces the number of transmitted packets by setting the routing path according to the relations among biological signals.
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km² [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km² and fecal DNA, 6.65 ± 2.37 tigers/100 km²). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
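A toy calculation shows why pooling the two data sources sharpens the estimate: inverse-variance weighting of the two single-source estimates already lands near the combined model's result. The paper's actual gain comes from jointly modeling the raw capture–recapture data, which is more efficient still.

```python
import numpy as np

# Inverse-variance weighting of two independent density estimates
est = np.array([12.02, 6.65])          # tigers/100 km^2 (photo, fecal DNA)
sd = np.array([3.02, 2.37])
w = 1 / sd**2
combined = np.sum(w * est) / np.sum(w)
combined_sd = np.sqrt(1 / np.sum(w))
print(f"{combined:.2f} +/- {combined_sd:.2f}")   # ~8.70 +/- 1.86
```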
Volt-VAR Optimization on American Electric Power Feeders in Northeast Columbus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Kevin P.; Weaver, T. F.
2012-05-10
In 2007 American Electric Power launched the gridSMART® initiative with the goals of increasing efficiency of the electricity delivery system and improving service to the end-use customers. As part of the initiative, a coordinated Volt-VAR system was deployed on eleven distribution feeders at five substations in the Northeast Columbus Ohio Area. The goal of the coordinated Volt-VAR system was to decrease the amount of energy necessary to provide end-use customers with the same quality of service. The evaluation of the Volt-VAR system performance was conducted in two stages. The first stage was composed of simulation, analysis, and estimation, while the second stage was composed of analyzing collected field data. This panel paper will examine the analysis conducted in both stages and present the estimated improvements in system efficiency.
White, Robin R; McGill, Tyler; Garnett, Rebecca; Patterson, Robert J; Hanigan, Mark D
2017-04-01
The objective of this work was to evaluate the precision and accuracy of the milk yield predictions made by the PREP10 model in comparison to those from the National Research Council (NRC) Nutrient Requirements of Dairy Cattle. The PREP10 model is a ration-balancing system that allows protein use efficiency to vary with production level. The model also has advanced AA supply and requirement calculations that enable estimation of AA-allowable milk (Milk_AA) based on 10 essential AA. A literature data set of 374 treatment means was collected and used to quantitatively evaluate the estimates of protein-allowable milk (Milk_MP) and energy-allowable milk yields from the NRC and PREP10 models. The PREP10 Milk_AA prediction was also evaluated, as were both models' estimates of milk based on the most-limiting nutrient or the mean of the estimated milk yields. For most milk estimates compared, the PREP10 model had reduced root mean squared prediction error (RMSPE), improved concordance correlation coefficient, and reduced mean and slope bias in comparison to the NRC model. In particular, utilizing the variable protein use efficiency for milk production notably improved the estimate of Milk_MP when compared with NRC. The PREP10 Milk_MP estimate had an RMSPE of 18.2% (NRC = 25.7%), concordance correlation coefficient of 0.82 (NRC = 0.64), slope bias of -0.14 kg/kg of predicted milk (NRC = -0.34 kg/kg), and mean bias of -0.63 kg (NRC = -2.85 kg). The PREP10 estimate of Milk_AA had slightly elevated RMSPE and mean and slope bias when compared with Milk_MP. The PREP10 estimate of Milk_AA was not advantageous when compared with Milk_MP, likely because AA use efficiency for milk was constant whereas MP use was variable. Future work evaluating variable AA use efficiencies for milk production is likely to improve accuracy and precision of models of allowable milk. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Army Requirement to Acquire Individual Carbine Not Justified
2013-09-16
Efficient and Robust Signal Approximations
2009-05-01
Keywords: signal processing, image compression, independent component analysis, sparse
Estimating Arrhenius parameters using temperature programmed molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
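The core idea can be sketched compactly: for exponentially distributed waiting times the maximum likelihood rate at each temperature is the reciprocal of the mean waiting time, and a linear fit of ln k against 1/T yields the Arrhenius parameters. This is a simplified per-temperature version on synthetic data; the paper's TPMD analysis pools waiting times across a temperature program rather than sampling each temperature separately.

```python
import numpy as np

def arrhenius_from_waiting_times(temps, waiting_times):
    """MLE rate per temperature (exponential waiting times: k_hat = 1/mean),
    then least-squares fit of ln k vs 1/T for the Arrhenius parameters."""
    kB = 8.617e-5                                   # Boltzmann constant, eV/K
    k_hat = np.array([1.0 / np.mean(w) for w in waiting_times])
    slope, intercept = np.polyfit(1.0 / np.asarray(temps), np.log(k_hat), 1)
    Ea = -slope * kB                                # activation energy (eV)
    A = np.exp(intercept)                           # pre-exponential factor
    return Ea, A

rng = np.random.default_rng(3)
temps = [300, 400, 500, 600]
true_k = [1e13 * np.exp(-0.5 / (8.617e-5 * T)) for T in temps]
waits = [rng.exponential(1 / k, size=200) for k in true_k]
print(arrhenius_from_waiting_times(temps, waits))   # approx (0.5 eV, 1e13)
```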
Ratkovic, Branislava; Andrejic, Milan; Vidovic, Milorad
2012-06-01
In 2007, the Serbian Ministry of Health initiated specific activities towards establishing a workable model based on the existing administrative framework, which corresponds to the needs of healthcare waste management throughout Serbia. The objective of this research was to identify the reforms carried out and their outcomes by estimating the efficiencies of a sample of 35 healthcare facilities engaged in the process of collection and treatment of healthcare waste, using data envelopment analysis. Twenty-one (60%) of the 35 healthcare facilities analysed were found to be technically inefficient, with an average level of inefficiency of 13%. This fact indicates deficiencies in the process of collection and treatment of healthcare waste and the information obtained and presented in this paper could be used for further improvement and development of healthcare waste management in Serbia.
Journal: A Review of Some Tracer-Test Design Equations for ...
The necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects of a proposed tracer test to estimate prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-
Zee, Jarcy; Xie, Sharon X.
2015-01-01
When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient with moderate missingness compared to the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510
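For context, a minimal sketch of the discrete-time Kaplan-Meier estimator that serves as the comparison baseline (complete-case when applied only to the subset with true endpoints); the data layout is illustrative.

```python
import numpy as np

def discrete_km(times, events, grid):
    """Discrete-time Kaplan-Meier: S(t) = prod over event times t_j <= t of
    (1 - d_j / n_j), with d_j deaths and n_j at risk at t_j."""
    S, out = 1.0, []
    for t in grid:
        at_risk = np.sum(times >= t)
        died = np.sum((times == t) & (events == 1))
        if at_risk > 0:
            S *= 1.0 - died / at_risk
        out.append(S)
    return np.array(out)

times = np.array([2, 3, 3, 5, 7, 8, 8, 10])
events = np.array([1, 0, 1, 1, 0, 1, 1, 0])    # 1 = event, 0 = censored
print(discrete_km(times, events, np.unique(times[events == 1])))
# -> [0.875 0.75  0.6   0.2 ]
```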
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
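The core of a WLS state estimator is the weighted normal-equation solve below; for the nonlinear three-phase problem this step is iterated in a Gauss-Newton loop with H the measurement Jacobian re-evaluated each pass. A minimal linear sketch with illustrative argument names:

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """One linear WLS step of state estimation: minimize
    (z - H x)^T W (z - H x) with W = diag(1 / sigma^2)."""
    W = np.diag(1.0 / sigma**2)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # 3 measurements, 2 states
z = np.array([1.02, 2.95, 2.01])                     # measured values
sigma = np.array([0.01, 0.05, 0.01])                 # meter accuracies
print(wls_state_estimate(H, z, sigma))               # ~ [1.01, 2.01]
```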
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca; Rasmussen, Roy; Thériault, Julie
2016-04-01
Despite its importance, accurate measurement of precipitation remains a challenge. Measurement errors for solid precipitation, which are often ignored for automated systems, frequently range from 20% to 70% due to undercatch in windy conditions. While solid precipitation measurements have been the subject of many studies, there have been only a limited number of numerical modeling efforts to estimate the collection efficiency of solid precipitation gauges when exposed to the wind, in both shielded and unshielded configurations. The available models use CFD simulations of the airflow pattern generated by the aerodynamic response of the gauge/shield geometry to perform the Lagrangian tracking of solid precipitation particles (Thériault et al., 2012; Colli et al., 2016a and 2016b). Validation of the results against field observations yields similarities in the overall behavior, but the model output only approximately reproduces the dependence of the experimental collection efficiency on wind speed. We present recent developments of such a modelling approach including various gauge/shield configurations, the influence of the drag coefficient calculation on the model performance, and the role of the particle size distribution in explaining the scatter of the collection efficiency observed at any particular wind speed (Colli et al., 2015). Comparison with observations at the Marshall (CO) field test site is used to validate results of the various modelling schemes and to support the analysis of the microphysical characteristics of ice crystals. References: Colli, M., Rasmussen, R.M., Thériault, J.M., Lanza, L.G., Baker, B.C. and J. Kochendorfer (2015). An improved trajectory model to evaluate the collection performance of snow gauges. J. Appl. Meteor. Climatol., 54(8), pages 1826-1836. Colli, M., Lanza, L.G., Rasmussen, R.M. and J.M. Thériault (2016a). The collection efficiency of shielded and unshielded precipitation gauges. Part I: CFD airflow modelling. J. Hydrometeorol., 17(1), pages 231-243. Colli, M., Lanza, L.G., Rasmussen, R.M. and J.M. Thériault (2016b). The collection efficiency of shielded and unshielded precipitation gauges. Part II: Modelling particle trajectories. J. Hydrometeorol., 17(1), pages 245-255. Thériault, J.M., R. Rasmussen, K. Ikeda, and S. Landolt (2012). Dependence of snow gauge collection efficiency on snowflake characteristics. J. Appl. Meteor. Climatol., 51, pages 745-762.
Addleman, Shane; Chouyyok, Wilaiwan; Palo, Daniel; Dunn, Brad M.; Brann, Michelle; Billingsley, Gary; Johnson, Darren; Nell, Kara M.
2017-05-25
This work evaluates, develops and demonstrates flexible, scalable mineral extraction technology for geothermal brines based upon solid phase sorbent materials, with a specific focus upon rare earth elements (REEs). The selected organic and inorganic sorbent materials demonstrated high performance for the collection of trace REEs and precious and valuable metals, beyond that of commercially available sorbents. This report details the organic and inorganic sorbent uptake, performance, and collection efficiency results for La, Eu, Ho, Ag, Cu and Zn, as well as the characterization of these select sorbent materials. The report also contains estimated costs from an in-depth techno-economic analysis of a scaled-up separation process. The estimated financial payback period for installing this equipment varies between 3.3 and 5.7 years depending on the brine flow rate of the geothermal resource.
Advanced Performance Modeling with Combined Passive and Active Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dovrolis, Constantine; Sim, Alex
2015-04-15
To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources through active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
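One common way to localize a transmitter from scattered received-signal-strength samples, without prior knowledge of the environment, is to jointly fit a log-distance path-loss model and grid-search the transmitter position. The sketch below is an illustration of that general technique, not the paper's collaborative robot-swarm algorithm; all names and values are assumptions.

```python
import numpy as np

def locate_ap(samples_xy, rss_dbm, candidates_xy):
    """Grid-search AP localization under a log-distance path-loss model,
    rss ~ P0 - 10*n*log10(d): for each candidate AP position, P0 and the
    path-loss exponent n are fit by linear regression, and the candidate
    with the smallest residual sum of squares wins."""
    best = (np.inf, None)
    for c in candidates_xy:
        d = np.linalg.norm(samples_xy - c, axis=1).clip(min=0.1)
        X = np.column_stack([np.ones(len(d)), -10 * np.log10(d)])
        beta, *_ = np.linalg.lstsq(X, rss_dbm, rcond=None)
        sse = np.sum((X @ beta - rss_dbm) ** 2)
        if sse < best[0]:
            best = (sse, c)
    return best[1]

rng = np.random.default_rng(6)
ap = np.array([12.0, 7.0])                     # true (hidden) AP position
pts = rng.uniform(0, 20, (60, 2))              # sample positions
d = np.linalg.norm(pts - ap, axis=1).clip(min=0.1)
rss = -40 - 10 * 2.2 * np.log10(d) + rng.normal(0, 2, 60)
grid = np.array([[x, y] for x in range(21) for y in range(21)])
print(locate_ap(pts, rss, grid))               # near [12, 7]
```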
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
Evaluating the efficiency of a one-square-meter quadrat sampler for riffle-dwelling fish
Peterson, J.T.; Rabeni, C.F.
2001-01-01
We evaluated the efficacy of a 1-m² quadrat sampler for collecting riffle-dwelling fishes in an Ozark stream. We used a dual-gear approach to evaluate sampler efficiency in relation to species, fish size, and habitat variables. Quasi-likelihood regression showed sampling efficiency to differ significantly (P < 0.05). Sampling efficiency was significantly influenced by physical habitat characteristics. Mean current velocity negatively influenced sampling efficiencies for Cyprinidae (P = 0.009), Cottidae (P = 0.006), and Percidae (P < 0.001), and the amount of cobble substrate negatively influenced sampling efficiencies for Cyprinidae (P = 0.025), Ictaluridae (P < 0.001), and Percidae (P < 0.001). Water temperature negatively influenced sampling efficiency for Cyprinidae (P = 0.009) and Ictaluridae (P = 0.006). Species-richness efficiency was positively influenced (P = 0.002) by the percentage of riffle sampled. Under average habitat conditions encountered in stream riffles, the 1-m² quadrat sampler was most efficient at estimating the densities of Cyprinidae (84%) and Cottidae (80%) and least efficient for Percidae (57%) and Ictaluridae (31%).
Grant, Evan H. Campbell; Zipkin, Elise; Scott, Sillett T.; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
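A minimal sketch of the basic, closed, single-state N-mixture likelihood underlying these models may clarify the idea: site abundances are latent Poisson variables, repeated counts are binomial given abundance, and the latent N is summed out up to a truncation K. This is the Royle (2004) starting point, not the authors' multistate open-population extension; all data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(params, y, K=100):
    """Negative log-likelihood of a basic N-mixture model:
    y[i, t] ~ Binomial(N_i, p), with latent N_i ~ Poisson(lam)."""
    lam = np.exp(params[0])                    # log-link for abundance
    p = 1 / (1 + np.exp(-params[1]))           # logit-link for detection
    N = np.arange(K + 1)
    prior = poisson.pmf(N, lam)                # P(N_i = N)
    ll = 0.0
    for counts in y:                           # loop over sites
        like_N = prior.copy()
        for c in counts:                       # repeat visits
            like_N *= binom.pmf(c, N, p)
        ll += np.log(like_N.sum() + 1e-300)
    return -ll

rng = np.random.default_rng(4)
Ns = rng.poisson(5, size=50)                        # true abundances, 50 sites
y = rng.binomial(Ns[:, None], 0.4, size=(50, 3))    # 3 counts per site
fit = minimize(nmix_nll, x0=[1.0, 0.0], args=(y,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # approx (5, 0.4)
```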
Resource costing for multinational neurologic clinical trials: methods and results.
Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H
1998-11-01
We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent from the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey that has utilized this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method, which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
Thiemann, Tara; Nelms, Brittany; Reisen, William K.
2011-01-01
Vegetation patterns and the presence of large numbers of nesting herons and egrets significantly altered the number of host-seeking Culex tarsalis Coquillett (Diptera: Culicidae) collected at dry ice-baited traps. The numbers of females collected per trap night at traps along the ecotone of Eucalyptus stands with and without a heron colony were always greater than or equal to the numbers collected at traps within or under canopy. No Cx. tarsalis were collected within or under Eucalyptus canopy during the peak heron nesting season, even though these birds frequently were infected with West Nile virus and large numbers of engorged females could be collected at resting boxes. These data indicate a diversion of host-seeking females from traps to nesting birds, reducing sampling efficiency. PMID:21661310
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore their relationship to the BM technique. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
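The MOG/AIC baseline mentioned above is easy to make concrete: fit Gaussian mixtures of increasing order and keep the order that minimizes AIC. The sketch below shows that baseline on synthetic two-mode data, not the BM technique itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Mode-count (model order) estimation by AIC over mixture-of-Gaussians fits
rng = np.random.default_rng(5)
data = np.r_[rng.normal(-3, 1, 300), rng.normal(2, 0.5, 200)].reshape(-1, 1)

aic = [GaussianMixture(n_components=k, random_state=0).fit(data).aic(data)
       for k in range(1, 7)]
print("estimated number of modes:", int(np.argmin(aic)) + 1)   # expect 2
```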
Estimating scaled treatment effects with multiple outcomes.
Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita
2017-01-01
In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
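For intuition, below is a minimal sketch of the two standardizations named above (mean difference in control-group SD units; median difference in control-group IQR units) computed as naive plug-in quantities under randomization. The paper's actual estimators are nonparametric and doubly robust, which this sketch does not attempt; all data are hypothetical.

```python
import numpy as np

def scaled_effects(y_treated, y_control):
    """Plug-in scaled effects: mean difference in control-SD units and
    median difference in control-IQR units."""
    y1 = np.asarray(y_treated, float)
    y0 = np.asarray(y_control, float)
    mean_var = (y1.mean() - y0.mean()) / y0.std(ddof=1)
    q75, q25 = np.percentile(y0, [75, 25])
    med_iqr = (np.median(y1) - np.median(y0)) / (q75 - q25)
    return mean_var, med_iqr

# Two hypothetical outcomes measured on very different scales
rng = np.random.default_rng(0)
pain = scaled_effects(rng.normal(3.0, 2.0, 200), rng.normal(4.0, 2.0, 200))
cost = scaled_effects(rng.normal(900, 400, 200), rng.normal(1100, 400, 200))
print(f"pain effect: {pain[0]:.2f} SD units; cost effect: {cost[0]:.2f} SD units")
```

Once both effects are on a common scale, they can be compared or combined into a weighted average summary, which is the use case the paper motivates.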
Justification of Estimates for Fiscal Year 1984 Submitted to Congress.
1983-01-01
sponsoring different aspects related to unique manufacturing methods than those pursued by DARPA, and duplication of effort is prevented by direct...weapons systems. Rapid and economical methods of satisfying these requirements must significantly precede weapons systems developments to prevent... methods for obtaining accurate and efficient geodetic measurements. Also, a major advanced sensor/G&G data collection capability is being undertaken by DNA
Novel and Efficient Synthesis of the Promising Drug Candidate Discodermolide
2010-02-01
stereotriad building blocks for discodermolide and related polyketide antibiotics could be obtained from variations on a short, scalable scheme that did...chains required for the chemical synthesis of the nonaromatic polyketides is usually based on the iterative lengthening of an acyclic substituted chain...
NASA Astrophysics Data System (ADS)
Blaes, X.; Lambert, M.-J.; Chome, G.; Traore, P. S.; de By, R. A.; Defourny, P.
2016-08-01
Efficient yield mapping in Sudano-Sahelian Africa, characterized by a very heterogeneous landscape, is crucial to help ensure food security and decrease smallholder farmers' vulnerability. Thanks to an unprecedented in-situ dataset and HR and VHR remote sensing time series collected in the Koutiala district (in south-eastern Mali), yields and some key factors of yield estimation were estimated. A crop-specific biomass map was derived with a mean absolute error of 20% using metric WorldView and 25% using decametric SPOT-5 TAKE5 image time series. The very high intra- and inter-field heterogeneity was captured efficiently. The presence of trees in the fields led to a general overestimation of yields, while mixed pixels at the field borders introduced noise into the biomass predictions.
Bay Ridge Gardens - Mixed-Humid Affordable Multifamily Housing Deep Energy Retrofit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyons, J.; Moore, M.; Thompson, M.
2013-08-01
Under this project, Newport Partners (as part of the BA-PIRC research team) evaluated the installation, measured performance, and cost-effectiveness of efficiency upgrade measures for a tenant-in-place DER at the Bay Ridge multifamily (MF) development in Annapolis, Maryland. The design and construction phase of the Bay Ridge project was completed in August 2012. This report summarizes system commissioning, short-term test results, utility bill data analysis, and analysis of real-time data collected over a one-year period after the retrofit was complete. The Bay Ridge project comprises a 'base scope' retrofit which was estimated to achieve a 30%+ savings (relative to pre-retrofit) on 186 apartments, and a 'DER scope' which was estimated to achieve 50% savings (relative to pre-retrofit) on a 12-unit building. The base scope was applied to the entire apartment complex, except for one 12-unit building which underwent the DER scope. A wide range of efficiency measures was applied to pursue this savings target for the DER building, including improvements/replacements of mechanical equipment and distribution systems, appliances, lighting and lighting controls, the building envelope, hot water conservation measures, and resident education. The results of this research build upon the current body of knowledge of multifamily retrofits. Towards this end, the research team has collected and generated data on the selection of measures, their estimated performance, their measured performance, and risk factors and their impact on potential measures.
Precision Viticulture from Multitemporal, Multispectral Very High Resolution Satellite Data
NASA Astrophysics Data System (ADS)
Kandylakis, Z.; Karantzalos, K.
2016-06-01
In order to exploit efficiently very high resolution satellite multispectral data for precision agriculture applications, validated methodologies should be established which link the observed reflectance spectra with certain crop/plant/fruit biophysical and biochemical quality parameters. To this end, based on concurrent satellite and field campaigns during the veraison period, satellite and in-situ data were collected, along with several grape samples, at specific locations during the harvesting period. These data were collected for a period of three years in two viticultural areas in Northern Greece. After the required data pre-processing, canopy reflectance observations, through the combination of several vegetation indices were correlated with the quantitative results from the grape/must analysis of grape sampling. Results appear quite promising, indicating that certain key quality parameters (like brix levels, total phenolic content, brix to total acidity, anthocyanin levels) which describe the oenological potential, phenolic composition and chromatic characteristics can be efficiently estimated from the satellite data.
Miniature open channel scrubbers for gas collection.
Toda, Kei; Koga, Tomoko; Tanaka, Toshinori; Ohira, Shin-Ichi; Berg, Jordan M; Dasgupta, Purnendu K
2010-10-15
An open channel scrubber is proposed as a miniature fieldable gas collector. The device is 100 mm in length, 26 mm in width and 22 mm in thickness. The channel bottom is rendered hydrophilic and liquid flows as a thin layer on the bottom. The air sample flows atop the appropriately chosen flowing liquid film and analyte molecules are absorbed into the liquid. There is no membrane at the air-liquid interface: the two phases contact each other directly. Analyte species collected over a 10 min interval are determined by fluorometric flow analysis or ion chromatography. A calculation algorithm was developed to estimate the collection efficiency a priori; experimental and simulated results agreed well. The characteristics of the open channel scrubber are discussed in this paper from both theoretical and experimental points of view. In addition to superior collection efficiencies at relatively high sample air flow rates, this geometry is particularly attractive in that there is no change in collection performance due to membrane fouling. We demonstrate field use for analysis of ambient SO2 near an active volcano. This is a basic investigation of a membraneless miniature scrubber and is expected to lead to the development of a micro-gas analysis system integrated with a detector for continuous measurements. Copyright © 2010 Elsevier B.V. All rights reserved.
Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna
2016-01-01
Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models, particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias-corrected technical efficiency scores, taking quality into consideration, varied between 22% and 65% depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type have a positive and significant effect on technical efficiency. However, the number of additional HIV and STI services provided in the same clinical room, the proportion of clinical staff to overall staff, the proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of performance suggests that efficiency may be achieved without sacrificing quality. PMID:26803655
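Several studies in this collection share the same first stage, so a sketch may help: the following computes input-oriented, constant-returns-to-scale (CCR) DEA efficiency scores by linear programming. It omits the quality adjustment and the bias-correcting double bootstrap of the second stage, and all facility data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y):
    """Input-oriented CRS (CCR) DEA efficiency scores.
    X: (n, m) inputs, Y: (n, s) outputs for n DMUs (e.g. facilities).
    For each DMU o: minimize theta subject to
      sum_j lam_j * x_j <= theta * x_o  and  sum_j lam_j * y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]               # minimize theta
        A_in = np.c_[-X[o].reshape(m, 1), X.T]    # inputs:  lam'X_i - theta*x_io <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]     # outputs: -lam'Y_r <= -y_ro
        A = np.r_[A_in, A_out]
        b = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical facilities: 2 inputs (staff, beds), 1 output (visits)
X = np.array([[5., 14.], [8., 15.], [7., 12.]])
Y = np.array([[9.], [5.], [4.]])
print(dea_crs_input(X, Y).round(3))   # 1.0 marks the efficient frontier
```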
Modeling technical efficiency of inshore fishery using data envelopment analysis
NASA Astrophysics Data System (ADS)
Rahman, Rahayu; Zahid, Zalina; Khairi, Siti Shaliza Mohd; Hussin, Siti Aida Sheikh
2016-10-01
The fishery industry contributes significantly to the economy of Malaysia. This study utilized a Data Envelopment Analysis application in estimating the technical efficiency of fishery in Terengganu, a state on the eastern coast of Peninsular Malaysia, based on multiple outputs, i.e. total fish landing and income of fishermen, with six inputs, i.e. engine power, vessel size, number of trips, number of workers, cost and operation distance. The data were collected by a survey conducted between November and December 2014. The decision making units (DMUs) involved 100 fishermen from 10 fishery areas. The results showed that the technical efficiency in Season I (dry season) and Season II (rainy season) was 90.2% and 66.7%, respectively. About 27% of the fishermen were rated as efficient during Season I, while only 13% of the fishermen achieved full efficiency (100%) during Season II. The results also showed a significant difference in efficiency performance between the fishery areas.
Lopez, Gerardo; Pallas, Benoît; Martinez, Sébastien; Lauri, Pierre-Éric; Regnard, Jean-Luc; Durel, Charles-Éric; Costes, Evelyne
2015-01-01
Water use efficiency (WUE) is a quantitative measurement whose improvement is a major issue in the context of global warming and restrictions in water availability for agriculture. In this study, we aimed at studying the variation and genetic control of WUE and the respective roles of its components (plant biomass and transpiration) in a perennial fruit crop. We explored an INRA apple core collection grown in a phenotyping platform to screen one-year-old scions for their accumulated biomass, transpiration and WUE under optimal growing conditions. Plant biomass was decomposed into morphological components related to either growth or organ expansion. For each trait, nine mixed models were evaluated to account for the genetic effect and spatial heterogeneity inside the platform. The Best Linear Unbiased Predictors of genetic values were estimated after model selection. Mean broad-sense heritabilities were calculated from variance estimates. Heritability values indicated that biomass (0.76) and WUE (0.73) were under genetic control. This genetic control was lower for plant transpiration, with a heritability of 0.54. Across the collection, biomass accounted for 70% of the WUE variability. A Hierarchical Ascendant Classification of the core collection indicated the existence of six groups of genotypes with contrasting morphology and WUE. Differences between morphotypes were interpreted as resulting from differences in the main processes responsible for plant growth: cell division leading to the generation of new organs and cell elongation determining organ dimensions. Although further studies will be necessary on mature trees with more complex architecture and multiple sinks such as fruits, this study is a first step towards improving apple plant material with respect to water use.
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
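A minimal sketch of the basic N-mixture likelihood (Royle 2004) that underlies the framework above, assuming SciPy: counts at site i on replicate image j follow Binomial(N_i, p) with N_i ~ Poisson(lambda), and the latent N_i are marginalized by summation. The simulated data loosely mimic the reported detection probability of 0.76 but are otherwise hypothetical, and the paper's point-process extension is not shown.

```python
import numpy as np
from scipy import stats, optimize

def nmix_nll(params, y, K=200):
    """Negative log-likelihood of the basic N-mixture model:
    y[i, j] ~ Binomial(N_i, p), N_i ~ Poisson(lam), N_i summed out up to K."""
    lam = np.exp(params[0])                  # log link keeps lam > 0
    p = 1 / (1 + np.exp(-params[1]))         # logit link keeps p in (0, 1)
    Ns = np.arange(K + 1)
    prior = stats.poisson.pmf(Ns, lam)
    ll = 0.0
    for counts in y:                         # one site at a time
        lik_given_N = np.prod(stats.binom.pmf(counts[:, None], Ns, p), axis=0)
        ll += np.log(np.sum(lik_given_N * prior))
    return -ll

# Hypothetical data: 50 sites x 3 replicate images of the same locations
rng = np.random.default_rng(1)
N = rng.poisson(5, size=50)
y = rng.binomial(N[:, None], 0.76, size=(50, 3))
fit = optimize.minimize(nmix_nll, x0=[1.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda ~ {lam_hat:.2f}, detection p ~ {p_hat:.2f}")
```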
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely distributed observation points, compared to estimations of slips only. Such an estimate can be formulated as a non-linear inverse problem for the material property of viscosity and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
Advances in digital signal processing have been progressing towards efficient use of memory and processing. Both factors can be exploited through feasible image storage techniques that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
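A short example of the kind of SIFT keypoint extraction and descriptor matching described above, assuming OpenCV 4.4 or later (where SIFT ships in the main module after patent expiry); the image file names are placeholders.

```python
import cv2

# Detect SIFT keypoints and compute 128-dimensional descriptors
img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")

# Recognize the image by matching against a reference's saved descriptors,
# keeping only matches that pass Lowe's ratio test
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
kp_ref, des_ref = sift.detectAndCompute(ref, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(descriptors, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches pass the ratio test")
```

Storing only the descriptors rather than the full image is what yields the memory savings the abstract emphasizes.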
The Loss of Efficiency Caused by Agents’ Uncoordinated Routing in Transport Networks
Wang, Junjie; Wang, Pu
2014-01-01
Large-scale daily commuting data were combined with detailed geographical information system (GIS) data to analyze the loss of transport efficiency caused by drivers’ uncoordinated routing in urban road networks. We used the Price of Anarchy (POA) to quantify the loss of transport efficiency and found that both the volume and the distribution of human mobility demand determine the POA. In order to reduce the POA, a small number of highways require considerable decreases in traffic, and their neighboring arterial roads need to attract more traffic. The magnitude of the adjustment in traffic flow can be estimated using only the fundamental measure of traffic flow, which is widely available and easy to collect. Surprisingly, the most congested roads or the roads with the largest traffic flow were not those requiring the greatest reduction in traffic. This study can offer guidance for the optimal control of urban traffic and facilitate improvements in the efficiency of transport networks. PMID:25349995
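The POA ratio used in this study can be illustrated on Pigou's classic two-road example, where selfish routing yields a total cost 4/3 times the coordinated optimum; a self-contained sketch (not the paper's network or data):

```python
from scipy.optimize import minimize_scalar

# Pigou's example: a unit of traffic travels from A to B over two roads.
# Road 1 cost per driver: c1(x) = x (congestible); Road 2 cost: c2 = 1 (fixed).
total_cost = lambda x: x * x + (1 - x) * 1.0   # x = share of traffic on road 1

# User equilibrium: road 1 costs x <= 1, so every selfish driver takes it
ue_cost = total_cost(1.0)

# System optimum: a coordinator splits the traffic to minimize total cost
so = minimize_scalar(total_cost, bounds=(0, 1), method="bounded")
poa = ue_cost / so.fun
print(f"system optimum at x = {so.x:.2f}, POA = {poa:.3f}")   # x = 0.5, POA = 4/3
```

The study's empirical finding mirrors this toy case: shifting some demand off the congestible road (the highway) onto the fixed-cost alternative (arterials) closes the gap between equilibrium and optimum.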
Mleczek, Mirosław; Siwulski, Marek; Mikołajczak, Patrycja; Gąsecka, Monika; Rissmann, Iwona; Goliński, Piotr; Sobieralski, Krzysztof
2015-01-01
The aim of the study was to estimate copper (Cu) accumulation efficiency in whole fruiting bodies of 18 edible and non-edible wild growing mushrooms collected from 27 places in the Wielkopolska Voivodeship. Mushrooms were collected each time from the same places to estimate the diversity in Cu accumulation between the tested mushroom species within 3 consecutive years of study (2011-2013). The results revealed varying accumulation of Cu in the whole fruiting bodies of the tested mushrooms. The highest mean accumulation of Cu was observed in Macrolepiota procera (119.4 ± 20.0 mg kg⁻¹ dm), while the lowest was in Suillus luteus and Russula fellea fruiting bodies (16.1 ± 3.0 and 18.8 ± 4.6 mg kg⁻¹ dm, respectively). Significant differences in Cu accumulation between mushroom species collected in 2011 and in the two following years (2012 and 2013) were observed. The results indicated that sporadic consumption of these mushrooms was not related to excessive intake of Cu by the human body (no toxic influence on health).
A Functional Varying-Coefficient Single-Index Model for Functional Response Data
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2016-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540
Economics of immunization information systems in the United States: assessing costs and efficiency.
Bartlett, Diana L; Molinari, Noelle-Angelique M; Ortega-Sanchez, Ismael R; Urquhart, Gary A
2006-08-22
One of the United States' national health objectives for 2010 is that 95% of children aged <6 years participate in fully operational population-based immunization information systems (IIS). Despite important progress, child participation in most IIS has increased slowly, in part due to limited economic knowledge about IIS operations. Should IIS need further improvement, characterizing costs and identifying factors that affect IIS efficiency become crucial. Data were collected from a national sampling frame of the 56 states/cities that received federal immunization grants under U.S. Public Health Service Act 317b and completed the federal 1999 Immunization Registry Annual Report. The sampling frame was stratified by IIS functional status, children's enrollment in the IIS, and whether the IIS had been developed as an independent system or was integrated into a larger system. These sites self-reported IIS developmental and operational program costs for calendar years 1998-2002 using a standardized data collection tool and underwent on-site interviews to verify reported data with information from the state/city financial management system and other financial records. A parametric cost-per-patient-record (CPR) model was estimated. The model assessed the impact of labor and non-labor resources used in development and operations tasks, as well as the impact of information technology, local providers' participation, and compliance with federal IIS performance standards (e.g., ensuring the confidentiality and security of information, ensuring timely vaccination data at the time of patient encounter, and producing official immunization records). Given the number of records minimizing CPR, the additional amount of resources needed to meet national health goals for the year 2010 was also calculated. Estimated CPR was as high as $10.30 and as low as $0.09 in operating IIS. About 20% of IIS had between 2.9 and 3.2 million records and showed CPR estimates of $0.09. Overall, CPR was highly sensitive to local providers' participation. To achieve the 2010 goals, additional aggregated costs were estimated to be $75.6 million nationwide. Efficiently increasing the number of records in IIS would require additional resources and careful consideration of various strategies to minimize CPR, such as boosting providers' participation.
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
Technical and scale efficiency of public community hospitals in Eritrea: an exploratory study
2013-01-01
Background: Eritrean gross national income of Int$610 per capita is lower than the average for Africa (Int$1620) and considerably lower than the global average (Int$6977). It is therefore imperative that the country’s resources, including those specifically allocated to the health sector, are put to optimal use. The objectives of this study were (a) to estimate the relative technical and scale efficiency of public secondary level community hospitals in Eritrea, based on data generated in 2007, (b) to estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient, and (c) to estimate, using Tobit regression analysis, the impact of institutional and contextual/environmental variables on hospital inefficiencies. Methods: A two-stage Data Envelopment Analysis (DEA) method is used to estimate the efficiency of hospitals and to explain the inefficiencies. In the first stage, the efficient frontier and the hospital-level efficiency scores are estimated using DEA. In the second stage, the estimated DEA efficiency scores are regressed on some institutional and contextual/environmental variables using a Tobit model. In 2007 there were a total of 20 secondary public community hospitals in Eritrea, nineteen of which generated data that could be included in the study. The input and output data were obtained from the Ministry of Health (MOH) annual health service activity report of 2007. Since our study employs data that are five years old, the results are not meant to uncritically inform current decision-making processes, but rather to illustrate the potential value of such efficiency analyses. Results: The key findings were as follows: (i) the average constant returns to scale technical efficiency score was 90.3%; (ii) the average variable returns to scale technical efficiency score was 96.9%; and (iii) the average scale efficiency score was 93.3%. In 2007, the inefficient hospitals could have become more efficient by either increasing their outputs by 20,611 outpatient visits and 1,806 hospital discharges, or by transferring the excess 2.478 doctors (2.85%), 9.914 nurses and midwives (0.98%), 9.774 laboratory technicians (9.68%), and 195 beds (10.42%) to primary care facilities such as health centres, health stations, and maternal and child health clinics. In the Tobit regression analysis, the coefficient for OPDIPD (outpatient visits as a proportion of inpatient days) had a negative sign and was statistically significant; the coefficient for ALOS (average length of stay) had a positive sign and was statistically significant at the 5% level of significance. Conclusions: The findings from the first-stage analysis imply that 68% of hospitals were variable returns to scale technically efficient, and only 42% of hospitals achieved scale efficiency. On average, inefficient hospitals could have increased their outpatient visits by 5.05% and hospital discharges by 3.42% using the same resources. Our second-stage analysis shows that the ratio of outpatient visits to inpatient days and the average length of inpatient stay are significantly correlated with hospital inefficiencies. This study shows that routinely collected hospital data in Eritrea can be used to identify relatively inefficient hospitals as well as the sources of their inefficiencies. PMID:23497525
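A minimal sketch of the second-stage Tobit regression described above, fitting a right-censored normal model by maximum likelihood with SciPy (DEA scores are censored at full efficiency, 1); the data, covariates, and coefficients are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

def tobit_nll(params, X, y, upper=1.0):
    """Negative log-likelihood of a right-censored Tobit model,
    as used in second-stage regressions of DEA efficiency scores."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = y >= upper                      # observations piled up at 1
    ll = np.where(
        cens,
        stats.norm.logsf((upper - xb) / sigma),            # P(y* >= 1)
        stats.norm.logpdf((y - xb) / sigma) - np.log(sigma)  # density otherwise
    )
    return -ll.sum()

# Hypothetical: regress efficiency scores on two environmental covariates
rng = np.random.default_rng(2)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]
y = np.clip(X @ [0.8, 0.1, -0.05] + rng.normal(0, 0.1, 200), None, 1.0)
fit = optimize.minimize(tobit_nll, x0=np.zeros(4), args=(X, y), method="BFGS")
print("beta:", fit.x[:-1].round(3))
```

Note that other entries in this collection follow Simar and Wilson in preferring truncated regression with a bootstrap over Tobit for this second stage; the sketch mirrors the Tobit choice made in this particular study.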
Quantifying the isotopic composition of NOx emission sources: An analysis of collection methods
NASA Astrophysics Data System (ADS)
Fibiger, D.; Hastings, M.
2012-04-01
We analyze various collection methods for nitrogen oxides, NOx (NO2 and NO), used to evaluate the nitrogen isotopic composition (δ15N). Atmospheric NOx is a major contributor to acid rain deposition upon its conversion to nitric acid; it also plays a significant role in determining air quality through the production of tropospheric ozone. NOx is released by both anthropogenic (fossil fuel combustion, biomass burning, aircraft emissions) and natural (lightning, biogenic production in soils) sources. Global concentrations of NOx are rising because of increased anthropogenic emissions, while natural source emissions also contribute significantly to the global NOx burden. The contributions of both natural and anthropogenic sources and their considerable variability in space and time make it difficult to attribute local NOx concentrations (and, thus, nitric acid) to a particular source. Several recent studies suggest that variability in the isotopic composition of nitric acid deposition is related to variability in the isotopic signatures of NOx emission sources. Nevertheless, the isotopic composition of most NOx sources has not been thoroughly constrained. Ultimately, the direct capture and quantification of the nitrogen isotopic signatures of NOx sources will allow for the tracing of NOx emissions sources and their impact on environmental quality. Moreover, this will provide a new means by which to verify emissions estimates and atmospheric models. We present laboratory results of methods used for capturing NOx from air into solution. A variety of methods have been used in field studies, but no independent laboratory verification of the efficiencies of these methods has been performed. When analyzing isotopic composition, it is important that NOx be collected quantitatively or the possibility of fractionation must be constrained. We have found that collection efficiency can vary widely under different conditions in the laboratory and fractionation does not vary predictably with collection efficiency. For example, prior measurements frequently utilized triethanolamine solution for collecting NOx, but the collection efficiency was found to drop quickly as the solution aged. The most promising method tested is a NaOH/KMnO4 solution (Margeson and Knoll, Anal. Chem., 1985) which can collect NOx quantitatively from the air. Laboratory tests of previously used methods, along with progress toward creating a suitable and verifiable field deployable collection method will be presented.
Wang, Xuan; Luo, Hongye; Qin, Xianjin; Feng, Jun; Gao, Hongda; Feng, Qiming
2016-08-23
As the core institutions of maternal and child healthcare in rural areas of China, the service efficiency of county-level Maternal and Child Health Hospitals (MCHHs) affects the fairness and availability of healthcare services. This study aims to identify the determinants of hospital efficiency and explore how to improve the performance of MCHHs in terms of productivity and efficiency. Data were collected from a sample of 32 county-level MCHHs of Guangxi in 2014. Firstly, we specified and measured the indicators of the inputs and outputs, which represent the resources expended by a hospital and its activity profile, respectively. Then we estimated the efficiency scores using Data Envelopment Analysis (DEA) for each hospital. Efficiency scores were decomposed into technical, scale and congestion components, and the potential output increases and/or input reductions were also estimated in this model, which would make relatively inefficient hospitals more efficient. In the second stage, the estimated efficiency scores are regressed against hospital external and internal environment factors using a Tobit model. We used DEAP (V2.1) and R for data analysis. The average scores of technical efficiency, net technical efficiency (managerial efficiency) and scale efficiency of the hospitals were 0.875, 0.922 and 0.945, respectively. Half of the hospitals were efficient, and 9.4% and 40.6% were weakly efficient and inefficient, respectively. Among the low-productivity hospitals, 61.1% came from poor counties (Poor counties in this article are in the list of key poverty-stricken counties at the national level, published by The State Council Leading Group Office of Poverty Alleviation and Development, 2012). The input totals indicated that redundant medical resources in poverty areas were significantly higher than those in non-poverty areas. The Tobit regression model showed that the technical efficiency was proportional to the total annual incomes, the number of discharged patients, and the number of outpatient and emergency visits, while it was inversely proportional to total expenditure and the actual number of open beds. Technical efficiency was not associated with the number of health care workers. The overall operational efficiency of the county-level MCHHs in Guangxi was low and needs to be improved. Regional economic differences affect the performance of hospitals. Health administrations should adjust and optimize resource investments for the different areas. For the hospitals in poverty areas, policy-makers should consider not only hardware facility investment, but also the introduction of advanced techniques and high-level medical personnel to improve their technical efficiency.
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
Zipkin, Elise F; Sillett, T Scott; Grant, Evan H Campbell; Chandler, Richard B; Royle, J Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales. PMID:24634726
Commercial Building Partners Catalyze Energy Efficient Buildings Across the Nation
2012-08-01
PNNL) with companies starting in 2008 and discusses some partner insights from projects joining the program later. In 2008, PNNL and the National...provides an overview of the CBP effort and the variety of buildings and partners currently participating with PNNL. Many of the projects are now...
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
...algorithms we proposed improve the time efficiency significantly for large scale datasets. In the last chapter, we also propose an incremental reseeding...plume detection in hyper-spectral video data.
Longitudinal analyses of correlated response efficiencies of fillet traits in Nile tilapia.
Turra, E M; Fernandes, A F A; de Alvarenga, E R; Teixeira, E A; Alves, G F O; Manduca, L G; Murphy, T W; Silva, M A
2018-03-01
Recent studies with Nile tilapia have shown divergent results regarding the possibility of selecting on morphometric measurements to promote indirect genetic gains in fillet yield (FY). The use of indirect selection for fillet traits is important as these traits are only measurable after harvesting. Random regression models are a powerful tool in association studies to identify the best time point to measure and select animals. Random regression models can also be applied in a multiple trait approach to analyze indirect response to selection, which would avoid the need to sacrifice candidate fish. Therefore, the aim of this study was to investigate the genetic relationships between several body measurements, weight and fillet traits throughout the growth period and to evaluate the possibility of indirect selection for fillet traits in Nile tilapia. Data were collected from 2042 fish and divided into two subsets. The first subset was used to estimate genetic parameters, including the permanent environmental effect for BW and body measurements (8758 records for each body measurement, as each fish was individually weighed and measured a maximum of six times). The second subset (2042 records for each trait) was used to estimate genetic correlations and heritabilities, which enabled the calculation of correlated response efficiencies between body measurements and the fillet traits. Heritability estimates across ages ranged from 0.05 to 0.5 for height, 0.02 to 0.48 for corrected length (CL), 0.05 to 0.68 for width, 0.08 to 0.57 for fillet weight (FW) and 0.12 to 0.42 for FY. All genetic correlation estimates between body measurements and FW were positive and strong (0.64 to 0.98). The estimates of genetic correlation between body measurements and FY were positive (except for CL at some ages), but weak to moderate (-0.08 to 0.68). These estimates resulted in strong and favorable correlated response efficiencies for FW, and positive but moderate efficiencies for FY. These results indicate the possibility of achieving indirect genetic gains for FW by selecting for morphometric traits, but low efficiency for FY when compared with direct selection.
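The correlated-response-efficiency calculation at the heart of this design follows Falconer's classical result: with equal selection intensities, the efficiency of indirect relative to direct selection is the genetic correlation times the ratio of the square roots of the heritabilities. A small sketch with hypothetical values chosen within the ranges reported above:

```python
import numpy as np

def indirect_selection_efficiency(h2_x, h2_y, r_g):
    """Falconer's relative efficiency of indirect selection:
    CR_y / R_y = r_g * h_x / h_y, assuming equal selection intensities,
    where h = sqrt(heritability)."""
    return r_g * np.sqrt(h2_x) / np.sqrt(h2_y)

# Hypothetical: select on a body measurement (h2 = 0.5) to improve
# fillet weight (h2 = 0.4) with genetic correlation r_g = 0.9
eff = indirect_selection_efficiency(0.5, 0.4, 0.9)
print(f"relative efficiency: {eff:.2f}")  # > 1 favours indirect selection
```

With the weak-to-moderate genetic correlations reported for FY, this ratio stays well below 1, which is exactly the paper's conclusion that indirect selection is inefficient for fillet yield.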
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, A.; Repac, B.; Gonder, J.
This poster presents initial estimates of the net energy impacts of automated vehicles (AVs). Automated vehicle technologies are increasingly recognized as having potential to decrease carbon dioxide emissions and petroleum consumption through mechanisms such as improved efficiency, better routing, lower traffic congestion, and by enabling advanced technologies. However, some effects of AVs could conceivably increase fuel consumption, through possible effects such as longer distances traveled, increased use of transportation by underserved groups, and increased travel speeds. The net effect on petroleum use and climate change is still uncertain. To make an aggregate system estimate, we first collect best estimates for the energy impacts of approximately ten effects of AVs. We then use a modified Kaya Identity approach to estimate the range of aggregate effects and avoid double counting. We find that, depending on numerous factors, there is a wide range of potential energy impacts. Adoption of automated personal or shared vehicles can lead to significant fuel savings but has potential for backfire.
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
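A toy sketch of the interpolate-then-maximize idea described above: score candidate permittivities with a fitness function, fit a polynomial surrogate, and take its maximizer as the estimate. The quadratic stand-in fitness below is purely illustrative; the real method scores reconstructed radar images and samples adaptively via stochastic collocation rather than on a fixed grid.

```python
import numpy as np

def fitness(eps):
    """Stand-in for reconstructing an image at permittivity eps and
    scoring its quality; the real fitness comes from the imaging system."""
    return -(eps - 9.3) ** 2 + 4.0

eps_grid = np.linspace(6, 12, 7)            # coarse candidate permittivities
scores = np.array([fitness(e) for e in eps_grid])

# Polynomial surrogate of the fitness, maximized on a fine grid
coeffs = np.polynomial.polynomial.polyfit(eps_grid, scores, deg=4)
fine = np.linspace(6, 12, 601)
vals = np.polynomial.polynomial.polyval(fine, coeffs)
print(f"estimated effective permittivity ~ {fine[np.argmax(vals)]:.2f}")
```

The surrogate keeps the number of expensive image reconstructions small, which is the efficiency argument the abstract makes for its adaptive sampling scheme.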
Food Safety Informatics: A Public Health Imperative
Tucker, Cynthia A.; Larkin, Stephanie N.; Akers, Timothy A.
2011-01-01
To date, little has been written about the implementation of food safety informatics as a technological tool to protect consumers, in real time, against foodborne illnesses. Food safety outbreaks have become a major public health problem, causing an estimated 48 million illnesses, 128,000 hospitalizations, and 3,000 deaths in the U.S. each year. Yet, government inspectors/regulators that monitor foodservice operations struggle with how to collect, organize, and analyze data and how to implement, monitor, and enforce safe food systems. Currently, standardized technologies have not been implemented to efficiently establish “near-in-time” or “just-in-time” electronic awareness to enhance early detection of public health threats regarding food safety. To address the potential impact of collection, organization and analysis of data in a foodservice operation, a wireless food safety informatics (FSI) tool was pilot tested at a university student foodservice center. The technological platform in this test collected data every six minutes over a 24-hour period, across two primary domains: time and temperatures within freezers, walk-in refrigerators and dry storage areas. The results of this pilot study briefly illustrated how technology can assist in food safety surveillance and monitoring by efficiently detecting food safety abnormalities related to time and temperatures, so that an efficient and proper response can be mounted in “real time” to prevent potential foodborne illnesses. PMID:23569605
Brown, G S; Betty, R G; Brockmann, J E; Lucero, D A; Souza, C A; Walsh, K S; Boucher, R M; Tezak, M S; Wilson, M C; Rudolph, T; Lindquist, H D A; Martinez, K F
2007-10-01
To evaluate the US Centers for Disease Control and Prevention recommended swab surface sample collection method for recovery efficiency and limit of detection for powdered Bacillus spores from nonporous surfaces. Stainless steel and painted wallboard surface coupons were seeded with dry aerosolized Bacillus atrophaeus spores and surface concentrations determined. The observed mean rayon swab recovery efficiency from stainless steel was 0.41 with a standard deviation (SD) of +/-0.17, and for painted wallboard was 0.41 with an SD of +/-0.23. Evaluation of a sonication extraction method for the rayon swabs produced a mean extraction efficiency of 0.76 with an SD of +/-0.12. Swab recovery quantitative limits of detection were estimated at 25 colony forming units (CFU) per sample area for both stainless steel and painted wallboard. The swab sample collection method may be appropriate for small area sampling (10-25 cm²) with a high agent concentration, but has limited value for large surface areas with a low agent concentration. The results of this study provide information necessary for the interpretation of swab environmental sample collection data, that is, positive swab samples are indicative of high surface concentrations and may imply a potential for exposure, whereas negative swab samples do not assure that organisms are absent from the surfaces sampled and may not assure the absence of the potential for exposure. It is critical from a public health perspective that the information obtained is accurate and reproducible. The consequence of an inappropriate public health response founded on information gathered using an ineffective or unreliable sample collection method has the potential for undesired social and economic impact.
Estimation of end of life mobile phones generation: the case study of the Czech Republic.
Polák, Miloš; Drápalová, Lenka
2012-08-01
The volume of waste electrical and electronic equipment (WEEE) has been rapidly growing in recent years. In the European Union (EU), legislation promoting the collection and recycling of WEEE has been in force since the year 2003. Yet, both current and recently suggested collection targets for WEEE are completely ineffective when it comes to collection and recycling of small WEEE (s-WEEE), with mobile phones as a typical example. Mobile phones are the most sold EEE and at the same time one of the appliances with the lowest collection rates. To improve this situation, it is necessary to assess the amount of generated end of life (EoL) mobile phones as precisely as possible. This paper presents a method of assessment of EoL mobile phones generation based on a delay model. Within the scope of this paper, the method has been applied to data from the Czech Republic. However, this method can be applied also to other EoL appliances in or outside the Czech Republic. Our results show that the average total lifespan of Czech mobile phones is surprisingly long, at 7.99 years. We attribute the long lifespan particularly to the storage of EoL mobile phones in households, estimated at 4.35 years. In the years 1990-2000, only 45 thousand EoL mobile phones were generated in the Czech Republic, while in the years 2000-2010 the number grew to 6.5 million pieces, and it is estimated that in the years 2010-2020 about 26.3 million pieces will be generated. Current European legislation sets targets on collection and recycling of WEEE in general, but no specific collection target for EoL mobile phones exists. In the year 2010 only about 3-6% of Czech EoL mobile phones were collected for recovery and recycling. If we make a similar estimation using an estimated average EU value, then within the next 10 years about 1.3 billion EoL mobile phones would be available for recycling in the EU. This amount contains about 31 tonnes of gold and 325 tonnes of silver. Since Europe is dependent on import of many raw materials, efficient recycling of EoL products could help reduce this dependence. To set up a working collection system, it will be necessary to set new and realistic collection targets. Copyright © 2012 Elsevier Ltd. All rights reserved.
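The delay model described above can be written as a convolution of past sales with the total-lifespan distribution: EoL(t) = Σ_k sales(t − k) · P(lifespan = k). A minimal sketch with hypothetical sales figures and a discretized lifespan distribution centered near the reported 7.99 years:

```python
import numpy as np

def eol_generated(sales, lifespan_pmf):
    """Delay model: units reaching end of life in year t are past sales
    convolved with the total-lifespan distribution."""
    return np.convolve(sales, lifespan_pmf)[: len(sales)]

# Hypothetical annual sales (millions of units)
sales = np.array([0.1, 0.3, 0.8, 1.5, 2.0, 2.5, 3.0, 3.2, 3.3, 3.4])

# Discretized lifespan distribution with mean near 8 years (illustrative)
k = np.arange(16)
pmf = np.exp(-0.5 * ((k - 8) / 2.5) ** 2)
pmf /= pmf.sum()

print(eol_generated(sales, pmf).round(2))  # EoL phones generated per year
```

The long household-storage component simply shifts the lifespan distribution rightward, which is why EoL generation lags sales by most of a decade in the figures reported above.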
Improving collection efficiency through remote monitoring of charity assets.
McLeod, Fraser; Erdogan, Gunes; Cherrett, Tom; Bektas, Tolga; Davies, Nigel; Shingleton, Duncan; Speed, Chris; Dickinson, Janet; Norgate, Sarah
2014-02-01
Collection costs associated with servicing a major UK charity's donation banks and collecting unsold goods from their retail shops can account for up to 20% of the overall income gained. Bank and shop collections are commingled and are typically made on fixed days of the week, irrespective of the amounts of material waiting to be collected. Using collection records from a major UK charity, this paper considers what vehicle routing and scheduling benefits could accrue if bank and shop servicing requirements were monitored, the former using remote sensing technology to allow more proactive collection scheduling. A vehicle routing and scheduling algorithm employing tabu search methods was developed; it suggested time and distance savings of up to 30% over the current fixed schedules when a minimum bank and shop fill level of between 50% and 60% was used as a collection trigger. For the case study investigated, this led to a potential revenue gain of 5% for the charity and estimated CO2 savings of around 0.5 tonnes per week across the fleet of six heterogeneous vehicles. Copyright © 2013 Elsevier Ltd. All rights reserved.
Determinant Factors of Long-Term Performance Development in Young Swimmers.
Morais, Jorge E; Silva, António J; Marinho, Daniel A; Lopes, Vítor P; Barbosa, Tiago M
2017-02-01
To develop a performance predictor model based on swimmers' biomechanical profile, relate the partial contribution of the main predictors to the training program, and analyze the time effect, sex effect, and time × sex interaction. 91 swimmers (44 boys, 12.04 ± 0.81 y; 47 girls, 11.22 ± 0.98 y) were evaluated over a 3-y period. The decimal age and anthropometric, kinematic, and efficiency features were collected 10 different times over 3 seasons (ie, longitudinal research). Hierarchical linear modeling was the procedure used to estimate the performance predictors. Performance improved between early season 1 and late season 3 for both sexes (boys 26.9% [20.88;32.96], girls 16.1% [10.34;22.54]). Decimal age (estimate [EST] -2.05, P < .001), arm span (EST -0.59, P < .001), stroke length (EST 3.82; P = .002), and propelling efficiency (EST -0.17, P = .001) were entered in the final model. Over 3 consecutive seasons young swimmers' performance improved. Performance is a multifactorial phenomenon in which anthropometrics, kinematics, and efficiency were the main determinants. The change of these factors over time was coupled with the training plans of this talent identification and development program.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies to be combined often does not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
Schaeffer, Jeff; Rogers, Mark W.; Fielder, David G.; Godby, Neal; Bowen, Anjanette K.; O'Connor, Lisa; Parrish, Josh; Greenwood, Susan; Chong, Stephen; Wright, Greg
2014-01-01
Long-term surveys are useful in understanding trends in connecting channel fish communities; a gill net assessment in the Saint Marys River performed periodically since 1975 is the most comprehensive connecting channels sampling program within the Laurentian Great Lakes. We assessed efficiency of that survey, with intent to inform development of assessments at other connecting channels. We evaluated trends in community composition, effort versus estimates of species richness, ability to detect abundance changes for four species, and effects of subsampling yellow perch catches on size and age-structure metrics. Efficiency analysis revealed low power to detect changes in species abundance, whereas reduced effort could be considered to index species richness. Subsampling simulations indicated that subsampling would have allowed reliable estimates of yellow perch (Perca flavescens) population structure, while greatly reducing the number of fish that were assigned ages. Analyses of statistical power and efficiency of current sampling protocols are useful for managers collecting and using these types of data as well as for the development of new monitoring programs. Our approach provides insight into whether survey goals and objectives were being attained and can help evaluate ability of surveys to answer novel questions that arise as management strategies are refined.
Estimation of end of life mobile phones generation: The case study of the Czech Republic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polak, Milos, E-mail: mpolak@remasystem.cz; Drapalova, Lenka
Highlights: ▶ In this paper, we define the lifespan of mobile phones and estimate their average total lifespan. ▶ The estimation of the lifespan distribution is based on a large sample of EoL mobile phones. ▶ The total lifespan of Czech mobile phones is surprisingly long: 7.99 years. ▶ In the years 2010-2020, about 26.3 million pieces of EoL mobile phones will be generated in the Czech Republic. - Abstract: The volume of waste electrical and electronic equipment (WEEE) has been rapidly growing in recent years. In the European Union (EU), legislation promoting the collection and recycling of WEEE has been in force since the year 2003. Yet, both current and recently suggested collection targets for WEEE are completely ineffective when it comes to collection and recycling of small WEEE (s-WEEE), with mobile phones as a typical example. Mobile phones are the most-sold EEE and at the same time one of the appliances with the lowest collection rate. To improve this situation, it is necessary to assess the amount of generated end of life (EoL) mobile phones as precisely as possible. This paper presents a method for assessing EoL mobile phone generation based on a delay model. Within the scope of this paper, the method has been applied to data from the Czech Republic. However, this method can also be applied to other EoL appliances in or outside the Czech Republic. Our results show that the average total lifespan of Czech mobile phones is surprisingly long: 7.99 years. We attribute the long lifespan particularly to the storage time of EoL mobile phones in households, estimated to be 4.35 years. In the years 1990-2000, only 45 thousand EoL mobile phones were generated in the Czech Republic, while in the years 2000-2010 the number grew to 6.5 million pieces, and it is estimated that in the years 2010-2020 about 26.3 million pieces will be generated. Current European legislation sets targets for collection and recycling of WEEE in general, but no specific collection target for EoL mobile phones exists. In the year 2010 only about 3-6% of Czech EoL mobile phones were collected for recovery and recycling. If we make a similar estimation using an estimated average EU value, then within the next 10 years about 1.3 billion EoL mobile phones would be available for recycling in the EU. This amount contains about 31 tonnes of gold and 325 tonnes of silver. Since Europe is dependent on imports of many raw materials, efficient recycling of EoL products could help reduce this dependence. To establish a working collection system, it will be necessary to set new and realistic collection targets.
Non-invasive genetic censusing and monitoring of primate populations.
Arandjelovic, Mimi; Vigilant, Linda
2018-03-01
Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses. © 2018 Wiley Periodicals, Inc.
Ice Accretion Modeling using an Eulerian Approach for Droplet Impingement
NASA Technical Reports Server (NTRS)
Kim, Joe Woong; Garza, Dennis P.; Sankar, Lakshmi N.; Kreeger, Richard E.
2012-01-01
A three-dimensional Eulerian analysis has been developed for modeling droplet impingement on lifting bodies. The Eulerian model solves the conservation equations of mass and momentum to obtain the droplet flow field properties on the same mesh used in CFD simulations. For complex configurations such as a full rotorcraft, the Eulerian approach is more efficient because the Lagrangian approach would require a significant amount of seeding for accurate estimates of collection efficiency. Simulations are performed for various benchmark cases, such as the NACA0012 airfoil, the MS317 airfoil, and an oscillating SC2110 airfoil, to illustrate its use. The present results are compared with results from the Lagrangian approach used in an industry-standard analysis called LEWICE.
Stated Choice design comparison in a developing country: recall and attribute nonattendance
2014-01-01
Background Experimental designs constitute a vital component of all Stated Choice (aka discrete choice experiment) studies. However, there exists limited empirical evaluation of the statistical benefits of Stated Choice (SC) experimental designs that employ non-zero prior estimates in constructing non-orthogonal constrained designs. This paper statistically compares the performance of contrasting SC experimental designs. In so doing, the effect of respondent literacy on patterns of Attribute non-Attendance (ANA) across fractional factorial orthogonal and efficient designs is also evaluated. The study uses a 'real' SC design to model consumer choice of primary health care providers in rural north India. A total of 623 respondents were sampled across four villages in Uttar Pradesh, India. Methods Comparison of orthogonal and efficient SC experimental designs is based on several measures. Appropriate comparison of each design's respective efficiency measure is made using D-error results. Standardised Akaike Information Criteria are compared between designs and across recall periods. Comparisons control for stated and inferred ANA. Coefficient and standard error estimates are also compared. Results The added complexity of the efficient SC design, theorised elsewhere, is reflected in higher estimated amounts of ANA among illiterate respondents. However, controlling for ANA using stated and inferred methods consistently shows that the efficient design performs statistically better. Modelling SC data from the orthogonal and efficient designs shows that the model-fit of the efficient design outperforms that of the orthogonal design when a 14-day recall period is used. The performance of the orthogonal design, with respect to standardised AIC model-fit, is better when longer recall periods of 30 days, 6 months and 12 months are used. Conclusions The effect of the efficient design's cognitive demand is apparent among literate and illiterate respondents, although it is more pronounced among illiterate respondents. This study empirically confirms that relaxing the orthogonality constraint of SC experimental designs increases the information collected in choice tasks, subject to the accuracy of the non-zero priors in the design and the correct specification of a 'real' SC recall period. PMID:25386388
Xenos, P; Yfantopoulos, J; Nektarios, M; Polyzos, N; Tinios, P; Constantopoulos, A
2017-01-01
This study is an initial effort to examine the dynamics of efficiency and productivity in Greek public hospitals during the first phase of the crisis, 2009-2012. Data were collected by the Ministry of Health after several quality controls ensuring comparability and validity of hospital inputs and outputs. Productivity is estimated using the Malmquist Indicator, decomposing the estimated values into efficiency and technological change. Hospital efficiency and productivity growth are calculated by bootstrapping the non-parametric Malmquist analysis. The advantage of this method is that efficiency and productivity are estimated together with the corresponding confidence intervals. Additionally, a random-effects Tobit model is explored to investigate the impact of contextual factors on the magnitude of efficiency. Findings reveal substantial variations in hospital productivity over the period from 2009 to 2012. The economic crisis of 2009 had a negative impact on productivity. The average Malmquist Productivity Indicator (MPI) score is 0.72, with unity signifying stable production. Approximately 91% of the hospitals score lower than unity. A substantial increase is observed between 2010 and 2011, as indicated by the average MPI score, which rises to 1.52. Moreover, technology change scored more than unity in more than 75% of hospitals. The last period (2011-2012) showed stabilization in the expansionary process of productivity. The main factors contributing to overall productivity gains are increases in occupancy rates, and the type and size of the hospital. This paper attempts to offer insights into efficiency and productivity growth for public hospitals in Greece. The results suggest that the average hospital experienced substantial productivity growth between 2009 and 2012, as indicated by variations in MPI. Almost all of the productivity increase was due to technology change, which could be explained by the concurrent managerial and financing healthcare reforms. Hospitals operating under decreasing returns to scale could achieve higher efficiency rates by reducing their capacity. However, certain social objectives should also be considered. Emphasis perhaps should be placed on utilizing and advancing managerial and organizational reforms, so that the benefits of technological improvements will have a continuing positive impact in the future.
Köck, A; Ledinek, M; Gruber, L; Steininger, F; Fuerst-Waltl, B; Egger-Danner, C
2018-01-01
This study is part of a larger project whose overall objective was to evaluate the possibilities for genetic improvement of efficiency in Austrian dairy cattle. In 2014, a 1-yr data collection was carried out. Data from 6,519 cows kept on 161 farms were recorded. In addition to routinely recorded data (e.g., milk yield, fertility, disease data), data on novel traits [e.g., body weight (BW), body condition score (BCS), lameness score, body measurements] and individual feeding information and feed quality were recorded on each test-day. The specific objective of this study was to estimate genetic parameters for efficiency (related) traits and to investigate their relationships with BCS and lameness in Austrian Fleckvieh, Brown Swiss, and Holstein cows. The following efficiency (related) traits were considered: energy-corrected milk (ECM), BW, dry matter intake (DMI), energy intake (INEL), ratio of milk output to metabolic BW (ECM/BW^0.75), ratio of milk output to DMI (ECM/DMI), and ratio of milk energy output to total energy intake (LE/INEL, LE = energy in milk). For Fleckvieh, the heritability estimates of the efficiency (related) traits ranged from 0.11 for LE/INEL to 0.44 for BW. Heritabilities for BCS and lameness were 0.19 and 0.07, respectively. Repeatabilities were high and ranged from 0.30 for LE/INEL to 0.83 for BW. Heritability estimates were generally lower for Brown Swiss and Holstein, but repeatabilities were in the same range as for Fleckvieh. In all 3 breeds, more-efficient cows were found to have a higher milk yield, lower BW, slightly higher DMI, and lower BCS. Higher efficiency was associated with slightly fewer lameness problems, most likely due to the lower BW (especially in Fleckvieh) and higher DMI of the more-efficient cows. Body weight and BCS were positively correlated. Therefore, when selecting for a lower BW, BCS is required as additional information because, otherwise, no distinction between large animals with low BCS and smaller animals with normal BCS would be possible. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
An efficient absorbing system for spectrophotometric determination of nitrogen dioxide
NASA Astrophysics Data System (ADS)
Kaveeshwar, Rachana; Amlathe, Sulbha; Gupta, V. K.
A simple and sensitive spectrophotometric method for the determination of atmospheric nitrogen dioxide using o-nitroaniline as an efficient absorbing, as well as diazotizing, reagent is described. o-Nitroaniline present in the absorbing medium is diazotized by the absorbed nitrite ion to form a diazonium compound. This is later coupled with 1-amino-2-naphthalene sulphonic acid (ANSA) in acidic medium to give a red-violet-coloured dye having λmax = 545 nm. The isoamyl extract of the red azo dye has λmax = 530 nm. The proposed reagent has ≈100% collection efficiency, and the stoichiometric ratio of NO₂:NO₂⁻ is 0.74. The other important analytical parameters have been investigated. By employing solvent extraction the sensitivity of the reaction was increased, and nitrogen dioxide concentrations as low as 0.03 mg m⁻³ could be estimated.
An analysis of mobile whole blood collection labor efficiency.
Rose, William N; Dayton, Paula J; Raife, Thomas J
2011-07-01
Labor efficiency is desirable in mobile blood collection. There are few published data on labor efficiency. The variability in the labor efficiency of mobile whole blood collections was analyzed. We sought to improve our labor efficiency using lean manufacturing principles. Workflow changes in mobile collections were implemented with the goal of minimizing labor expenditures. To measure success, data on labor efficiency, measured in units/hour/full-time equivalent (FTE), were collected. The labor efficiency in the 6-month period before the implementation of changes, and in months 1 to 6 and 7 to 12 after implementation, was analyzed and compared. Labor efficiency in the 6-month period preceding implementation was 1.06 ± 0.4 units collected/hour/FTE. In months 1 to 6, labor efficiency declined slightly to 0.92 ± 0.4 units collected/hour/FTE (p = 0.016 vs. preimplementation). In months 7 to 12, the mean labor efficiency returned to preimplementation levels of 1.09 ± 0.4 units collected/hour/FTE. Regression analysis correlating labor efficiency with total units collected per drive revealed a strong correlation (R² = 0.48 for the aggregate data from all three periods), indicating that nearly half of the variation in labor efficiency was associated with drive size. The lean-based changes in workflow were subjectively favored by employees and donors. The labor efficiency of our mobile whole blood drives is strongly influenced by drive size. Larger drives are more efficient, with diminishing returns above 40 units collected. Lean-based workflow changes were positively received by employees and donors. © 2011 American Association of Blood Banks.
Estimation of the annual flow and stock of marine debris in South Korea for management purposes.
Jang, Yong Chang; Lee, Jongmyoung; Hong, Sunwook; Mok, Jin Yong; Kim, Kyoung Shin; Lee, Yun Jeong; Choi, Hyun-Woo; Kang, Hongmook; Lee, Sukhui
2014-09-15
The annual flow and stock of marine debris in the seas of South Korea was estimated by summarizing previous survey results and integrating them with other relevant information to underpin the national marine debris management plan. The annual inflow of marine debris was estimated to be 91,195 tons [32,825 tons (36% of the total) from sources on land and 58,370 tons (64%) from ocean sources]. As of the end of 2012, the total stock of marine debris on all South Korean coasts (12,029 tons), on the seabed (137,761 tons), and in the water column (2451 tons) was estimated to be 152,241 tons. In 2012, 42,595 tons of marine debris was collected from coasts, seabeds, and the water column. This is a rare case study estimating the amount of marine debris at a national level, and its results provide essential information for the development of efficient marine debris management policies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin
2016-01-01
Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentrations of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentrations were determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration on turbidity provided a good fit, with R² = 0.86-0.93 for 72 data sets collected in the industrial river and R² = 0.60-0.85 for 60 data sets collected in the cleaner river. All the regressions showed good linear relationships, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids, and that their concentrations can be approximated using the regression equations. Thus, the linear regression equations were applied to estimate metal concentrations using online turbidity data from January 1 to June 30, 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fate of total suspended solids; in addition, metal concentrations downstream of the two rivers were predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction results indicate that exploiting the relationship between metal concentrations and turbidity may be an effective technique for efficient estimation and prediction of metal concentrations, facilitating long-term monitoring with high temporal and spatial density. PMID:27028017
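As a minimal illustration of the turbidity-metal regression workflow described above, the sketch below fits a per-metal linear model and applies it to new turbidity readings; the synthetic slope, intercept, noise, and readings are stand-ins for the rivers' data, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for paired field measurements:
# turbidity (NTU) and one metal's concentration (ug/L).
turbidity = rng.uniform(10, 200, size=72)
metal = 0.9 * turbidity + 5 + rng.normal(0, 8, size=72)  # hypothetical relation

# Least-squares fit of metal concentration on turbidity
slope, intercept = np.polyfit(turbidity, metal, deg=1)
pred = slope * turbidity + intercept
r2 = 1 - np.sum((metal - pred) ** 2) / np.sum((metal - metal.mean()) ** 2)
print(f"metal ~ {slope:.2f} * T + {intercept:.2f}, R^2 = {r2:.2f}")

# Apply the fitted relation to new online turbidity readings
new_turbidity = np.array([25.0, 80.0, 150.0])
print("estimated concentrations:", slope * new_turbidity + intercept)
```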
Electron beam induced current in the high injection regime.
Haney, Paul M; Yoon, Heayoung P; Koirala, Prakash; Collins, Robert W; Zhitenev, Nikolai B
2015-07-24
Electron beam induced current (EBIC) is a powerful technique which measures the charge collection efficiency of photovoltaics with sub-micron spatial resolution. The exciting electron beam results in a high generation rate density of electron-hole pairs, which may drive the system into nonlinear regimes. An analytic model is presented which describes the EBIC response when the total electron-hole pair generation rate exceeds the rate at which carriers are extracted by the photovoltaic cell, and charge accumulation and screening occur. The model provides a simple estimate of the onset of the high injection regime in terms of the material resistivity and thickness, and provides a straightforward way to predict the EBIC lineshape in the high injection regime. The model is verified by comparing its predictions to numerical simulations in one- and two-dimensions. Features of the experimental data, such as the magnitude and position of maximum collection efficiency versus electron beam current, are consistent with the three-dimensional model.
Aerodynamic penalties of heavy rain on a landing aircraft
NASA Technical Reports Server (NTRS)
Haines, P. A.; Luers, J. K.
1982-01-01
The aerodynamic penalties of very heavy rain on landing aircraft were investigated. Based on severity and frequency of occurrence, rainfall rates of 100 mm/hr, 500 mm/hr, and 2000 mm/hr were designated, respectively, as heavy, severe, and incredible. The overall and local collection efficiencies of an aircraft encountering these rains were calculated. The analysis was based on raindrop trajectories in potential flow about an aircraft. All raindrops impinging on the aircraft are assumed to take on its speed. The momentum loss from the rain impact was later used in a landing simulation program. The local collection efficiency was used in estimating the aerodynamic roughness of an aircraft in heavy rain, and the drag increase from this roughness was calculated. A number of landing simulations were performed under a fixed-stick assumption. Serious landing shortfalls were found for either the momentum or the drag penalty alone, and especially large shortfalls for the combination of both. The latter shortfalls are comparable to those found for severe wind shear conditions.
Karoui, Sofiene; Díaz, Clara; Serrano, Magdalena; Cue, Roger; Celorrio, Idoia; Carabaño, María J
2011-03-01
Declining artificial insemination (AI) results in highly selected dairy cattle populations have renewed interest in the evaluation of male fertility. Data from 42,348 ejaculates collected from 1990 to 2007 on 502 Holstein bulls were analysed in a Bayesian framework to provide estimates of the evolution of semen traits routinely collected in AI centres throughout the last decades of intense selection for production traits, and to estimate genetic parameters. The traits under consideration were volume (VOL), concentration (CONC), number of spermatozoa per ejaculate (NESPZ), mass motility score (MM), individual motility (IM), and post-thawing motility (PTM). The environmental factors studied were year-season and week of collection, which account for changes in environmental and technical conditions over time, age at collection, ejaculate order, time from previous collection (TPC), and time between collection and freezing (TCF) (only for PTM). The bull's inbreeding coefficient (Fi) and the bull's permanent environmental and additive genetic effects were also considered. The use of reduced models was evaluated using the Bayes factor. For all the systematic effects tested, strong or very strong evidence in favour of including the effect in the model was obtained, except for Fi for motility traits and TCF for PTM. No systematic time trends for environmental or bull effects were observed, except for PTM, which showed an increasing environmental trend associated with improvements in freezing-thawing protocols. Heritability estimates were moderate (0.16-0.22), except for IM, which presented a low value (0.07). Genetic correlations among motilities and between motilities and CONC were large and positive (0.38-0.87); VOL showed a negative correlation with CONC (-0.13), but with a wide 95% highest posterior density interval. The magnitude of the heritabilities would allow efficient selection if required and supports the use of these traits as indicators of the sperm viability component of bulls' breeding soundness. Copyright © 2011 Elsevier B.V. All rights reserved.
A Form 990 Schedule H conundrum: how much of your bad debt might be charity?
Bailey, Shari; Franklin, David; Hearle, Keith
2010-04-01
IRS Form 990 Schedule H requires hospitals to estimate the amount of bad debt expense attributable to patients eligible for charity under the hospital's charity care policy. Responses to Schedule H, Part III.A.3 open up the entire patient collection process to examination by the IRS, state officials, and the public. Using predictive analytics can help hospitals efficiently identify charity-eligible patients when answering Part III.A.3.
2011-06-10
…the ability of IC leaders to think and manage jointly, while not diluting the specialized activities of collectors and analysts working within their… organizational model and doctrine that are increasingly relics of Cold-War threats and thinking. As the military has learned, it takes decades to…
Breakthrough during air sampling with polyurethane foam: What do PUF 2/PUF 1 ratios mean?
Bidleman, Terry F; Tysklind, Mats
2018-02-01
Frontal chromatography theory is applied to describe movement of gaseous semivolatile organic compounds (SVOCs) through a column of polyurethane foam (PUF). Collected mass fractions (F_C) are predicted for sample volume/breakthrough volume ratios (τ = V_S/V_B) up to 6.0 and PUF bed theoretical plate numbers (N) from 2 to 16. The predictions assume constant air concentrations and temperatures. Extension of the calculations is done to relate the collection efficiency of a 2-PUF train (F_C1+2) to the PUF 2/PUF 1 ratio. F_C1+2 exceeds 0.9 for PUF 2/PUF 1 ≤ 0.5 and lengths of PUF commonly used in air samplers. As the PUF 2/PUF 1 ratio approaches unity, confidence in these predictions is limited by the analytical ability to distinguish residues on the two PUFs. Field data should not be arbitrarily discarded because some analytes broke through to the backup PUF trap. The fractional collection efficiencies can be used to estimate air concentrations from quantities retained on the PUF trap when sampling is not quantitative. Copyright © 2017 Elsevier Ltd. All rights reserved.
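The plate-model arithmetic above lends itself to a short numerical sketch. The version below assumes the common tanks-in-series idealization of frontal chromatography, in which the breakthrough curve of a bed with N theoretical plates at throughput ratio t is the regularized lower incomplete gamma function P(N, Nt); the collected fraction then follows by integrating the escaped mass. This illustrates the general approach only and is not the paper's exact model.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc

def collected_fraction(tau, n_plates):
    """Fraction of gas-phase analyte retained on a sorbent bed.

    Assumes a tanks-in-series breakthrough curve: effluent/influent
    concentration at throughput ratio t is gammainc(N, N*t), the
    regularized lower incomplete gamma function. The mass escaping the
    bed up to tau is the integral of that curve; the rest is collected.
    """
    escaped, _ = quad(lambda t: gammainc(n_plates, n_plates * t), 0.0, tau)
    return 1.0 - escaped / tau

for n in (2, 4, 8, 16):
    row = [round(collected_fraction(tau, n), 3) for tau in (0.5, 1, 2, 4, 6)]
    print(f"N={n:2d}: F_C at tau = 0.5, 1, 2, 4, 6 ->", row)
```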
On estimation of time-dependent attributable fraction from population-based case-control studies.
Zhao, Wei; Chen, Ying Qing; Hsu, Li
2017-09-01
Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidence. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to an unrestricted estimator and a combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the equality of the survival medians is deviated from, but is expected to be as good as or equal to that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
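To make the shrinkage idea concrete, here is an illustrative numerical sketch: sample-specific medians are shrunk toward an inverse-variance pooled median, with a homogeneity statistic controlling the degree of pooling. The data, the bootstrap variance step, and the weighting rule are generic stand-ins (and right-censoring is ignored for brevity), not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survival-type times for three independent samples
# (censoring handling is omitted; the paper's method uses censored data).
samples = [rng.exponential(scale=s, size=120) for s in (2.0, 2.2, 2.4)]

medians = np.array([np.median(x) for x in samples])
# Bootstrap variance of each sample median
boot_var = np.array([
    np.var([np.median(rng.choice(x, size=x.size, replace=True))
            for _ in range(500)])
    for x in samples
])

pooled = np.average(medians, weights=1 / boot_var)  # "combined" estimator
# Homogeneity statistic: large when the medians disagree
Q = np.sum((medians - pooled) ** 2 / boot_var)
k = len(samples)
# Illustrative shrinkage weight: full pooling when Q is small,
# little pooling when homogeneity clearly fails
w = min(1.0, (k - 1) / max(Q, 1e-12))

shrunk = w * pooled + (1 - w) * medians
print("unrestricted:", medians.round(3))
print("combined    :", round(pooled, 3))
print("shrinkage   :", shrunk.round(3))
```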
Reducing sampling error in faecal egg counts from black rhinoceros (Diceros bicornis).
Stringer, Andrew P; Smith, Diane; Kerley, Graham I H; Linklater, Wayne L
2014-04-01
Faecal egg counts (FECs) are commonly used for the non-invasive assessment of parasite load within hosts. Sources of error, however, have been identified in laboratory techniques and sample storage. Here we focus on sampling error. We test whether a delay in sample collection can affect FECs, and estimate the number of samples needed to reliably assess mean parasite abundance within a host population. Two commonly found parasite eggs in black rhinoceros (Diceros bicornis) dung, strongyle-type nematodes and Anoplocephala gigantea, were used. We find that collection of dung from the centre of faecal boluses up to six hours after defecation does not affect FECs. More than nine samples were needed to greatly improve confidence intervals of the estimated mean parasite abundance within a host population. These results should improve the cost-effectiveness and efficiency of sampling regimes, and support the usefulness of FECs when used for the non-invasive assessment of parasite abundance in black rhinoceros populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An Apple IIe microcomputer is being used to collect data and to control a pyrolysis system. Pyrolysis data for bitumen and kerogen are widely used to estimate source rock maturity. For a detailed analysis of kinetic parameters, however, data must be obtained more precisely than for routine pyrolysis. The authors discuss the program, which controls the temperature ramp of the furnace that heats the sample and collects data from a thermocouple in the furnace and from the flame ionization detector measuring evolved hydrocarbons. These data are stored on disk for later use by programs that display the results of the experiment or calculate kinetic parameters. The program is written in Applesoft BASIC with subroutines in Apple assembler for speed and efficiency.
NASA Astrophysics Data System (ADS)
Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo
2017-06-01
Rainfall data collected continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, although the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for designing an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes estimation accuracy and minimizes total metering cost, using the variance reduction algorithm together with a time-invariant climatological variogram. This problem has been solved using an enumerative search algorithm that evaluates the exact Pareto front in efficient computational time.
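A stripped-down sketch of the enumerative two-objective search is shown below: every gauge subset is scored on (cost, estimation variance) and the exact Pareto front is extracted. The per-gauge costs and the variance function are random placeholders; in the study the variance would come from the variance-reduction algorithm driven by the climatological variogram.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

n_gauges = 8
cost = rng.uniform(1.0, 3.0, size=n_gauges)  # hypothetical per-gauge cost

def estimation_variance(subset):
    # Placeholder objective: a real implementation would compute the
    # kriging variance reduction from the climatological variogram.
    return 10.0 / (1 + len(subset)) + rng.uniform(0, 0.1)

# Enumerate all non-empty subsets and score both objectives
solutions = []
for r in range(1, n_gauges + 1):
    for subset in itertools.combinations(range(n_gauges), r):
        solutions.append((cost[list(subset)].sum(),
                          estimation_variance(subset), subset))

# Exact Pareto front: keep solutions not dominated in (cost, variance)
def dominates(b, a):
    return b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])

front = [s for s in solutions if not any(dominates(t, s) for t in solutions)]
for c, v, subset in sorted(front):
    print(f"cost={c:5.2f}  variance={v:5.2f}  gauges={subset}")
```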
Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo
2016-01-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure is shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 305.5 Determinations of estimated annual energy consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. (a) Procedures for determining the estimated annual energy consumption, the estimated annual operating costs, the energy efficiency ratings, and the efficacy...
Data from selected U.S. Geological Survey National Stream Water Quality Monitoring Networks
Alexander, Richard B.; Slack, James R.; Ludtke, Amy S.; Fitzgerald, Kathleen K.; Schertz, Terry L.
1998-01-01
A nationally consistent and well-documented collection of water quality and quantity data compiled during the past 30 years for streams and rivers in the United States is now available on CD-ROM and accessible over the World Wide Web. The data include measurements from two U.S. Geological Survey (USGS) national networks for 122 physical, chemical, and biological properties of water collected at 680 monitoring stations from 1962 to 1995, quality assurance information that describes the sample collection agencies, laboratories, analytical methods, and estimates of laboratory measurement error (bias and variance), and information on selected cultural and natural characteristics of the station watersheds. The data are easily accessed via user-supplied software including Web browser, spreadsheet, and word processor, or may be queried and printed according to user-specified criteria using the supplied retrieval software on CD-ROM. The water quality data serve a variety of scientific uses including research and educational applications related to trend detection, flux estimation, investigations of the effects of the natural environment and cultural sources on water quality, and the development of statistical methods for designing efficient monitoring networks and interpreting water resources data.
Amanze, Ogbonna O.; La Hera-Fuentes, Gina; Silverman-Retana, Omar; Contreras-Loya, David; Ashefor, Gregory A.; Ogungbemi, Kayode M.
2018-01-01
Objective We estimated the average annual cost per patient of ART per facility (unit cost) in Nigeria, described the variation in costs across facilities, and identified factors associated with this variation. Methods We used facility-level data from 80 facilities in Nigeria, collected between December 2014 and May 2015. We estimated unit costs at each facility as the ratio of total costs (the sum of costs of staff, recurrent inputs and services, capital, training, laboratory tests, and antiretroviral and TB treatment drugs) to the annual number of patients. We applied linear regressions to estimate factors associated with ART cost per patient. Results The unit ART cost in Nigeria was $157 USD nationally, and the facility-level mean was $231 USD. The study found wide variability in unit costs across facilities. Variations in costs were explained by the number of patients, level of care, task shifting (shifting tasks from doctors to less specialized staff, mainly nurses, to provide ART), and providers' competence. The study illuminated the potentially important role that management practices can play in improving the efficiency of ART services. Conclusions Our study identifies characteristics of services associated with the most efficient implementation of ART services in Nigeria. These results will help design efficient program scale-up to deliver comprehensive HIV services in Nigeria by distinguishing features linked to lower unit costs. PMID:29718906
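The unit-cost arithmetic and the regression step described above translate directly into code. The sketch below computes per-facility unit costs and fits a linear model of log unit cost on facility characteristics; all figures and covariates are hypothetical stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_fac = 80

# Hypothetical facility-level annual input costs (USD) and patient counts
staff, drugs, capital = (rng.uniform(2e4, 2e5, n_fac) for _ in range(3))
patients = rng.integers(100, 3000, n_fac)

unit_cost = (staff + drugs + capital) / patients  # cost per patient-year

# Covariates: intercept, log volume, and two hypothetical indicators
X = np.column_stack([
    np.ones(n_fac),
    np.log(patients),
    rng.integers(0, 2, n_fac),  # level-of-care dummy (assumed)
    rng.integers(0, 2, n_fac),  # task-shifting dummy (assumed)
])
beta, *_ = np.linalg.lstsq(X, np.log(unit_cost), rcond=None)
print("intercept, log-volume, level, task-shifting:", beta.round(3))
```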
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
Real-Time PCR Quantification Using A Variable Reaction Efficiency Model
Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.
2008-01-01
Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
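The idea of regressing per-cycle efficiency toward an asymptotic peak can be sketched numerically. Below, per-cycle amplification ratios are computed from a simulated fluorescence curve and fitted with a sigmoid that decays from a peak-efficiency plateau; the logistic form and all constants are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Assumed per-cycle efficiency profile: near a peak e_max early,
# decaying as reagents deplete.
def efficiency(c, e_max, c0, s):
    return e_max / (1.0 + np.exp((c - c0) / s))

cycles = np.arange(40)
true_e = efficiency(cycles, 0.95, 28.0, 2.5)
fluor = 1e-9 * np.cumprod(1.0 + true_e)          # F_c grows by (1 + E_c)
fluor *= rng.normal(1.0, 0.01, size=fluor.size)  # multiplicative noise

# Observed per-cycle ratios give noisy efficiency estimates
obs_e = fluor[1:] / fluor[:-1] - 1.0

# Regress the efficiency profile; e_max is the asymptotic peak efficiency
popt, _ = curve_fit(efficiency, cycles[1:], obs_e, p0=(0.9, 25.0, 2.0))
print(f"estimated peak efficiency: {popt[0]:.3f} (true value 0.95)")
```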
A general framework for the regression analysis of pooled biomarker assessments.
Liu, Yan; McMahan, Christopher; Gallagher, Colin
2017-07-10
As a cost-efficient data collection mechanism, the process of assaying pooled biospecimens is becoming increasingly common in epidemiological research; for example, pooling has been proposed for the purpose of evaluating the diagnostic efficacy of biological markers (biomarkers). To this end, several authors have proposed techniques that allow for the analysis of continuous pooled biomarker assessments. Regrettably, most of these techniques proceed under restrictive assumptions, are unable to account for the effects of measurement error, and fail to control for confounding variables. These limitations are understandably attributable to the complex structure that is inherent to measurements taken on pooled specimens. Consequently, in order to provide practitioners with the tools necessary to accurately and efficiently analyze pooled biomarker assessments, herein, a general Monte Carlo maximum likelihood-based procedure is presented. The proposed approach allows for the regression analysis of pooled data under practically all parametric models and can be used to directly account for the effects of measurement error. Through simulation, it is shown that the proposed approach can accurately and efficiently estimate all unknown parameters and is more computationally efficient than existing techniques. This new methodology is further illustrated using monocyte chemotactic protein-1 data collected by the Collaborative Perinatal Project in an effort to assess the relationship between this chemokine and the risk of miscarriage. Copyright © 2017 John Wiley & Sons, Ltd.
Muleta, Kebede T; Bulli, Peter; Zhang, Zhiwu; Chen, Xianming; Pumphrey, Michael
2017-11-01
Harnessing diversity from germplasm collections is more feasible today because of the development of lower-cost and higher-throughput genotyping methods. However, the cost of phenotyping is still generally high, so efficient methods of sampling and exploiting useful diversity are needed. Genomic selection (GS) has the potential to enhance the use of desirable genetic variation in germplasm collections through predicting the genomic estimated breeding values (GEBVs) for all traits that have been measured. Here, we evaluated the effects of various scenarios of population genetic properties and marker density on the accuracy of GEBVs in the context of applying GS for wheat (Triticum aestivum L.) germplasm use. Empirical data for adult plant resistance to stripe rust (Puccinia striiformis f. sp. tritici) collected on 1163 spring wheat accessions and genotypic data based on the wheat 9K single nucleotide polymorphism (SNP) iSelect assay were used for various genomic prediction tests. Unsurprisingly, the results of the cross-validation tests demonstrated that prediction accuracy increased with an increase in training population size and marker density. It was evident that using all the available markers (5619) was unnecessary for capturing the trait variation in the germplasm collection, with no further gain in prediction accuracy beyond 1 SNP per 3.2 cM (∼1850 markers), which is close to the linkage disequilibrium decay rate in this population. Collectively, our results suggest that larger germplasm collections may be efficiently sampled via lower-density genotyping methods, whereas genetic relationships between the training and validation populations remain critical when exploiting GS to select from germplasm collections. Copyright © 2017 Crop Science Society of America.
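As a schematic of how GEBVs are predicted and how marker density affects accuracy, the sketch below simulates a genotype matrix, trains a ridge-regression (RR-BLUP-style) model on a training set, and scores prediction accuracy on held-out accessions at two marker densities. The population size, effect sizes, and ridge penalty are arbitrary illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 1000, 2000  # accessions, SNP markers (illustrative sizes)

X = rng.integers(0, 3, size=(n, m)).astype(float)  # 0/1/2 genotype calls
X -= X.mean(axis=0)                                # center marker columns
beta = rng.normal(0, 1 / np.sqrt(m), size=m)       # polygenic effects
y = X @ beta + rng.normal(0, 1.0, size=n)          # simulated phenotype

train, test = np.arange(0, 800), np.arange(800, n)

def rrblup_accuracy(markers):
    Xt = X[np.ix_(train, markers)]
    lam = 1.0 * len(markers)  # arbitrary ridge penalty
    # beta_hat = (Xt' Xt + lam * I)^(-1) Xt' y
    b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(len(markers)),
                        Xt.T @ y[train])
    gebv = X[np.ix_(test, markers)] @ b
    return np.corrcoef(gebv, y[test])[0, 1]

for density in (m, m // 3):  # full panel vs. thinned panel
    markers = rng.choice(m, size=density, replace=False)
    print(f"{density} markers: prediction accuracy = {rrblup_accuracy(markers):.2f}")
```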
Shreay, Sanatan; Ma, Martin; McCluskey, Jill; Mittelhammer, Ron C; Gitlin, Matthew; Stephens, J Mark
2014-01-01
Objective To explore the relative efficiency of dialysis facilities in the United States and identify factors that are associated with efficiency in the production of dialysis treatments. Data Sources/Study Setting Medicare cost report data from 4,343 free-standing dialysis facilities in the United States that offered in-center hemodialysis in 2010. Study Design A cross-sectional, facility-level retrospective database analysis, utilizing data envelopment analysis (DEA) to estimate facility efficiency. Data Collection/Extraction Methods Treatment data and cost and labor inputs of dialysis treatments were obtained from 2010 Medicare Renal Cost Reports. Demographic data were obtained from the 2010 U.S. Census. Principal Findings Only 26.6 percent of facilities were technically efficient. Neither the intensity of market competition nor the profit status of the facility had a significant effect on efficiency. Facilities that were members of large chains were less likely to be efficient. Cost and labor savings due to changes in drug protocols had little effect on overall dialysis center efficiency. Conclusions The majority of free-standing dialysis facilities in the United States were functioning in a technically inefficient manner. As payment systems increasingly employ capitation and bundling provisions, these institutions will need to evaluate their efficiency to remain competitive. PMID:24237043
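For readers unfamiliar with how a DEA efficiency score is obtained, below is a minimal input-oriented, constant-returns-to-scale (CCR) envelopment linear program, solved once per facility; the input/output numbers are tiny hypothetical stand-ins, and the study's actual DEA specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical facilities: rows are DMUs with two inputs
# (labor hours, supply dollars) and one output (treatments delivered).
X = np.array([[120.0, 50.0], [100.0, 80.0], [150.0, 40.0], [90.0, 90.0]])
Y = np.array([[500.0], [450.0], [520.0], [400.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_input_efficiency(o):
    # Variables: [theta, lambda_1..lambda_n]; objective: minimize theta.
    c = np.concatenate(([1.0], np.zeros(n)))
    # Inputs:  sum_j lambda_j * x_j <= theta * x_o
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: sum_j lambda_j * y_j >= y_o  (sign-flipped into <= form)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

for o in range(n):
    print(f"facility {o}: technical efficiency = {ccr_input_efficiency(o):.3f}")
```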
Design of runoff water harvesting systems and its role in minimizing water losses
NASA Astrophysics Data System (ADS)
Berliner, P.; Carmi, G.; Leake, S.; Agam, N.
2016-12-01
Precipitation is one of the major water sources for agricultural production in arid and semi-arid areas. Rainfall is limited and erratic and does not always coincide with the crop growing season. Only part of the rain is absorbed by the soil. Soil evaporation is most severe in these regions, and a large part of the absorbed water is lost to evaporation. The technique of collecting and conveying runoff is known as runoff harvesting. Microcatchments are one of the primary techniques used for collecting, storing and conserving local surface runoff for growing trees/shrubs. In this system, runoff water is collected close to the area in which it was generated, and trees/shrubs may utilize the water. The main objective of the present research was to estimate the effect that the design of the micro-catchment collection area (shallow basin versus deep trench) has on the efficiency of water conservation in the soil profile. The study was carried out over two years using regular micro-catchments (three replicates) with a surface area of 9 m2 (3 x 3 m) and a depth of 0.1 m, and trenches (three replicates) with a surface area of 12 m2 (12 x 1 m) and 1 m depth. One and three olive trees were planted inside the trenches and micro-catchments, respectively. Access tubes for a neutron probe were installed in the micro-catchments and trenches (four and seven, respectively) to depths of 3 m. Soil water content in the soil profile was monitored. Sap flow in the trees was measured by a PS-TDP8 Granier sap flow system every 0.5 hour, and fluxes were computed for the time intervals that correspond to the soil water measurements. The first-year study involved flooding the trenches and regular micro-catchments once with the same amount of water (1.5 m3), and the second-year study involved flooding four times with 0.25 m3 each time. Flooding was followed by monitoring the water balance components and estimating evaporation losses and water use efficiency of the olive trees. Evaporation from the trenches and regular micro-catchments was estimated as the difference between evapotranspiration obtained from soil water content monitoring and transpiration estimated from sap flow measurements. The results clearly show that evaporation from the regular micro-catchments was significantly larger than that from the trenches over the entire duration of both experiments.
40 CFR Table Jj-6 to Subpart Jj of... - Collection Efficiencies of Anaerobic Digesters
Code of Federal Regulations, 2011 CFR
2011-07-01
Table JJ-6 to Subpart JJ of Part 98—Collection Efficiencies of Anaerobic Digesters. Columns: anaerobic digester type; cover type; methane collection efficiency. First listed entry (truncated in the source): covered anaerobic lagoon (biogas capture), bank to...
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either functions, we consider a wide class calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
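The moment-balancing idea can be illustrated with the exponential-tilting member of the class: weights on the treated group are chosen so their weighted covariate means match the combined-sample means. A hedged sketch, not the authors' code; the convex dual used here is a standard route to such weights:

```python
import numpy as np
from scipy.optimize import minimize

def tilting_weights(X_group, target_moments):
    """Exponential-tilting weights w_i proportional to exp(lambda' x_i),
    with lambda chosen so weighted covariate means hit target_moments."""
    def dual(lam):
        # Convex dual objective; its gradient is (weighted mean - target).
        return np.log(np.mean(np.exp(X_group @ lam))) - lam @ target_moments
    lam = minimize(dual, np.zeros(X_group.shape[1]), method="BFGS").x
    w = np.exp(X_group @ lam)
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # covariates for all subjects
treated = rng.random(500) < 0.4        # hypothetical treatment indicator
w = tilting_weights(X[treated], X.mean(axis=0))
print(w @ X[treated], X.mean(axis=0))  # weighted treated means ~ overall means
```

With analogous weights for the control group, the weighted outcome difference estimates the average treatment effect without an explicit propensity score model.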
Power Recycled Weak Value Based Metrology
2015-04-29
...cavity, one is able to more efficiently use the input laser power by increasing the total power inside the interferometer. In the context of these weak... We consider a continuous wave laser...
PHOTOSYNTHETIC EFFICIENCY OF MARINE PLANTS
Yocum, C. S.; Blinks, L. R.
1954-01-01
Multicellular marine plants were collected from their natural habitats and the quantum efficiency of their photosynthesis was determined in the laboratory in five narrow wave length bands in the visible spectrum. The results along with estimates of the relative absorption by the various plastid pigments show a fairly uniform efficiency of 0.08 molecules O2 per absorbed quantum for (a) chlorophyll of one flowering plant, green algae, and brown algae, (b) fucoxanthol and other carotenoids of brown algae, and (c) the phycobilin pigments phycocyanin and phycoerythrin of red algae. The carotenoids of green algae are sometimes less efficient while those of red algae are largely or entirely inactive. Chlorophyll a of red algae is about one-half as efficient (φo2 = 0.04) as either the phycobilins, or the chlorophyll of most other plants. These results as well as those of high intensity and of fluorescence experiments are consistent with a mechanism in which about half the chlorophyll is inactive while the other half is fully active and is an intermediate in phycoerythrin- and phycocyanin-sensitized photosynthesis. PMID:13192311
Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Matthew S; Wilson, Mollye C
2007-07-01
Vacuum filter socks were evaluated for recovery efficiency of powdered Bacillus atrophaeus spores from two non-porous surfaces, stainless steel and painted wallboard, and two porous surfaces, carpet and bare concrete. Two surface coupons were positioned side-by-side and seeded with aerosolized Bacillus atrophaeus spores. One of the surfaces, a stainless steel reference coupon, was sized to fit into a sample vial for direct spore removal, while the other surface, a sample surface coupon, was sized for a vacuum collection application. Deposited spore material was directly removed from the reference coupon surface and cultured for enumeration of colony forming units (CFU), while deposited spore material was collected from the sample coupon using the vacuum filter sock method, extracted by sonication and cultured for enumeration. Recovery efficiency, which is a measure of overall transfer effectiveness from the surface to culture, was calculated as the number of CFU enumerated from the filter sock sample per unit area relative to the number of CFU enumerated from the co-located reference coupon per unit area. The observed mean filter sock recovery efficiency from stainless steel was 0.29 (SD = 0.14, n = 36), from painted wallboard was 0.25 (SD = 0.15, n = 36), from carpet was 0.28 (SD = 0.13, n = 40) and from bare concrete was 0.19 (SD = 0.14, n = 44). Vacuum filter sock recovery quantitative limits of detection were estimated at 105 CFU/m2 from stainless steel and carpet, 120 CFU/m2 from painted wallboard and 160 CFU/m2 from bare concrete. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling for biological agents such as Bacillus anthracis.
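The recovery-efficiency definition above is a simple ratio of areal CFU densities; a minimal sketch with invented counts:

```python
def recovery_efficiency(cfu_sample, area_sample_m2, cfu_reference, area_reference_m2):
    """CFU per unit area recovered by the filter sock, relative to CFU per
    unit area recovered directly from the co-located reference coupon."""
    return (cfu_sample / area_sample_m2) / (cfu_reference / area_reference_m2)

# Illustrative values only (not the study's raw counts):
print(recovery_efficiency(58, 0.1, 20, 0.01))  # -> 0.29, like the steel mean
```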
Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Bin; Chen, Luoping; Hu, Xiaozhe
2016-03-05
In this paper, we study fast iterative solvers for the solution of fourth order parabolic equations discretized by mixed finite element methods. We propose to use consistent mass matrix in the discretization and use lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRes method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.
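The lumped-mass idea can be mimicked in a few lines: precondition a consistent (tridiagonal) mass matrix with its diagonal row-sum lumping inside a Krylov solve. A toy 1D sketch with SciPy, not the paper's mixed finite element system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, LinearOperator

n = 200
# Toy 1D consistent mass matrix (tridiagonal 1/6, 4/6, 1/6 stencil).
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / 6.0
lumped = np.asarray(M.sum(axis=1)).ravel()          # row-sum (lumped) diagonal
P = LinearOperator((n, n), matvec=lambda v: v / lumped)

b = np.ones(n)
x, info = gmres(M, b, M=P)                          # preconditioned GMRes
print(info, np.linalg.norm(M @ x - b))              # info == 0 -> converged
```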
Khan, D; Samadder, S R
2016-07-01
Collection of municipal solid waste is one of the most important elements of municipal waste management and consumes the largest share of the funds allocated for waste management. The cost of collection and transportation can be reduced relative to the present scenario if the solid waste collection bins are located at suitable places so that the collection routes are minimized. This study presents a method for allocating solid waste collection bins at appropriate, uniformly spaced and easily accessible locations, so that the collection vehicle routes are minimized, for the city of Dhanbad, India. The Network Analyst toolset available in ArcGIS was used to find the optimised route for solid waste collection, considering all the parameters required for efficient solid waste collection. These parameters include the positions of solid waste collection bins, the road network, the population density, waste collection schedules, truck capacities and their characteristics. The present study also demonstrates the significant cost reductions that can be obtained compared with the current practices in the study area. The vehicle routing problem solver tool of ArcGIS was used to identify the cost-effective scenario for waste collection, to estimate its running costs and to simulate its application considering both travel time and travel distance simultaneously. © The Author(s) 2016.
Crowdsourcing for Cognitive Science – The Utility of Smartphones
Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865
Building beef cow nutritional programs with the 1996 NRC beef cattle requirements model.
Lardy, G P; Adams, D C; Klopfenstein, T J; Patterson, H H
2004-01-01
Designing a sound cow-calf nutritional program requires knowledge of nutrient requirements, diet quality, and intake. Effectively using the NRC (1996) beef cattle requirements model (1996NRC) also requires knowledge of dietary degradable intake protein (DIP) and microbial efficiency. Objectives of this paper are to 1) describe a framework in which 1996NRC-applicable data can be generated, 2) describe seasonal changes in nutrients on native range, 3) use the 1996NRC to predict nutrient balance for cattle grazing these forages, and 4) make recommendations for using the 1996NRC for forage-fed cattle. Extrusa samples were collected over 2 yr on native upland range and subirrigated meadow in the Nebraska Sandhills. Samples were analyzed for CP, in vitro OM digestibility (IVOMD), and DIP. Regression equations to predict nutrients were developed from these data. The 1996NRC was used to predict nutrient balances based on the dietary nutrient analyses. Recommendations for model users were also developed. On subirrigated meadow, CP and IVOMD increased rapidly during March and April. On native range, CP and IVOMD increased from April through June but decreased rapidly from August through September. Degradable intake protein (DM basis) followed trends similar to CP for both native range and subirrigated meadow. Predicted nutrient balances for spring- and summer-calving cows agreed with reported values in the literature, provided that IVOMD values were converted to DE before use in the model (1.07 x IVOMD - 8.13). When the IVOMD-to-DE conversion was not used, the model gave unrealistically high NE(m) balances. To effectively use the 1996NRC to estimate protein requirements, users should focus on three key estimates: DIP, microbial efficiency, and TDN intake. Consequently, efforts should be focused on adequately describing seasonal changes in forage nutrient content. In order to increase use of the 1996NRC, research is needed in the following areas: 1) cost-effective and accurate commercial laboratory procedures to estimate DIP, 2) reliable estimates or indicators of microbial efficiency for various forage types and qualities, 3) improved estimates of dietary TDN for forage-based diets, 4) validation work to improve estimates of DIP and MP requirements, and 5) incorporation of nitrogen recycling estimates.
Economics of immunization information systems in the United States: assessing costs and efficiency
Bartlett, Diana L; Molinari, Noelle-Angelique M; Ortega-Sanchez, Ismael R; Urquhart, Gary A
2006-01-01
Background One of the United States' national health objectives for 2010 is that 95% of children aged <6 years participate in fully operational population-based immunization information systems (IIS). Despite important progress, child participation in most IIS has increased slowly, in part due to limited economic knowledge about IIS operations. Should IIS need further improvement, characterizing costs and identifying factors that affect IIS efficiency become crucial. Methods Data were collected from a national sampling frame of the 56 states/cities that received federal immunization grants under U.S. Public Health Service Act 317b and completed the federal 1999 Immunization Registry Annual Report. The sampling frame was stratified by IIS functional status, children's enrollment in the IIS, and whether the IIS had been developed as an independent system or was integrated into a larger system. These sites self-reported IIS developmental and operational program costs for calendar years 1998-2002 using a standardized data collection tool and underwent on-site interviews to verify reported data with information from the state/city financial management system and other financial records. A parametric cost-per-patient-record (CPR) model was estimated. The model assessed the impact of labor and non-labor resources used in development and operations tasks, as well as the impact of information technology, local providers' participation and compliance with federal IIS performance standards (e.g., ensuring the confidentiality and security of information, ensuring timely vaccination data at the time of the patient encounter, and producing official immunization records). Given the number of records minimizing CPR, the additional amount of resources needed to meet national health goals for the year 2010 was also calculated. Results Estimated CPR was as high as $10.30 and as low as $0.09 in operating IIS. About 20% of IIS had between 2.9 and 3.2 million records and showed CPR estimates of $0.09. Overall, CPR was highly sensitive to local providers' participation. To achieve the 2010 goals, additional aggregated costs were estimated to be $75.6 million nationwide. Conclusion Efficiently increasing the number of records in IIS would require additional resources and careful consideration of various strategies to minimize CPR, such as boosting providers' participation. PMID:16925823
Lipid biomarker analysis for the quantitative analysis of airborne microorganisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macnaughton, S.J.; Jenkins, T.L.; Cormier, M.R.
1997-08-01
There is an ever increasing concern regarding the presence of airborne microbial contaminants within indoor air environments. Exposure to such biocontaminants can give rise to a large number of different health effects, including infectious diseases, allergenic responses and respiratory problems. Biocontaminants typically found in indoor air environments include bacteria, fungi, algae, protozoa and dust mites. Mycotoxins, endotoxins, pollens and residues of organisms are also known to cause adverse health effects. A quantitative detection/identification technique independent of culturability that assays both culturable and non-culturable biomass, including endotoxin, is critical in defining risks from indoor air biocontamination. Traditionally, methods employed for the monitoring of microorganism numbers in indoor air environments involve classical culture-based techniques and/or direct microscopic counting. It has been repeatedly documented that viable microorganism counts only account for between 0.1% and 10% of the total community detectable by direct counting. The classic viable microbiologic approach does not provide accurate estimates of microbial fragments or other indoor air components that can act as antigens and induce or potentiate allergic responses. Although bioaerosol samplers are designed to damage the microbes as little as possible, microbial stress has been shown to result from air sampling, aerosolization and microbial collection. Higher collection efficiency results in greater cell damage, while less cell damage often results in lower collection efficiency. Filtration can collect particulates at almost 100% efficiency, but captured microorganisms may become dehydrated and damaged, resulting in non-culturability; however, the lipid biomarker assays described herein do not rely on cell culture. Lipids are components that are universally distributed throughout cells, providing a means of assessment independent of culturability.
78 FR 79418 - Agency Information Collection Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
...The Department of Energy (DOE), pursuant to the Paperwork Reduction Act of 1995, intends to extend for three years an information collection request (OMB Control Number 1910-1700) with the Office of Management and Budget (OMB). The proposed voluntary collection will request that an individual or an authorized designee provide pertinent information for easy record retrieval, allowing for increased efficiencies and quicker processing. Pertinent information includes the requester's name, shipping address, phone number, email address, previous work location, the action requested and any identifying data that will help locate the records (e.g., maiden name, occupational license number, time and place of employment). Comments are invited on: (a) whether the extended collection of information is necessary for the proper performance of the functions of the agency, including whether the information shall have practical utility; (b) the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility, and clarity of the information to be collected; and (d) ways to minimize the burden of the collection of information on respondents, including through the use of automated collection techniques or other forms of information technology.
Efficient Estimation of the Standardized Value
ERIC Educational Resources Information Center
Longford, Nicholas T.
2009-01-01
We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)
Zhang, Xing; Tone, Kaoru; Lu, Yingzhe
2018-04-01
This study assessed the change in efficiency and total factor productivity (TFP) of local public hospitals in Japan after the local public hospital reform launched in late 2007, which aimed to improve the financial capability and operational efficiency of hospitals. Secondary data were collected from the Ministry of Internal Affairs and Communications on 213 eligible medium-sized hospitals, each operating 100-400 beds, from FY2006 to FY2011. Improved slacks-based measure nonoriented data envelopment analysis models (Quasi-Max SBM nonoriented DEA models) were used to estimate dynamic efficiency scores and the Malmquist Index. The dynamic efficiency measure indicated an efficiency gain in the first several years of the reform, followed by a decrease. Malmquist Index analysis showed a significant decline in the TFP between 2006 and 2011. The financial improvement of medium-sized hospitals was not associated with enhancement of efficiency. Hospital efficiency was not significantly different among ownership structure and law-application system groups, but it was significantly affected by hospital location. The results indicate a need for region-tailored health care policies and for a more comprehensive reform to overcome the systemic constraints that might contribute to the decline of the TFP. © Health Research and Educational Trust.
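For readers unfamiliar with DEA, a single unit's input-oriented efficiency under constant returns to scale (the classical CCR model, simpler than the SBM variant used above) reduces to a small linear program. A hedged sketch with invented data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j):
    """Input-oriented CCR efficiency of unit j.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solve: min theta  s.t.  X'lam <= theta * x_j,  Y'lam >= y_j,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                          # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = -X[j]                 # X'lam - theta*x_j <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                 # -Y'lam <= -y_j
    b_ub = np.concatenate([np.zeros(m), -Y[j]])
    bounds = [(None, None)] + [(0, None)] * n
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

# Four toy hospitals, two inputs (beds, staff), one output (cases treated):
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(4)])
```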
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.; ...
2016-04-25
Improving our ability to estimate the parameters that control water and heat fluxes in the shallow subsurface is particularly important due to their strong control on recharge, evaporation and biogeochemical processes. The objectives of this study are to develop and test a new inversion scheme to simultaneously estimate subsurface hydrological, thermal and petrophysical parameters using hydrological, thermal and electrical resistivity tomography (ERT) data. The inversion scheme, which is based on a nonisothermal, multiphase hydrological model, provides the desired subsurface property estimates in high spatiotemporal resolution. A particularly novel aspect of the inversion scheme is the explicit incorporation of the dependence of the subsurface electrical resistivity on both moisture and temperature. The scheme was applied to synthetic case studies, as well as to real datasets that were autonomously collected at a biogeochemical field study site in Rifle, Colorado. At the Rifle site, the coupled hydrological-thermal-geophysical inversion approach predicted the matric potential, temperature and apparent resistivity well, with a Nash-Sutcliffe efficiency criterion greater than 0.92. Synthetic studies found that neglecting the subsurface temperature variability, and its effect on the electrical resistivity in the hydrogeophysical inversion, may lead to an incorrect estimation of the hydrological parameters. The approach is expected to be especially useful for the increasing number of studies that are taking advantage of autonomously collected ERT and soil measurements to explore complex terrestrial system dynamics.
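The Nash-Sutcliffe efficiency quoted above compares the model's squared error with the variance of the observations; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    predicting the observed mean."""
    obs, sim = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 12.0, 14.0, 13.0, 11.0])
sim = np.array([10.5, 11.8, 13.6, 13.2, 11.4])
print(nash_sutcliffe(obs, sim))   # ~0.94 for this illustrative fit
```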
Demuynck, Ruben; Rogge, Sven M J; Vanduyfhuys, Louis; Wieme, Jelle; Waroquier, Michel; Van Speybroeck, Veronique
2017-12-12
In order to reliably predict and understand the breathing behavior of highly flexible metal-organic frameworks from thermodynamic considerations, an accurate estimation of the free energy difference between their different metastable states is a prerequisite. Herein, a variety of free energy estimation methods are thoroughly tested for their ability to construct the free energy profile as a function of the unit cell volume of MIL-53(Al). The methods comprise free energy perturbation, thermodynamic integration, umbrella sampling, metadynamics, and variationally enhanced sampling. A series of molecular dynamics simulations have been performed in the frame of each of the five methods to describe structural transformations in flexible materials with the volume as the collective variable, which offers a unique opportunity to assess their computational efficiency. Subsequently, the most efficient method, umbrella sampling, is used to construct an accurate free energy profile at different temperatures for MIL-53(Al) from first principles at the PBE+D3(BJ) level of theory. This study yields insight into the importance of the different aspects such as entropy contributions and anharmonic contributions on the resulting free energy profile. As such, this thorough study provides unparalleled insight in the thermodynamics of the large structural deformations of flexible materials.
Kovalchik, Stephanie A; Cumberland, William G
2012-05-01
Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry (PIV) has continued to mature, it has developed into a robust and flexible velocimetry technique used by expert and non-expert users alike. While historical estimates of PIV accuracy have typically relied heavily on rules of thumb and analysis of idealized synthetic images, increased emphasis has recently been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often complicate the collection of "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work uses the results of PIV uncertainty quantification techniques to develop a framework in which PIV users can apply estimated confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
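One common way to turn per-vector uncertainty into a convergence criterion is to require that the confidence-interval half-width of the running mean fall below a target; a minimal sketch under the usual independent-samples assumption (correlated PIV fields would need an effective sample count):

```python
import numpy as np

def samples_needed(std_estimate, half_width_target, z=1.96):
    """Independent samples N so that the 95% CI half-width of the mean,
    z * std / sqrt(N), drops below the target half-width."""
    return int(np.ceil((z * std_estimate / half_width_target) ** 2))

# E.g. a per-sample velocity std of 0.12 m/s and a target half-width of
# 0.01 m/s on the mean velocity (illustrative numbers):
print(samples_needed(0.12, 0.01))   # -> 554 statistically independent fields
```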
Flux of Kilogram-Sized Meteoroids from Lunar Impact Monitoring
NASA Technical Reports Server (NTRS)
Suggs, Robert; Suggs, Ron; Cooke, William; McNamara, Heather; Diekmann, Anne; Moser, Danielle; Swift, Wesley
2008-01-01
Routine lunar impact monitoring has harvested over 110 impacts in 2 years of observations using 0.25, 0.36 and 0.5 m telescopes and low-light-level video cameras. The night side of the lunar surface provides a large collecting area for detecting these impacts and allows estimation of the flux of meteoroids down to a limiting luminous energy. In order to determine the limiting mass for these observations, models of the sporadic meteoroid environment were used to determine the velocity distribution and new measurements of luminous efficiency were made at the Ames Vertical Gun Range. The flux of meteoroids in this size range has implications for Near Earth Object populations as well as for estimating impact ejecta risk for future lunar missions.
NASA Astrophysics Data System (ADS)
Gao, Qian
For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into these categories of challenges. Rigorous proofs and analyses are provided for each contribution, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, the MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average-power ratio (PAPR) constraint, minimizing inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) by combining with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talk. Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average-power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 can affect the estimate of the source-to-relay (StR) channel in phase 2, rendering it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively.
The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression for throughput is first derived and then used to develop a scheduling algorithm that maximizes the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains from employing both MIMO and CDMA are observed.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless τ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
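The fixed-τ tau-leap baseline that the adaptive multi-level method improves upon fits in a few lines; a hedged sketch for a birth-death process, not the authors' implementation:

```python
import numpy as np

def tau_leap_birth_death(x0, birth, death, tau, t_end, rng):
    """Fixed-step tau-leaping for X -> X+1 (rate `birth`) and X -> X-1
    (rate `death * X`): fire Poisson-distributed reaction counts per step."""
    x, t = x0, 0.0
    while t < t_end:
        x += rng.poisson(birth * tau) - rng.poisson(death * x * tau)
        x = max(x, 0)                  # guard against leaping below zero
        t += tau
    return x

rng = np.random.default_rng(1)
samples = [tau_leap_birth_death(0, 10.0, 0.1, 0.1, 50.0, rng) for _ in range(1000)]
print(np.mean(samples))   # biased estimate of E[X(50)]; exact mean ~ 100
```

The multi-level estimator couples pairs of such paths simulated at coarse and fine τ, so the cheap, biased coarse paths do most of the work and the paired corrections remove the bias.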
Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification
NASA Astrophysics Data System (ADS)
Drees, L.; Roscher, R.
2017-05-01
This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the manually designed spectral library with respect to these criteria.
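Once the archetypes are fixed, the per-pixel fraction estimate amounts to nonnegative unmixing against the library; a hedged sketch with synthetic spectra, using nonnegative least squares rather than the paper's exact sparse-representation solver:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, library):
    """Nonnegative weights expressing a pixel spectrum as a combination of
    library spectra (columns); weights renormalized to class fractions."""
    w, _ = nnls(library, pixel)
    return w / w.sum() if w.sum() > 0 else w

rng = np.random.default_rng(2)
E = rng.random((100, 4))                   # 4 elementary spectra, 100 bands
true_fracs = np.array([0.6, 0.4, 0.0, 0.0])
pixel = E @ true_fracs + rng.normal(0, 0.005, 100)   # noisy mixed pixel
print(np.round(unmix(pixel, E), 2))        # ~ [0.6, 0.4, 0.0, 0.0]
```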
Estimation of invertebrate production from patterns of fish predation in western Lake Superior
Johnson, Timothy B.; Mason, Doran M.; Bronte, Charles R.; Kitchell, James F.
1998-01-01
We used bioenergetic models for lake herring Coregonus artedi, bloater Coregonus hoyi, and rainbow smelt Osmerus mordax to estimate consumption of zooplankton, Mysis, and Diporeia in western Lake Superior for selected years between 1978 and 1995. Total invertebrate biomass consumed yearly ranged from 2.5 to 38 g/m2, with nearly 40% consumed between August and October in all years. Copepod zooplankton represented the largest proportion of biomass collectively consumed by the three species (81%), although rainbow smelt consumed almost twice as much Mysis as zooplankton. Growth efficiency was highest for rainbow smelt (3.84-16.64%) and lower for the coregonids (1.91-12.26%). In the absence of quantitative secondary production values, we suggest our estimates of predatory demand provide a conservative range of the minimum invertebrate production in western Lake Superior during the past 20 years.
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-09-13
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in power plants, and knowing the orientation and depth of the initial cracks is essential for evaluating the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on a phased array ultrasonic transducer and an artificial neural network (ANN) is proposed to estimate both the depth and orientation of initial cracks in turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the presented method was efficient in crack estimation tasks.
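The regression step can be illustrated with a small radial-basis-function fit mapping extracted features to crack depth; a hedged sketch using SciPy's RBF interpolator with invented feature vectors, not the authors' trained network:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Invented training set: rows are feature vectors (e.g., wavelet-packet
# energy, fractal dimension, peak amplitude); targets are crack depths (mm).
features = np.array([[0.2, 1.3, 0.9], [0.5, 1.1, 1.4],
                     [0.8, 0.9, 2.0], [1.1, 0.7, 2.7]])
depth_mm = np.array([1.0, 2.0, 3.0, 4.0])

model = RBFInterpolator(features, depth_mm, kernel="gaussian", epsilon=1.0)
print(model(np.array([[0.65, 1.0, 1.7]])))  # depth estimate for a new echo
```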
Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A
2015-04-01
Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We propose a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.
Campoy, José Antonio; Lerigoleur-Balsemin, Emilie; Christmann, Hélène; Beauvieux, Rémi; Girollet, Nabil; Quero-García, José; Dirlewanger, Elisabeth; Barreneche, Teresa
2016-02-24
Depiction of the genetic diversity, linkage disequilibrium (LD) and population structure is essential for the efficient organization and exploitation of genetic resources. The objectives of this study were (i) to evaluate the genetic diversity and detect the patterns of LD, (ii) to estimate the levels of population structure and (iii) to identify a 'core collection' suitable for association genetic studies in sweet cherry. A total of 210 genotypes including modern cultivars and landraces from 16 countries were genotyped using the RosBREED cherry 6K SNP array v1. Two groups, mainly bred cultivars and landraces, respectively, were first detected using STRUCTURE software and confirmed by Principal Coordinate Analysis (PCoA). Further analyses identified nine subgroups using STRUCTURE and Discriminant Analysis of Principal Components (DAPC). Several subgroups correspond to different eco-geographic regions of landrace distribution. Linkage disequilibrium was evaluated, showing lower values than in peach, the reference Prunus species. A 'core collection' containing 156 accessions was selected using the maximum length subtree method. The present study constitutes the first population genetics analysis in cultivated sweet cherry using a medium-density SNP (single nucleotide polymorphism) marker array. We provide estimations of linkage disequilibrium and genetic structure, and the definition of a first INRA Sweet Cherry core collection useful for breeding programs, germplasm management and association genetics studies.
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation from geostationary meteorological satellite data provides good spatial and temporal resolution. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purposes shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offer higher POD and approximately the same RMSE and CC, with less data latency.
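The three verification indices are quick to compute from paired gauge/satellite series; a minimal sketch with a hypothetical 0.1 mm/h rain/no-rain threshold:

```python
import numpy as np

def verify(gauge, estimate, rain_threshold=0.1):
    """POD for rain/no-rain detection plus RMSE and linear correlation."""
    hits = np.sum((gauge >= rain_threshold) & (estimate >= rain_threshold))
    misses = np.sum((gauge >= rain_threshold) & (estimate < rain_threshold))
    pod = hits / (hits + misses)
    rmse = np.sqrt(np.mean((estimate - gauge) ** 2))
    cc = np.corrcoef(gauge, estimate)[0, 1]
    return pod, rmse, cc

gauge = np.array([0.0, 1.2, 3.5, 0.0, 0.8])   # mm/h, illustrative only
est = np.array([0.1, 0.9, 2.8, 0.0, 0.0])
print(verify(gauge, est))                      # (POD, RMSE, CC)
```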
Liu, Benmei; Yu, Mandi; Graubard, Barry I; Troiano, Richard P; Schenker, Nathaniel
2016-01-01
The Physical Activity Monitor (PAM) component was introduced into the 2003-2004 National Health and Nutrition Examination Survey (NHANES) to collect objective information on physical activity, including both movement intensity counts and ambulatory steps. Due to an error in the accelerometer device initialization process, the steps data were missing for all participants in several primary sampling units (PSUs), typically a single county or group of contiguous counties, even though these participants had intensity count data from their accelerometers. To avoid potential bias and loss of efficiency in estimation and inference involving the steps data, we considered methods to accurately impute the missing values for steps collected in the 2003-2004 NHANES. The objective was to devise an efficient imputation method that minimized model-based assumptions. We adopted a multiple imputation approach based on Additive Regression, Bootstrapping and Predictive mean matching (ARBP) methods. This method fits alternating conditional expectation (ace) models, which use an automated procedure to estimate optimal transformations for both the predictor and response variables. This paper describes the approaches used in this imputation and evaluates the methods by comparing the distributions of the original and the imputed data. A simulation study using the observed data is also conducted as part of the model diagnostics. Finally, some real-data analyses are performed to compare results before and after imputation. PMID:27488606
Polydimethylsiloxane-based optical waveguides for tetherless powering of floating microstimulators
NASA Astrophysics Data System (ADS)
Ersen, Ali; Sahin, Mesut
2017-05-01
Neural electrodes and associated electronics are powered either through percutaneous wires or transcutaneous powering schemes with energy harvesting devices implanted underneath the skin. For electrodes implanted in the spinal cord and the brain stem that experience large displacements, wireless powering may be an option to eliminate device failure caused by wire breakage and tethering forces on the electrodes. We tested the feasibility of using optically clear polydimethylsiloxane (PDMS) as a waveguide to collect light at a subcutaneous location and deliver it to deeper regions inside the body, thereby replacing brittle metal wires tethered to the electrodes with PDMS-based optical waveguides that can transmit energy without being attached to the targeted electrode. We determined the attenuation of light along the PDMS waveguides to be 0.36±0.03 dB/cm and the transcutaneous light collection efficiency of cylindrical waveguides to be 44%±11% by transmitting a laser beam through the thenar skin of human hands. We then implanted the waveguides in rats for a month to demonstrate the feasibility of optical transmission. The collection efficiency and longitudinal attenuation values reported here can help others design their own waveguides and estimate the waveguide cross-sectional area required to deliver sufficient power to a certain depth in tissue.
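With the measured attenuation and collection efficiency, the power delivered to depth follows from the dB-linear loss relation; a minimal sketch (input power and length are illustrative):

```python
def delivered_power_mw(input_mw, collection_eff=0.44,
                       atten_db_per_cm=0.36, length_cm=10.0):
    """Power at the waveguide tip: transcutaneous collection followed by
    exponential (dB-linear) propagation loss along the PDMS guide."""
    return input_mw * collection_eff * 10.0 ** (-atten_db_per_cm * length_cm / 10.0)

print(delivered_power_mw(100.0))   # ~19 mW reaches 10 cm deep from 100 mW
```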
DOE Office of Scientific and Technical Information (OSTI.GOV)
Southam, B.J.; Coe, E.L. Jr.
1995-12-01
Many relatively small electrostatic precipitators (ESPs) exist which collect fly ash at remarkably high efficiencies and have been tested consistently at correspondingly high migration velocities. But the majority of the world's coal supplies produce ashes which are collected at much lower migration velocities for a given efficiency and therefore require correspondingly large specific collection areas to achieve acceptable results. Early trials of flue gas conditioning (FGC) showed benefits in maximizing ESP performance and minimizing expense which justified continued experimentation. Trials of several dozen ways of doing it wrong eventually developed a set of reliable rules for doing it right. One result is that the use of sulfur trioxide (SO3) for adjustment of the resistivity of fly ash from low sulfur coal has been widely applied and has become an automatically accepted part of the option of burning low sulfur coal for compliance with the Clean Air Act of 1990 in the U.S.A. Currently, over 100,000 MW of generating capacity is using FGC, and it is estimated that approximately 45,800 MW will utilize coal-switching with FGC for Clean Air Act emission compliance. Guarantees that this equipment will be available to operate at least 98 percent of the time it is called upon are routinely fulfilled.
Increasing the efficiency of digitization workflows for herbarium specimens.
Tulig, Melissa; Tarnowsky, Nicole; Bevans, Michael; Anthony Kirchgessner; Thiers, Barbara M
2012-01-01
The New York Botanical Garden Herbarium has been databasing and imaging its estimated 7.3 million plant specimens for the past 17 years. Due to the size of the collection, we have been selectively digitizing fundable subsets of specimens, making successive passes through the herbarium with each new grant. With this strategy, the average rate for databasing complete records has been 10 specimens per hour. With 1.3 million specimens databased, this effort has taken about 130,000 hours of staff time. At this rate, to complete the herbarium and digitize the remaining 6 million specimens, another 600,000 hours would be needed. Given the current biodiversity and economic crises, there is neither the time nor money to complete the collection at this rate. Through a combination of grants over the last few years, The New York Botanical Garden has been testing new protocols and tactics for increasing the rate of digitization through combinations of data collaboration, field book digitization, partial data entry and imaging, and optical character recognition (OCR) of specimen images. With the launch of the National Science Foundation's new Advancing Digitization of Biological Collections program, we hope to move forward with larger, more efficient digitization projects, capturing data from larger portions of the herbarium at a fraction of the cost and time.
3D-measurement using a scanning electron microscope with four Everhart-Thornley detectors
NASA Astrophysics Data System (ADS)
Vynnyk, Taras; Scheuer, Renke; Reithmeier, Eduard
2011-06-01
Due to the emerging degree of miniaturization in microstructures, scanning electron microscopes (SEMs) have become important instruments in the quality assurance of chip manufacturing. With a two- or multiple-detector system for secondary electrons, an SEM can be used for the reconstruction of three-dimensional surface profiles. Although there are several projects dealing with the reconstruction of three-dimensional surfaces using electron microscopes with multiple Everhart-Thornley detectors (ETDs), there is no profound knowledge of the behaviour of emitted electrons. Hence, several values used in reconstruction algorithms such as the photometric method are only estimates; for instance, the exact collection efficiency of the ETD is still unknown. This paper deals with the simulation of electron trajectories in one-, two- and four-detector systems with varying working distances and varying grid currents. For each detector, the collection efficiency is determined by taking the working distance and grid current into account. Based on the gathered information, a new collection grid, which provides a homogeneous emission signal for each detector of a multiple-detector system, is developed. Finally, the results of the preceding tests are utilized for a reconstruction of a three-dimensional surface using the photometric method with a non-Lambertian intensity distribution.
Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M
2017-07-11
Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [Morishita et al., Phys. Rev. E 2012, 85, 066702] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In parallel dynamics, the MF is evaluated by a nonequilibrium path-ensemble using the multiple replicas based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency in estimating the free energy profiles using parallel dynamics.
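For orientation, the Crooks-Jarzynski relation lets an ensemble of nonequilibrium work values from the replicas stand in for an equilibrium average. A minimal Python sketch of the Jarzynski exponential work average (with a log-sum-exp guard) is below; the work values are hypothetical, and this illustrates the underlying identity, not the paper's LogPD implementation.

    import numpy as np

    def jarzynski_free_energy(work, beta=1.0):
        """Free-energy difference from nonequilibrium work samples:
        dF = -(1/beta) * ln <exp(-beta * W)>."""
        w = -beta * np.asarray(work, dtype=float)
        m = w.max()  # log-sum-exp guard against underflow
        return -(m + np.log(np.mean(np.exp(w - m)))) / beta

    # Hypothetical work values (in units of kT) from eight replicas
    print(jarzynski_free_energy([1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3, 1.0]))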
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals
Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew
2011-01-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
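As a side note, the reported radius containing 95% of activity is consistent with the common bivariate-normal home-range model, where that radius equals sigma * sqrt(chi²(0.95, 2 df)) ≈ 2.45 sigma. The sketch below back-calculates sigma from the 1.83 km figure and evaluates a generic half-normal detection function; the baseline detection probability p0 is hypothetical, and this is not the authors' WinBUGS model.

    import numpy as np
    from scipy.stats import chi2

    r95 = 1.83                                   # km, radius holding 95% of activity
    sigma = r95 / np.sqrt(chi2.ppf(0.95, df=2))  # home-range scale, ~0.75 km

    def p_detect(d, p0, sigma):
        """Half-normal detection: probability falls off with the distance d
        (km) between a lure stick and an animal's home-range center."""
        return p0 * np.exp(-d**2 / (2 * sigma**2))

    print(sigma, p_detect(1.0, p0=0.1, sigma=sigma))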
Urban solid waste generation and disposal in Mexico: a case study.
Buenrostro, O; Bocco, G; Bernache, G
2001-04-01
The adequate management of municipal solid waste in developing countries is difficult because of the scarcity of studies of its composition. This paper analyses the composition of urban solid waste (USW) in the city of Morelia, Michoacán, Mexico. Residential and non-residential waste sources were sampled, and structured interviews were conducted to evaluate the socioeconomic characteristics of the studied area. Also, to determine seasonal patterns of solid waste generation and the efficiency of the collection service, the solid waste deposited in the dumping ground was quantified. Our results show that the recorded amount of solid waste deposited in the municipal dumping ground is less than the estimated amount generated; for this reason, the former amount is not recommended as an unbiased indicator for planning public waste collection services. It is essential that dumping grounds are permanently monitored and that the incoming waste be weighed in order to have a more efficient record of USW deposited in the dumping ground per day; these data are fundamental for developing adequate management strategies.
Improving the water use efficiency of olive trees growing in water harvesting systems
NASA Astrophysics Data System (ADS)
Berliner, Pedro; Leake, Salomon; Carmi, Gennady; Agam, Nurit
2017-04-01
Water is a primary limiting factor for agricultural development in many arid and semi-arid regions in which runoff generation is a rather frequent event. If conveyed to dyke-surrounded plots and ponded, runoff water can thereafter be used for tree production. One of the most promising runoff collection configurations is that of micro-catchments, in which water is collected close to the area in which runoff was generated and stored in adjacent shallow pits. The objective of this work was to assess the effect of the geometry of the runoff water collection area (shallow pit or trench) on direct evaporative water losses and on the water use efficiency of olive trees grown in them. The study was conducted during the summers of 2013 and 2014. In this study, regular micro-catchments with 9 m² basins (3 x 3 m, 0.1 m deep) were compared with trenches one meter deep and one meter wide. Each configuration was replicated three times. One tree was planted in each shallow basin, and the distance between trees in the 12 m long trench was four meters. Access tubes for neutron probes were installed in the micro-catchments and trenches (four and seven, respectively) to depths of 2.5 m. Soil water content in the soil profile was monitored periodically throughout drying periods in between simulated runoff events. Transpiration of the trees was estimated from half-hourly sap flow measurements using a Granier system. Total transpiration fluxes were computed for time intervals corresponding to consecutive soil water measurements. During the first year, a large runoff event was simulated by applying four cubic meters to each plot at once; in the second year the same volume of water was split into four applications, simulating a series of small runoff events. In both geometries, trees received the same amount of water per tree. Evaporation from trenches and micro-catchments was estimated as the difference between evapotranspiration, obtained from the change in total soil water content between two consecutive measurements, and transpiration over the same interval estimated from sap flow measurements. In both years the evaporation from micro-catchments was significantly larger than that from trenches. The fractional loss to evaporation of the total applied water in the second year, for example, was 53% for micro-catchments and 22% for trenches. This indicates that the trench geometry reduces the amount of water lost to direct evaporation from the soil and is thus more efficient in utilizing harvested runoff water.
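The evaporation estimate described above is a water-balance residual: the drop in measured soil-water storage between two neutron-probe readings gives evapotranspiration, and subtracting the sap-flow transpiration total leaves soil evaporation. A minimal sketch with hypothetical storage and transpiration totals, assuming no rain or deep drainage within the interval:

    import numpy as np

    def evaporation_residual(storage_mm, transpiration_mm):
        """E = -(change in soil water storage) - T for each interval between
        consecutive soil-water readings (no rain or drainage assumed)."""
        d_storage = np.diff(np.asarray(storage_mm, dtype=float))
        return -d_storage - np.asarray(transpiration_mm, dtype=float)

    storage = [420.0, 401.0, 388.0, 379.0]  # total profile water at 4 readings
    transp = [12.0, 8.5, 6.0]               # sap-flow totals per interval
    print(evaporation_residual(storage, transp))  # -> [7.0, 4.5, 3.0] mm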
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
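For background, regression calibration in its generic linear form fits E[X | W] in the internal sub-study, where both the true covariate X and its error-prone measurement W are observed, then substitutes the calibrated covariate into the main-study outcome regression. The simulation sketch below illustrates the idea for classical ME with a single covariate; it is not the paper's NBRC or LRC interaction estimator.

    import numpy as np

    rng = np.random.default_rng(0)
    n_main, n_sub = 2000, 200

    # Classical ME: observed W = X + U, with U independent noise
    x_sub = rng.normal(size=n_sub)
    w_sub = x_sub + rng.normal(scale=0.5, size=n_sub)      # sub-study: X and W seen
    x_main = rng.normal(size=n_main)
    w_main = x_main + rng.normal(scale=0.5, size=n_main)   # main study: only W seen
    y_main = 1.0 + 2.0 * x_main + rng.normal(size=n_main)

    # Step 1: calibration model E[X | W] fitted in the sub-study
    slope, intercept = np.polyfit(w_sub, x_sub, 1)
    x_hat = slope * w_main + intercept

    # Step 2: outcome regression on the calibrated covariate
    naive = np.polyfit(w_main, y_main, 1)[0]   # attenuated, ~1.6
    rc = np.polyfit(x_hat, y_main, 1)[0]       # approximately 2.0
    print(naive, rc)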
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyse and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers, and the optimality of the solutions of the different algorithms is measured in detail. Travelers' behavior is analyzed to work toward a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy outperforms the other evaluated approaches in terms of convergence tendency and optimality of results, and that it can serve as an efficient approach to estimating transit O-D matrices.
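Among the benchmarked methods, the K-means aggregation step can be sketched directly: stops are clustered into zones so that stop-to-stop trips can be rolled up into a tractable zone-level O-D matrix, with the sum of (squared) intra-cluster distances as the criterion mentioned above. The coordinates and zone count below are hypothetical stand-ins for the Haaglanden stop data.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical stop coordinates (projected metres); in the study these
    # would come from the AFC smart-card records
    stops = np.random.default_rng(1).uniform(0, 10_000, size=(300, 2))

    km = KMeans(n_clusters=25, n_init=10, random_state=1).fit(stops)
    zone_of_stop = km.labels_   # zone index for each stop
    print("sum of squared intra-cluster distances:", km.inertia_)
    # Stop-level O-D trips can now be aggregated into a 25 x 25 zone matrix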
Galtseva, I V; Davydova, Yu O; Gaponova, T V; Kapranov, N M; Kuzmina, L A; Troitskaya, V V; Gribanova, E O; Kravchenko, S K; Mangasarova, Ya K; Zvonkov, E E; Parovichnikova, E N; Mendeleeva, L P; Savchenko, V G
To identify a parameter predicting a collection of at least 2·10⁶ CD34+ hematopoietic stem cells (HSC)/kg body weight per leukapheresis (LA) procedure. The investigation included 189 patients with hematological malignancies and 3 HSC donors, who underwent mobilization of stem cells with their subsequent collection by LA. Absolute numbers of peripheral blood leukocytes and CD34+ cells before a LA procedure, as well as the number of CD34+ cells/kg body weight (BW) in the LA product stored on the same day, were determined for each patient (donor). There was no correlation between the number of leukocytes and that of stored CD34+ cells/kg BW. There was a close correlation between the count of peripheral blood CD34+ cells prior to LA and that of collected CD34+ cells calculated with reference to kg BW. The optimal absolute blood CD34+ cell count was estimated at 20 per µL, at which a LA procedure makes it possible to collect 2·10⁶ or more CD34+ cells/kg BW.
NASA Astrophysics Data System (ADS)
Scozzari, Andrea; Raco, Brunella; Battaglini, Raffaele
2016-04-01
This work presents the results of more than ten years of observations, performed on a regular basis, at a municipal solid waste disposal site located in Italy. Observational data are generated by the combination of non-invasive techniques, involving the direct measurement of biogas release to the atmosphere and thermal infrared imaging. In fact, part of the generated biogas tends to escape from the landfill surface even when collecting systems are installed and properly working. Thus, methodologies for estimating the behaviour of a landfill system by means of direct and/or indirect measurement systems have been developed over the last decades. It is nowadays known that these infrastructures produce more than 20% of the total anthropogenic methane released to the atmosphere, justifying the need for their systematic and efficient monitoring. During the last 12 years, observational data regarding a solid waste disposal site located in Tuscany (Italy) have been collected on a regular basis. The collected datasets consist of direct measurements of gas flux with the accumulation chamber method, combined with the detection of thermal anomalies by infrared radiometry. This work discusses the evolution of the estimated performance of the landfill system and its trends, as well as the benefits and critical aspects of such a relatively long-term monitoring activity.
Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R
2016-03-07
Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.
Genetic Diversity and Population Structure of Cowpea (Vigna unguiculata L. Walp).
Xiong, Haizheng; Shi, Ainong; Mou, Beiquan; Qin, Jun; Motes, Dennis; Lu, Weiguo; Ma, Jianbing; Weng, Yuejin; Yang, Wei; Wu, Dianxing
2016-01-01
The genetic diversity of cowpea was analyzed, and the population structure was estimated, in a diverse set of 768 cultivated cowpea genotypes from the USDA GRIN cowpea collection, originally collected from 56 countries. Genotyping by sequencing was used to discover single nucleotide polymorphisms (SNPs) in cowpea, and the identified SNP alleles were used to estimate the level of genetic diversity, population structure, and phylogenetic relationships. The aim of this study was to detect the gene pool structure of cowpea and to determine its relationship between different regions and countries. Based on the model-based ancestry analysis, the phylogenetic tree, and the principal component analysis, three well-differentiated genetic populations were postulated from the 768 worldwide cowpea genotypes. According to the phylogenetic analyses between each individual, region, and country, accessions may be traced back to the two candidate areas of origin (West and East Africa) to infer the migration and domestication history of cowpea during its dispersal and development. To our knowledge, this is the first report of the analysis of the genetic variation and relationships among globally cultivated cowpea genotypes. The results will help curators, researchers, and breeders to understand, utilize, conserve, and manage the collection for a more efficient contribution to international cowpea research.
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James C.
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
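The detectability analysis above amounts to a binary (detected/missed) regression of scat detection on scat and environmental covariates. A sketch of such a logistic detection model follows; the coefficients are purely illustrative, not the fitted values from this study.

    import numpy as np

    def p_detect(pellet_size_cm, n_pellets, cover_pct, recent_rain):
        """Logistic model: detection improves with larger/more pellets and
        declines with ground cover and recent rain (illustrative betas)."""
        eta = (-1.0 + 0.8 * pellet_size_cm + 0.05 * n_pellets
               - 0.03 * cover_pct - 0.7 * recent_rain)
        return 1.0 / (1.0 + np.exp(-eta))

    print(p_detect(3.0, 20, 10, 0))  # large fresh scat, sparse cover: ~0.89
    print(p_detect(1.0, 5, 80, 1))   # small scat, dense cover, rain: ~0.05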
Smith, Kirk P.
2002-01-01
Best management practices (BMPs) near highways are designed to reduce the amount of suspended sediment and associated constituents, including debris and litter, discharged from the roadway surface. The effectiveness of a deep-sumped hooded catch basin, three 2-chambered 1,500-gallon oil-grit separators, and mechanized street sweeping in reducing sediment and associated constituents was examined along the Southeast Expressway (Interstate Route 93) in Boston, Massachusetts. Repeated observations of the volume and distribution of bottom material in the oil-grit separators, including data on particle-size distributions, were compared to data from bottom material deposited during the initial 3 years of operation. The performance of catch-basin hoods and the oil-grit separators in reducing floating debris was assessed by examining the quantity of material retained by each structural BMP compared to the quantity of material retained by and discharged from the oil-grit separators, which received flow from the catch basins. The ability of each structural BMP to reduce suspended-sediment loads was assessed by examining (a) the difference in the concentrations of suspended sediment in samples collected simultaneously from the inlet and outlet of each BMP, and (b) the difference between inlet loads and outlet loads during a 14-month monitoring period for the catch basin and one separator, and a 10-month monitoring period for the second separator. The third separator was not monitored continuously; instead, samples were collected from it during three visits separated in time by several months. Suspended-sediment loads for the entire study area were estimated on the basis of the long-term average annual precipitation and the estimated inlet and outlet loads of two of the separators. The effects of mechanized street sweeping were assessed by evaluating the differences between suspended-sediment loads before and after street sweeping, relative to storm precipitation totals, and by comparing the particle-size distributions of sediment samples collected from the sweepers to bottom-material samples collected from the structural BMPs. A mass-balance calculation was used to quantify the accuracy of the estimated sediment-removal efficiency for each structural BMP. The ability of each structural BMP to reduce concentrations of inorganic and organic constituents was assessed by determining the differences in concentrations between the inlets and outlets of the BMPs for four storms. The inlet flows of the separators were sampled during five storms for analysis of fecal-indicator bacteria. The particle-size distribution of bottom material found in the first and second chambers of the separators was similar for all three separators. Consistent collection of floatable debris at the outlet of one separator during 12 storms suggests that floatable debris were not indefinitely retained. Concentrations of suspended sediment in discrete samples of runoff collected from the inlets of the two separators ranged from 8.5 to 7,110 mg/L. Concentrations of suspended sediment in discrete samples of runoff collected from the outlets of the separators ranged from 5 to 2,170 mg/L. The 14-month sediment-removal efficiency was 35 percent for one separator, and 28 percent for the second separator. In the combined-treatment system in this study, where catch basins provided primary suspended-sediment treatment, the separators reduced the mass of the suspended sediment from the pavement by about an additional 18 percent.
The concentrations of suspended sediment in discrete samples of runoff collected from the inlet of the catch basin ranged from 32 to 13,600 mg/L. Concentrations of suspended sediment in discrete samples of runoff collected from the outlet of the catch basin ranged from 25.7 to 7,030 mg/L. The sediment-removal efficiency for individual storms during the 14-month monitoring period for the deep-sumped hooded catch basin was 39 percent. The concentrations of 29 in
An evaluation of multipass electrofishing for estimating the abundance of stream-dwelling salmonids
James T. Peterson; Russell F. Thurow; John W. Guzevich
2004-01-01
Failure to estimate capture efficiency, defined as the probability of capturing individual fish, can introduce a systematic error or bias into estimates of fish abundance. We evaluated the efficacy of multipass electrofishing removal methods for estimating fish abundance by comparing estimates of capture efficiency from multipass removal estimates to capture...
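For reference, the simplest multipass removal estimator is the two-pass (Moran-Zippin) form, which assumes capture efficiency is constant across passes; when that assumption fails, the abundance estimate is biased, which is the concern such evaluations address. A sketch:

    def two_pass_removal(c1, c2):
        """Two-pass removal estimates: capture efficiency p = (c1 - c2) / c1
        and abundance N = c1**2 / (c1 - c2); requires a declining catch."""
        if c1 <= c2:
            raise ValueError("catch must decline between passes")
        return c1**2 / (c1 - c2), (c1 - c2) / c1

    # Example: 60 fish removed on pass 1, 20 on pass 2
    print(two_pass_removal(60, 20))  # N ~ 90, p ~ 0.67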
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
TRIADS: A phase-resolving model for nonlinear shoaling of directional wave spectra
NASA Astrophysics Data System (ADS)
Sheremet, Alex; Davis, Justin R.; Tian, Miao; Hanson, Jeffrey L.; Hathaway, Kent K.
2016-03-01
We investigate the performance of TRIADS, a numerical implementation of a phase-resolving, nonlinear, spectral model describing directional wave evolution in intermediate and shallow water. TRIADS simulations of shoaling waves generated by Hurricane Bill (2009) are compared to directional spectral estimates based on observations collected at the Field Research Facility of the US Army Corps of Engineers at Duck, NC. Both the ability of the model to capture the processes essential to the nonlinear wave evolution and the efficiency of the numerical implementation are analyzed and discussed.
Efficiency of wipe sampling on hard surfaces for pesticides and PCB residues in dust.
Cettier, Joane; Bayle, Marie-Laure; Béranger, Rémi; Billoir, Elise; Nuckols, John R; Combourieu, Bruno; Fervers, Béatrice
2015-02-01
Pesticides and polychlorinated biphenyls (PCBs) are commonly found in house dust, which has been described as a valuable matrix to assess indoor pesticide and PCB contamination. The aim of this study was to assess the efficiency and precision of cellulose wipes for collecting 48 pesticides, eight PCBs, and one synergist at environmental concentrations. First, the efficiency and repeatability of wipe collection were determined for pesticide and PCB residues that were directly spiked onto three types of household floors (tile, laminate, and hardwood). Second, synthetic dust was used to assess the capacity of the wipe to collect dust. Third, we assessed the efficiency and repeatability of wipe collection of pesticide and PCB residues that were spiked onto synthetic dust and then applied to tile. In the first experiment, the overall collection efficiency was highest on tile (38%) and laminate (40%) compared to hardwood (34%), p<0.001. The second experiment confirmed that cellulose wipes can efficiently collect dust (82% collection efficiency). The third experiment showed that the overall collection efficiency was higher in the presence of dust (72% vs. 38% without dust, p<0.001). Furthermore, the mean repeatability also improved when compounds were spiked onto dust (<30% for the majority of compounds). To our knowledge, this is the first study to assess the efficiency of wipes as a sampling method using a large number of compounds at environmental concentrations and synthetic dust. Cellulose wipes appear to be efficient for sampling the pesticides and PCBs that adsorb onto dust on smooth, hard surfaces. Copyright © 2014 Elsevier B.V. All rights reserved.
Neck-band retention for Canada geese in the Mississippi (USA) flyway
Samuel, M.D.; Weiss, N.T.; Rusch, D.H.; Craven, S.R.; Trost, R.E.; Caswell, F.D.
1990-01-01
We used capture, harvest, and observation histories of Canada geese (Branta canadensis) banded in the Mississippi flyway, 1974-88, to examine the problem of neck-band retention. Methods for the analysis of survival data were used to estimate rates of neck-band retention and to evaluate factors associated with neck-band loss. Sex, age of bird at banding, rivet use, and neck-band type significantly influenced neck-band retention. For most of the resulting cohorts (e.g., sex, age, rivet, and neck-band type categories), neck-band retention rates decreased through time. We caution against using small samples or data collected during short-term studies to determine retention rates. We suggest that observation data be used in neck-band retention studies to increase the efficiency of estimating retention time.
Design and analysis issues for economic analysis alongside clinical trials.
Marshall, Deborah A; Hux, Margaret
2009-07-01
Clinical trials can offer a valuable and efficient opportunity to collect the health resource use and outcomes data needed for economic evaluation. However, economic and clinical studies differ fundamentally in the question they seek to answer. The design and analysis of trial-based cost-effectiveness studies therefore require special considerations, which are reviewed in this article. Traditional randomized controlled trials, using an experimental design with a controlled protocol, are designed to measure safety and efficacy for product registration. Cost-effectiveness analysis seeks to measure effectiveness in the context of routine clinical practice, and requires collection of health care resource use to allow estimation of cost over an equal timeframe for each treatment alternative. In assessing the suitability of a trial for economic data collection, the comparator treatment and other protocol factors need to reflect current clinical practice, and the trial follow-up must be sufficiently long to capture important costs and effects. The broadest available population and a measure of effectiveness reflecting important benefits for patients are preferred for economic analyses. Special analytical issues include dealing with missing and censored cost data, assessing uncertainty of the incremental cost-effectiveness ratio, and accounting for the underlying heterogeneity in patient subgroups. Careful consideration also needs to be given to data from multinational studies, since practice patterns can differ across countries. Although clinical trials can be an efficient opportunity to collect data for economic evaluation, careful consideration of the suitability of the study design and appropriate analytical methods must be applied to obtain rigorous results.
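One standard way to assess uncertainty of the incremental cost-effectiveness ratio, noted above, is a nonparametric bootstrap over patients, resampling per-patient (cost, effect) pairs within each arm so the cost-effect correlation is preserved. A sketch with hypothetical data:

    import numpy as np

    rng = np.random.default_rng(42)

    def bootstrap_icer(treat, control, n_boot=5000):
        """95% percentile interval for ICER = (mean dCost) / (mean dEffect).
        treat/control: (n, 2) arrays of per-patient (cost, effect)."""
        out = np.empty(n_boot)
        for b in range(n_boot):
            t = treat[rng.integers(0, len(treat), len(treat))]
            c = control[rng.integers(0, len(control), len(control))]
            out[b] = ((t[:, 0].mean() - c[:, 0].mean())
                      / (t[:, 1].mean() - c[:, 1].mean()))
        return np.percentile(out, [2.5, 97.5])

    # Hypothetical (cost in $, QALYs) for 150 patients per arm
    treat = rng.normal([12000.0, 1.9], [3000.0, 0.4], size=(150, 2))
    control = rng.normal([9000.0, 1.6], [3000.0, 0.4], size=(150, 2))
    print(bootstrap_icer(treat, control))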
Efficiency assessment of using satellite data for crop area estimation in Ukraine
NASA Astrophysics Data System (ADS)
Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga
2014-06-01
The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, for the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and for a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of a study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km². The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye), combined with a field survey on a stratified sample of square segments, for crop area estimation in Ukraine is assessed. The main criteria used for the efficiency analysis are: (i) relative efficiency, which shows by how many times the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by how many times the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached the cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at the oblast level.
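The a-posteriori correction of ground-survey estimates with satellite classifications is commonly a regression estimator, whose relative efficiency against the plain survey mean is roughly 1/(1 - rho²), with rho the image-ground correlation. A sketch under that textbook setup, with simulated segment data rather than the study's figures:

    import numpy as np

    def regression_estimator(y_ground, x_image, x_mean_region):
        """Regression estimator of mean crop share: sampled segments give
        ground shares y and classified-image shares x; x_mean_region is the
        image mean over the whole region."""
        b = np.polyfit(x_image, y_ground, 1)[0]
        y_reg = y_ground.mean() + b * (x_mean_region - x_image.mean())
        rho = np.corrcoef(x_image, y_ground)[0, 1]
        return y_reg, 1.0 / (1.0 - rho**2)   # estimate, relative efficiency

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 100)               # image-classified wheat share
    y = 0.8 * x + rng.normal(0, 0.08, 100)   # ground-observed wheat share
    print(regression_estimator(y, x, x_mean_region=0.45))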
Subramanian, Sujha; Tangka, Florence; Edwards, Patrick; Hoover, Sonja; Cole-Beebe, Maggie
2016-12-01
This article reports on the methods and framework we have developed to guide economic evaluation of noncommunicable disease registries. We developed a cost data collection instrument, the Centers for Disease Control and Prevention's (CDC's) International Registry Costing Tool (IntRegCosting Tool), based on established economics methods. We performed in-depth case studies, site visit interviews, and pilot testing in 11 registries from multiple countries, including India, Kenya, Uganda, Colombia, and Barbados, to assess the overall quality of the data collected from cancer and cardiovascular registries. Overall, the registries were able to use the IntRegCosting Tool to assign operating expenditures to specific activities. We verified that registries were able to provide accurate estimation of labor costs, which are the largest expenditure incurred by registries. We also identified several factors that can influence the cost of registry operations, including the size of the geographic area served, the data collection approach, the local cost of living, the presence of rural areas, the volume of cases, the extent of consolidation of records to cases, and the continuity of funding. These internal and external registry factors reveal that a single estimate for the cost of registry operations is not feasible; costs will vary on the basis of factors that may be beyond the control of the registries. Some factors, such as the data collection approach, can be modified to improve the efficiency of registry operations. These findings will inform both future economic data collection using a web-based tool and cost and cost-effectiveness analyses of registry operations in low- and middle-income countries (LMICs) and other locations with similar characteristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jiang, Xiao; Pan, Maohua; Hering, Susanne V; Lednicky, John A; Wu, Chang-Yu; Fan, Z Hugh
2016-10-01
The spread of virus-induced infectious diseases through airborne routes of transmission is a global concern for economic and medical reasons. To study virus transmission, it is essential to have an effective aerosol collector such as the growth tube collector (GTC) system, which utilizes water-based condensation for collecting virus-containing aerosols. In this work, we characterized the GTC system using bacteriophage MS2 as a surrogate for a small RNA virus. We used RNA extraction and reverse transcription-polymerase chain reaction (RT-PCR) to study the total virus collection efficiency of the GTC system. Plaque assays were also used to enumerate viable viruses collected by the GTC system compared to those collected by a commercially available apparatus, the SKC® Biosampler. The plaque assay counts enumerate viable viruses, whereas RT-PCR provides a total virus count, including those viruses inactivated during collection. The effects of relative humidity (RH) and other conditions on collection efficiency were also investigated. Our results suggest that the GTC has a collection efficiency for viable viruses between 0.24 and 1.8% and a total virus collection efficiency between 18.3 and 79.0%, which is 1-2 orders of magnitude higher than that of the SKC® Biosampler. Moreover, higher RH significantly increases both the viable and total collection efficiency of the GTC, while its effect on the collection efficiency of the SKC® Biosampler is not significant. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Detilleux, J
2017-06-08
In most infectious diseases, among which is bovine mastitis, promptness of the recruitment of inflammatory cells (mainly neutrophils) into inflamed tissues has been shown to be of prime importance in the resolution of the infection. Although this information should aid in designing efficient control strategies, it has never been quantified in field studies. Here, a system of ordinary differential equations is proposed that describes the dynamic process of the inflammatory response to mammary pathogens. The system was tested, by principal differential analysis, on 1947 test-day somatic cell counts collected from 756 infected cows, from 50 days before to 50 days after the diagnosis of clinical mastitis. Cell counts were log-transformed before estimating recruitment rates. The daily rate of cellular recruitment during health was estimated at 0.052 (st. err. = 0.005). During disease, an additional recruitment rate was estimated at 0.004 (st. err. = 0.001) per day and per bacterium. These estimates are in agreement with analogous measurements of in vitro neutrophil functions. The results suggest the method is adequate for estimating one of the components of innate resistance to mammary pathogens at the individual level and in field studies. Extension of the method to estimate components of innate tolerance, and the limits of the study, are discussed.
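A minimal sketch of this kind of model is below: bacteria grow and are killed by recruited cells, while cells arrive at the healthy baseline rate plus the bacteria-driven rate reported above. The bacterial growth and killing parameters are hypothetical, and this is not the paper's exact system of equations.

    import numpy as np
    from scipy.integrate import solve_ivp

    R_H, R_D = 0.052, 0.004  # reported rates (per day; per day per bacterium)

    def model(t, y, kb=0.3, kc=0.1):
        """B: bacterial load; C: inflammatory cells. Bacteria grow at kb and
        are killed at kc per cell; cells are recruited at R_H + R_D * B."""
        B, C = y
        return [kb * B - kc * B * C, R_H + R_D * B]

    sol = solve_ivp(model, (0, 50), [1.0, 1.0], t_eval=np.linspace(0, 50, 6))
    print(sol.y[1])  # cell trajectory across the clinical-mastitis window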
Efficiency and economic benefits of skipjack pole and line (huhate) in central Moluccas, Indonesia
NASA Astrophysics Data System (ADS)
Siahainenia, Stevanus M.; Hiariey, Johanis; Baskoro, Mulyono S.; Waeleruny, Wellem
2017-10-01
Excess fishing capacity is a crucial problem in marine capture fisheries, and this phenomenon needs to be investigated with regard to the sustainability and development of the fishery. This research aimed at analyzing the technical efficiency (TE) and computing financial aspects of the skipjack pole and line fishery. Primary data were collected from the owners of fishing units of different gross tonnage (GT) classes, while secondary data were gathered from official publications related to this research. A data envelopment analysis (DEA) approach was applied to estimate technical efficiency, whereas a selected financial analysis was utilized to calculate the economic benefits of the skipjack pole and line business. The fishing units of 26-30 GT provided a higher TE value and also achieved larger economic benefits than the other fishing units. The empirical results indicate that a skipjack pole and line unit of 26-30 GT is a good fishing gear for business development in central Moluccas.
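The technical-efficiency score of each unit in input-oriented DEA is the optimum of a small linear program: shrink the unit's inputs by a factor theta while a nonnegative combination of peer units still matches its outputs. The sketch below uses the standard CCR envelopment form with hypothetical fishing-unit data; the study's exact DEA specification (orientation, returns to scale) may differ.

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y, j):
        """Input-oriented CCR score of unit j. X: (n, m) inputs, Y: (n, s)
        outputs. Variables are [theta, lambda_1..lambda_n]."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                  # minimise theta
        A_in = np.c_[-X[j].reshape(m, 1), X.T]       # sum lam*x_i <= theta*x_j
        A_out = np.c_[np.zeros((s, 1)), -Y.T]        # sum lam*y_i >= y_j
        A = np.vstack([A_in, A_out])
        b = np.r_[np.zeros(m), -Y[j]]
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
        return res.fun

    # Hypothetical units: inputs (crew, fuel), output (catch in tonnes)
    X = np.array([[10, 50], [12, 60], [8, 45], [15, 80]], dtype=float)
    Y = np.array([[100], [110], [95], [120]], dtype=float)
    print([round(dea_ccr_input(X, Y, j), 3) for j in range(len(X))])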
Karafin, Matthew S; Graminske, Sharon; Erickson, Paulette; Walters, Mark C; Scott, Edward P; Carter, Scott; Padmanabhan, Anand
2014-10-01
The Spectra Optia apheresis system is a newer centrifugation-based device that in comparison with the COBE Spectra includes features that enhance procedure automation and usability. In this FDA-approved three-center two-arm observational study we characterized the performance of the Spectra Optia for collection of MNCs and CD34+ cells from nonmobilized and granulocyte-colony stimulating factor (G-CSF) mobilized healthy donors, respectively. There were a total of 15 evaluable subjects in each arm. Key performance indicators included collection efficiency of MNCs/CD34+ cells, product purity and cellular viability. For nonmobilized donors, median MNC collection efficiency, platelet collection efficiency, product hematocrit and granulocyte contamination were 57%, 12%, 4%, and 1.7%, respectively. For mobilized donors, median MNC collection efficiency, CD34+ cell collection efficiency, platelet collection efficiency, product hematocrit and granulocyte contamination were 61%, 77%, 19%, 4%, and 15%, respectively. Average WBC viability in the mobilized products was 99%. There was one severe (grade 3) adverse event related to citrate toxicity. This study demonstrates that the Spectra Optia can be used for safe and efficacious collection of MNCs, and results obtained are in line with expectations on collection efficiency and product characteristics. Adverse events were limited to those that are well documented in the stem-cell mobilization and leukapheresis process. As of the time of this writing, FDA 510(k) approval for use of the Spectra Optia device for MNC collection was achieved in the US based partly on the results of this study. © 2014 Wiley Periodicals, Inc.
Task-based design of a synthetic-collimator SPECT system used for small animal imaging.
Lin, Alexander; Kupinski, Matthew A; Peterson, Todd E; Shokouhi, Sepideh; Johnson, Lindsay C
2018-05-07
In traditional multipinhole SPECT systems, image multiplexing - the overlapping of pinhole projection images - may occur on the detector, which can inhibit quality image reconstructions due to photon-origin uncertainty. One proposed system to mitigate the effects of multiplexing is the synthetic-collimator SPECT system. In this system, two detectors, a silicon detector and a germanium detector, are placed at different distances behind the multipinhole aperture, allowing image detection to occur at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. The unwanted effects of multiplexing are reduced by utilizing the additional data collected from the front silicon detector. However, determining optimal system configurations for a given imaging task requires efficient parsing of the complex parameter space, to understand how pinhole spacings and the two detector distances influence system performance. In our simulation studies, we use the ensemble mean-squared error of the Wiener estimator (EMSE_W) as the figure of merit to determine optimum system parameters for the task of estimating the uptake of a ¹²³I-labeled radiotracer in three different regions of a computer-generated mouse brain phantom. The segmented phantom map is constructed by using data from the MRM NeAt database and allows for a reduction in the dimensionality of the system matrix, which improves the computational efficiency of scanning the system's parameter space. To contextualize our results, the Wiener estimator is also compared against a region-of-interest estimator using maximum-likelihood reconstructed data. Our results show that the synthetic-collimator SPECT system outperforms traditional multipinhole SPECT systems in this estimation task. We also find that image multiplexing plays an important role in the system design of the synthetic-collimator SPECT system, with optimal germanium detector distances occurring at maxima in the derivative of the percent multiplexing function. Furthermore, we report that improved task performance can be achieved by using an adaptive system design in which the germanium detector distance may vary with projection angle. Finally, in our comparative study, we find that the Wiener estimator outperforms the conventional region-of-interest estimator. Our work demonstrates how this optimization method has the potential to quickly and efficiently explore vast parameter spaces, providing insight into the behavior of competing factors that are otherwise very difficult to calculate and study using other existing means. © 2018 American Association of Physicists in Medicine.
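The Wiener (linear minimum mean-squared-error) estimator behind the EMSE_W figure of merit has the closed form theta_hat = K_tg K_g^{-1} (g - g_bar) + theta_bar, so the figure of merit can be evaluated directly from an ensemble of simulated objects and data. A toy sketch with a random stand-in system matrix, not the SPECT forward model:

    import numpy as np

    rng = np.random.default_rng(7)
    n_samp, n_theta, n_g = 5000, 3, 50   # ensemble size, uptakes, detector bins

    H = rng.uniform(0, 1, size=(n_g, n_theta))            # stand-in system matrix
    theta = rng.normal(5.0, 1.0, size=(n_samp, n_theta))  # regional uptakes
    g = theta @ H.T + rng.normal(0, 0.5, size=(n_samp, n_g))

    # Wiener estimator built from ensemble covariances
    K_g = np.cov(g.T)
    K_tg = (theta - theta.mean(0)).T @ (g - g.mean(0)) / (n_samp - 1)
    W = K_tg @ np.linalg.inv(K_g)

    theta_hat = (g - g.mean(0)) @ W.T + theta.mean(0)
    print("EMSE_W:", np.mean(np.sum((theta_hat - theta) ** 2, axis=1)))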
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Dan; Ricciuto, Daniel; Walker, Anthony
2017-02-22
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. DREAM is a multi-chain method that uses a differential evolution technique for chain movement, allowing it to be efficiently applied to high-dimensional problems, and it can reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes using a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM identified only one mode. Calibration with DREAM resulted in a better model fit and predictive performance compared to AM. DREAM provides means for a good exploration of the posterior distributions of model parameters; it reduces the risk of false convergence to a local optimum and potentially improves the predictive performance of the calibrated model.
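At the heart of DREAM-style samplers is the differential-evolution chain move: the proposal for chain i is x_i + gamma (x_a - x_b) + small noise, built from two other randomly chosen chains, with the standard jump scale gamma = 2.38 / sqrt(2d). The sketch below is a plain DE-MC sampler on a stand-in two-dimensional Gaussian posterior; it omits DREAM's subspace sampling, crossover, and outlier-chain handling.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(x):
        """Stand-in posterior: a correlated 2-D Gaussian."""
        return -0.5 * (x[0] ** 2 + (x[1] - x[0]) ** 2)

    n_chains, n_dim, n_iter = 8, 2, 5000
    X = rng.normal(size=(n_chains, n_dim))
    gamma = 2.38 / np.sqrt(2 * n_dim)  # standard DE jump scale

    for _ in range(n_iter):
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            a, b = rng.choice(others, size=2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0, 1e-6, n_dim)
            if np.log(rng.uniform()) < log_post(prop) - log_post(X[i]):
                X[i] = prop

    print(X.mean(axis=0))  # posterior-mean estimate pooled across chains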
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
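The headline numbers follow from simple probability: if a fraction psi of sites is occupied and unoccupied sites still yield a detection with probability p_fp per survey, the naive (uncorrected) occupancy estimate is inflated. A back-of-envelope check with illustrative numbers, assuming perfect detection at occupied sites:

    # psi: true occupancy of a rare species; p_fp: per-survey false-positive rate
    psi, p_fp = 0.06, 0.03
    naive = psi + (1 - psi) * p_fp           # apparent occupancy, single survey
    print(naive)                             # 0.0882
    print((naive - psi) / psi)               # ~0.47, i.e. ~47% overestimation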
Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-01-01
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
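A sketch of the regression-and-validation loop the abstract describes, using scikit-learn as a stand-in implementation: a small multilayer perceptron maps the four predictors (backscatter, NDVI, thermal temperature, incidence angle) to SMC and is scored by leave-one-out cross validation. The data, network size, and hyperparameters below are placeholders, not the study's.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    X = rng.random((40, 4))                  # sigma0, NDVI, thermal T, incidence angle
    y = 0.4 * X[:, 0] - 0.2 * X[:, 3] + 0.05 * rng.standard_normal(40)  # fake SMC

    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    y_hat = cross_val_predict(ann, X, y, cv=LeaveOneOut())
    print("LOO R^2:", r2_score(y, y_hat))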
Biology, population structure, and estimated forage requirements of lake trout in Lake Michigan
Eck, Gary W.; Wells, LaRue
1983-01-01
Data collected during successive years (1971-79) of sampling lake trout (Salvelinus namaycush) in Lake Michigan were used to develop statistics on lake trout growth, maturity, and mortality, and to quantify seasonal lake trout food and food availability. These statistics were then combined with data on lake trout year-class strengths and age-specific food conversion efficiencies to compute production and forage fish consumption by lake trout in Lake Michigan during the 1979 growing season (i.e., 15 May-1 December). An estimated standing stock of 1,486 metric tons (t) at the beginning of the growing season produced an estimated 1,129 t of fish flesh during the period. The lake trout consumed an estimated 3,037 t of forage fish, to which alewives (Alosa pseudoharengus) contributed about 71%, rainbow smelt (Osmerus mordax) 18%, and slimy sculpins (Cottus cognatus) 11%. Seasonal changes in bathymetric distributions of lake trout with respect to those of forage fish of a suitable size for prey were major determinants of the size and species compositions of fish in the seasonal diet of lake trout.
USDA-ARS?s Scientific Manuscript database
Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...
NASA Astrophysics Data System (ADS)
Kentel, E.; Dogulu, N.
2015-12-01
In Turkey, the experience and data required for hydrological model setup are limited and very often unavailable. Moreover, there are many ungauged catchments with planned projects for the utilization of water resources, including development of the existing hydropower potential. This makes runoff prediction at poorly gauged and ungauged locations, where small hydropower plants, reservoirs, etc. are planned, an increasingly significant challenge and concern in the country. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and the selection of installed capacities. In this study, we test and compare the performances of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) FDC based on Map Correlation Method (MCM) flow estimates. MCM is a recently proposed method (Archfield and Vogel, 2010) which uses geospatial information to estimate flow; flow measurements at stream gauging stations near the ungauged location are its only data requirement, which makes MCM very attractive for flow estimation in Turkey. (ii) The Adaptive Neuro-Fuzzy Inference System (ANFIS), a data-driven method used to relate the FDC to a number of variables representing catchment and climate characteristics; its ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is based on two different measures: the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE) value. Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.
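The two comparison measures are standard and easy to state precisely; minimal reference implementations:

    import numpy as np

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((sim - obs) ** 2))

    def nse(obs, sim):
        # Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0 mean
        # the simulation is worse than simply predicting the observed mean.
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    print(rmse([1, 2, 3], [1.1, 1.9, 3.2]), nse([1, 2, 3], [1.1, 1.9, 3.2]))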
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator's close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove that the estimator attains the maximum finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that, compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Postigo, Cristina; López de Alda, María José; Barceló, Damià
2010-01-01
Drugs of abuse and their metabolites have been recently recognized as environmental emerging organic contaminants. Assessment of their concentration in different environmental compartments is essential to evaluate their potential ecotoxicological effects. It also constitutes an indirect tool to estimate drug abuse by the population at the community level. The present work reports for the first time the occurrence of drugs of abuse and metabolite residues along the Ebro River basin (NE Spain) and also evaluates the contribution of sewage treatment plant (STP) effluents to the presence of these chemicals in natural surface waters. Concentrations measured in influent sewage waters were used to back-calculate drug usage at the community level in the main urban areas of the investigated river basin. The most ubiquitous and abundant compounds in the studied aqueous matrices were cocaine, benzoylecgonine, ephedrine and ecstasy. Lysergic compounds, heroin, its metabolite 6-monoacetyl morphine, and Delta(9)-tetrahydrocannabinol were the substances least frequently detected. Overall, total levels of the studied illicit drugs and metabolites observed in surface water (in the low ng/L range) were one and two orders of magnitude lower than those determined in effluent (in the ng/L range) and influent sewage water (µg/L range), respectively. The investigated STPs showed overall removal efficiencies between 45 and 95%. Some compounds, such as cocaine and amphetamine, were very efficiently eliminated (>90%), whereas others, such as ecstasy, methamphetamine, nor-LSD, and THC-COOH, were occasionally not eliminated at all. Drug consumption estimates identified cocaine as the most abused drug, followed by cannabis, amphetamine, heroin, ecstasy and methamphetamine, which slightly differs from national official estimates (cannabis, followed by cocaine, ecstasy, amphetamine and heroin). Extrapolation of the consumption data obtained for the studied area to Spain points to a total annual consumption of drugs of abuse of the order of 36 tonnes, which would translate into 1100 million Euros in the black market.
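The back-calculation from influent concentrations to community consumption is, in its simplest form, a load calculation scaled by pharmacokinetics. A sketch with assumed values (typical of the sewage-epidemiology literature, not quoted from this paper) for the cocaine/benzoylecgonine pair:

    def consumption_g_per_day(conc_ug_per_L, flow_m3_per_day,
                              excreted_fraction, parent_over_metabolite_mass):
        # Daily metabolite load entering the STP, in grams:
        load_g = conc_ug_per_L * flow_m3_per_day * 1e3 / 1e6
        # Scale up by the excreted fraction and convert metabolite mass
        # back to parent-drug mass.
        return load_g / excreted_fraction * parent_over_metabolite_mass

    # Assumed: 1.2 ug/L benzoylecgonine, 50,000 m3/day influent, ~45% of a
    # cocaine dose excreted as BE, molar mass ratio 303.4/289.3.
    print(consumption_g_per_day(1.2, 50_000, 0.45, 303.4 / 289.3))  # ~140 g/day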
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It should be emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
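Structurally, the approach fits the model at several quantiles and combines the coefficient estimates; the paper derives the optimal combination weights, which the sketch below replaces with equal weights purely for illustration (statsmodels' QuantReg serves as the quantile-regression engine):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = rng.standard_normal(500)
    y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=500)   # heavy-tailed errors
    X = sm.add_constant(x)

    quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
    betas = np.array([sm.QuantReg(y, X).fit(q=q).params for q in quantiles])
    print("equal-weight combined slope:", betas[:, 1].mean())   # near 2.0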
SEEA SOUTHEAST CONSORTIUM FINAL TECHNICAL REPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Block, Timothy; Ball, Kia; Fournier, Ashley
In 2010 the Southeast Energy Efficiency Alliance (SEEA) received a $20 million Energy Efficiency and Conservation Block Grant (EECBG) under the U.S. Department of Energy's Better Building Neighborhood Program (BBNP). This grant, funded by the American Recovery and Reinvestment Act, also included sub-grantees in 13 communities across the Southeast, known as the Southeast Consortium. The objective of this project was to establish a framework for energy efficiency retrofit programs to create models for replication across the Southeast and beyond. To achieve this goal, SEEA and its project partners focused on establishing infrastructure to develop and sustain the energy efficiency market in specific localities across the Southeast. Activities included implementing minimum training standards and credentials for marketplace suppliers, educating and engaging homeowners on the benefits of energy efficiency through strategic marketing and outreach, and addressing real or perceived financial barriers to investments in whole-home energy efficiency through a variety of financing mechanisms. The anticipated outcome of these activities would be best practice models for program design, marketing, financing, data collection and evaluation, as well as increased market demand for energy efficiency retrofits and products. The Southeast Consortium's programmatic impacts, along with the impacts of the other BBNP grantees, would further the progress towards the overall goal of energy efficiency market transformation. As the primary grantee, SEEA served as the overall program administrator and provided common resources to the 13 Southeast Consortium sub-grantees, including contracted services for contractor training, quality assurance testing, data collection, reporting and compliance. Sub-grantee programs were located in cities across eight states (Alabama, Florida, Georgia, Louisiana, North Carolina, South Carolina, Tennessee and Virginia) and the U.S. Virgin Islands. Each sub-grantee program was designed to address the unique local conditions and population of its community. There was great diversity in program design, types of financing and incentives, building stock characteristics, climate and partnerships. From 2010 through 2013, SEEA and its sub-grantee programs focused on determining best practices in program administration, workforce development, marketing and consumer education, financing, and utility partnerships. One of the common themes among programs that were most successful in each of these areas was strong partnerships and collaborations with people or organizations in the community. In many instances engaged partners proved to be the key to addressing barriers such as access to financing, workforce development opportunities and access to utility bill data. The most challenging barrier proved to be the act of building a market for energy efficiency where none previously existed. With limited time and resources, educating homeowners on the value of investing in energy efficiency while engaging electric and gas utilities served as a significant barrier for several programs. While there is still much work to be done to continue to transform the energy efficiency market in the Southeast, the programmatic activities led by SEEA and its sub-grantees resulted in 8,180 energy audits and 5,155 energy efficiency retrofits across the Southeast. In total, the Southeast Consortium saved an estimated 27,915,655.93 kWh and generated an estimated $2,291,965.90 in annual energy cost savings in the region.
Han, Buhm; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
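The MVN idea behind methods like SLIDE can be stated in a few lines: simulate the vector of test statistics from a multivariate normal with the markers' correlation structure and read off the distribution of the maximum. The sketch below uses one small full correlation matrix; SLIDE's contribution is doing this scalably with a sliding window and correcting the tail behavior, both of which are omitted here.

    import numpy as np

    def mvn_corrected_p(z_obs, R, n_samples=100_000, seed=4):
        # P(max_i |Z_i| >= z_obs) under Z ~ N(0, R): the multiple-testing-
        # corrected p-value for the most significant of the correlated tests.
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(np.zeros(len(R)), R, size=n_samples)
        return np.mean(np.abs(z).max(axis=1) >= abs(z_obs))

    idx = np.arange(10)
    R = 0.8 ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-like LD pattern
    print(mvn_corrected_p(3.0, R))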
2016-01-01
We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore it does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters. PMID:27959559
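A minimal 1D picture of the extended-system idea: the bias never touches the physical coordinate directly; it acts on a fictitious variable lambda tied to the collective variable by a stiff spring, and the adaptive force is the running mean of the spring force, binned along lambda. The potential, parameters, and integrator below are all invented for illustration (overdamped Langevin on a double well); CZAR's unbiased free-energy estimate is not shown.

    import numpy as np

    rng = np.random.default_rng(5)
    k, dt, gamma, kT = 50.0, 1e-3, 5.0, 1.0
    x = lam = 0.0
    bins = np.linspace(-2.0, 2.0, 41)
    f_sum = np.zeros(len(bins) - 1)
    count = np.zeros(len(bins) - 1)

    def dU(x):                                  # toy double well: U = x^4 - 2x^2
        return 4 * x ** 3 - 4 * x

    for _ in range(50_000):
        spring = k * (x - lam)                  # force the spring exerts on lambda
        b = int(np.clip(np.digitize(lam, bins) - 1, 0, len(count) - 1))
        f_sum[b] += spring
        count[b] += 1
        bias = -f_sum[b] / count[b]             # cancel the mean force on lambda
        eta = rng.standard_normal(2) * np.sqrt(2 * kT * dt / gamma)
        x += dt / gamma * (-dU(x) - spring) + eta[0]
        lam += dt / gamma * (spring + bias) + eta[1]
    # -f_sum/count per bin approximates dA/dlambda on the mollified surface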
A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of Campaign BioSAR
NASA Astrophysics Data System (ADS)
Dehnavi, S.; Maghsoudi, Y.
2015-12-01
Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper aims at the evaluation of a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method is unrivaled, as it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the usage of a generalized probability density function based on the nth power of a cosine-squared function, which is characterized by two parameters, makes this method useful for different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.
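Once the ESPRIT step has separated the ground and canopy interferometric phases, the simplest RVOG-style height estimate is a phase difference divided by the vertical wavenumber of the interferometric baseline. A sketch with invented numbers (operational processing adds corrections, e.g. for volume decorrelation):

    import numpy as np

    def canopy_height(phi_canopy, phi_ground, kappa_z):
        # kappa_z: vertical interferometric wavenumber (rad/m), set by the
        # imaging geometry; phases in radians.
        dphi = np.angle(np.exp(1j * (phi_canopy - phi_ground)))  # wrap to (-pi, pi]
        return dphi / kappa_z

    print(canopy_height(1.1, 0.3, 0.05))   # 16 m for these made-up values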
Effect of work of adhesion on deep bed filtration process
NASA Astrophysics Data System (ADS)
Przekop, Rafał; Jackiewicz, Anna; Woźniak, Michał; Gradoń, Leon
2016-06-01
Collection of aerosol particles in the particular steps of the technology of their production, and purification of the air at the workplace and in the atmospheric environment, requires an efficient method of separating particulate matter from the carrier gas. Many papers published in the last few years consider the deposition of particles on fibrous collectors; most of them assume that collisions between particles and the collector surface are 100% effective. In this work we study the influence of particle and fiber properties on the deposition efficiency. For the purpose of this work, the lattice-Boltzmann model describes the fluid dynamics, while the solid particle motion is modeled by Brownian dynamics. The interactions between particles and the surface are modelled using an energy-balanced oscillatory model. The work of adhesion was estimated using Atomic Force Microscopy.
Condenser design for AMTEC power conversion
NASA Technical Reports Server (NTRS)
Crowley, Christopher J.
1991-01-01
The condenser and the electrodes are the two elements of an alkali metal thermal-to-electric conversion (AMTEC) cell which most greatly affect the energy conversion performance. A condenser is described which accomplishes two critical functions in an AMTEC cell: management of the fluid under microgravity conditions and optimization of conversion efficiency. The first function is achieved via the use of a controlled surface shape, along with drainage grooves and arteries to collect the fluid. Capillary forces manage the fluid in microgravity and dominate hydrostatic effects on the ground so the device is ground-testable. The second function is achieved via a smooth film of highly reflective liquid sodium on the condensing surface, resulting in minimization of parasitic heat losses due to radiation heat transfer. Power conversion efficiencies of 25 percent to 30 percent are estimated with this condenser using present technology for the electrodes.
Platelet collection efficiencies of three different platelet-rich plasma preparation systems.
Aydin, Fatma; Pancar Yuksel, Esra; Albayrak, Davut
2015-06-01
Different systems have been used for the preparation of platelet-rich plasma (PRP), but the platelet collection efficiencies of these systems are not clear. The aim was to evaluate the platelet collection efficiencies of three different PRP preparation systems. Blood samples were obtained from the same 16 volunteers for each system; the samples were centrifuged and PRP was prepared by the three systems. The ratio of the total number of platelets in the PRP to the total number of platelets in the venous blood sample of the patient, expressed as a percentage, was termed the platelet collection efficiency and calculated for each system. Mean platelet collection efficiencies were 66.6 (min: 56.9, max: 76.9) for the top and bottom bag system, 58.3 (min: 27.3, max: 102.8) for the system using a citrated tube, and 50.8 (min: 27.2, max: 73) for the system using a tube with Ficoll and a cell extraction kit. A statistically significant difference was found only between the platelet collection efficiencies of the system using the tube with Ficoll and cell extraction kit and the top and bottom bag system (p = 0.002). All three systems can be used for PRP preparation, but the top and bottom bag system offers a slight advantage over the system using Ficoll and a cell extraction kit regarding platelet collection efficiency.
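The efficiency definition in the abstract translates directly into a calculation; a small sketch with hypothetical counts and volumes:

    def platelet_collection_efficiency(prp_count_per_uL, prp_volume_mL,
                                       blood_count_per_uL, blood_volume_mL):
        # Total platelets in the PRP over total platelets in the donor
        # sample, expressed as a percentage (1 mL = 1000 uL).
        total_prp = prp_count_per_uL * prp_volume_mL * 1000
        total_blood = blood_count_per_uL * blood_volume_mL * 1000
        return 100.0 * total_prp / total_blood

    # Hypothetical: 5 mL PRP at 900e3/uL prepared from 30 mL blood at 250e3/uL
    print(platelet_collection_efficiency(900e3, 5, 250e3, 30))   # 60.0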
Sasaki, Tomonari; Tahira, Tomoko; Suzuki, Akari; Higasa, Koichiro; Kukita, Yoji; Baba, Shingo; Hayashi, Kenshi
2001-01-01
We show that single-nucleotide polymorphisms (SNPs) of moderate to high heterozygosity (minor allele frequencies >10%) can be efficiently detected, and their allele frequencies accurately estimated, by pooling the DNA samples and applying a capillary-based SSCP analysis. In this method, alleles are separated into peaks, and their frequencies can be reliably and accurately quantified from their peak heights (SD <1.8%). We found that as many as 40% of publicly available SNPs that were analyzed by this method have widely differing allele frequency distributions among groups of different ethnicity (parents of Centre d'Etude Polymorphisme Humaine families vs. Japanese individuals). These results demonstrate the effectiveness of the present pooling method in the reevaluation of candidate SNPs that have been collected by examination of limited numbers of individuals. The method should also serve as a robust quantitative technique for studies in which a precise estimate of SNP allele frequencies is essential—for example, in linkage disequilibrium analysis. PMID:11083945
Application of MUSLE for the prediction of phosphorus losses.
Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi
2010-01-01
Soil erosion in forestlands affects not only land productivity but also the water body downstream. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its versions. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate, continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model could reproduce the observed values well. The results of the study confirmed the efficient application of the revised MUSLE in estimating storm-wise P losses in the study area, with a high level of agreement beyond 93% and an acceptable estimation error of some 35%.
Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli
2015-01-01
In this paper, a new algorithm to improve the accuracy of estimating the diameter at breast height (DBH) of tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic mean method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, this proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
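The hybrid step (an algebraic initial circle fit refined by Levenberg-Marquardt) can be sketched with scipy's least_squares; here the algebraic stage is replaced by a rough initial guess, and the arc and noise are synthetic:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_circle(xy, p0):
        # p0 = (cx, cy, r) initial guess, e.g. from an algebraic fit;
        # residual = radial distance of each laser return to the circle.
        def residuals(p):
            cx, cy, r = p
            return np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r
        return least_squares(residuals, p0, method="lm").x   # LM refinement

    rng = np.random.default_rng(6)
    theta = np.linspace(0.2, 1.2, 30)        # partial arc, as a 2D scan sees a trunk
    xy = np.c_[3 + 0.15 * np.cos(theta), 4 + 0.15 * np.sin(theta)]
    xy += rng.normal(0, 0.002, xy.shape)
    cx, cy, r = fit_circle(xy, p0=(3.1, 4.1, 0.1))
    print("estimated DBH:", 2 * r, "m")      # true value: 0.30 m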
The determination of total burn surface area: How much difference?
Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P
2013-09-01
Burn depth and burn size are crucial determinants for assessing patients suffering from burns. Therefore, a correct evaluation of these factors is optimal for adapting the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable differences among clinicians. This work investigated the accuracy among experts based on conventional surface estimation methods (e.g. "Rule of Palm", "Rule of Nines" or "Lund-Browder Chart"). The estimation results were compared to a computer-based evaluation method. Survey data was collected during one national and one international burn conference. The poll confirmed deviations of burn depth/size estimates of up to 62% in relation to the mean value of all participants. In comparison to the computer-based method, overestimation of up to 161% was found. We suggest introducing improved methods for burn depth/size assessment in clinical routine in order to efficiently allocate and distribute the available resources for practicing burn care. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
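For reference, the conventional "Rule of Nines" that the surveyed clinicians applied assigns fixed percentages of body surface to regions, which is exactly why it is fast but coarse. A toy calculator using the standard adult values (the region names are illustrative):

    # Standard adult Rule of Nines percentages (they sum to 100).
    RULE_OF_NINES = {
        "head": 9, "left_arm": 9, "right_arm": 9,
        "left_leg": 18, "right_leg": 18,
        "anterior_trunk": 18, "posterior_trunk": 18, "perineum": 1,
    }

    def tbsa_percent(burned_fraction_by_region):
        # %TBSA = sum over regions of (region share) x (fraction burned)
        return sum(RULE_OF_NINES[r] * f for r, f in burned_fraction_by_region.items())

    print(tbsa_percent({"right_arm": 1.0, "anterior_trunk": 0.5}))   # 18.0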
Improving Catastrophe Modeling for Business Interruption Insurance Needs.
Rose, Adam; Huyck, Charles K
2016-10-01
While catastrophe (CAT) modeling of property damage is well developed, modeling of business interruption (BI) lags far behind. One reason is the crude nature of functional relationships in CAT models that translate property damage into BI. Another is that estimating BI losses is more complicated because it depends greatly on public and private decisions during recovery with respect to resilience tactics that dampen losses by using remaining resources more efficiently to maintain business function and to recover more quickly. This article proposes a framework for improving hazard loss estimation for BI insurance needs. Improved data collection that allows for analysis at the level of individual facilities within a company can improve matching the facilities with the effectiveness of individual forms of resilience, such as accessing inventories, relocating operations, and accelerating repair, and can therefore improve estimation accuracy. We then illustrate the difference this can make in a case study example of losses from a hurricane. © 2016 Society for Risk Analysis.
Improved Large-Volume Sampler for the Collection of Bacterial Cells from Aerosol
White, L. A.; Hadley, D. J.; Davids, D. E.; Naylor, R.
1975-01-01
A modified large-volume sampler was demonstrated to be an efficient device for the collection of mono-disperse aerosols of rhodamine B and poly-disperse aerosols of bacterial cells. Absolute efficiency for collection of rhodamine B varied from 100% with 5-μm particles to about 70% with 0.5-μm particles. The sampler concentrated the particles from 950 liters of air into a flow of between 1 and 2 ml of collecting fluid per min. Spores of Bacillus subtilis var. niger were collected at an efficiency of about 82% compared to the collection in the standard AGI-30 sampler. In the most desirable collecting fluids tested, aerosolized cells of Serratia marcescens, Escherichia coli, and Aerobacter aerogenes were collected at comparative efficiencies of approximately 90, 80, and 90%, respectively. The modified sampler has practical application in the study of aerosol transmission of respiratory pathogens. PMID:803820
NASA Astrophysics Data System (ADS)
Gentry, D.; Whinnery, J. T.; Ly, V. T.; Travers, S. V.; Sagaga, J.; Dahlgren, R. P.
2017-12-01
Microorganisms play a major role in our biosphere due to their ability to alter water, carbon and other geochemical cycles. Fog and low-level cloud water can play a major role in dispersing and supporting such microbial diversity. An ideal region to gather these microorganisms for characterization is the central coast of California, where dense fog is common. Fog captured from an unmanned aerial vehicle (UAV) at different altitudes will be analyzed to better understand the nature of microorganisms in the lower atmosphere and their potential geochemical impacts. The capture design consists of a square-meter hydrophobic mesh that hangs from a carbon fiber rod attached to a UAV. The DJI M600, a hexacopter, will be utilized as the transport for the payload, the passive impactor collection unit (PICU). The M600 will hover in a fog bank at altitudes between 10 and 100 m collecting water samples via the PICU. A computational fluid dynamics (CFD) model will optimize the PICU's size, shape and placement for maximum capture efficiency and to avoid contamination from the UAV downwash. On board, there will also be an altitude, temperature and barometric pressure sensor whose output is logged to an SD card. A scale model of the PICU has been tested with several different types of hydrophobic meshes in a fog chamber at 90-95% humidity; polypropylene was found to capture the fog droplets most efficiently, at a rate of 0.0042 g/cm²/hour. If the amount collected is proportional to the area of mesh, the estimated amount of water collected under optimal fog and flight conditions by the impactor is 21.3 g. If successful, this work will help identify the organisms living in the lower atmosphere as well as their potential geochemical impacts.
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
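The "conventional method" both of these abstracts build on is a sequential likelihood-ratio (CUSUM-type) test with known pre- and post-damage distributions; the papers' contribution is estimating the post-damage distribution online, which the sketch below omits. Gaussian features and the threshold are illustrative:

    import numpy as np

    def cusum_detect(x, mu0, mu1, sigma, h=8.0):
        # Sequential test for a mean shift mu0 -> mu1 in Gaussian features;
        # returns the first index where the statistic crosses threshold h,
        # or -1 if no damage is declared.
        s = 0.0
        for t, xt in enumerate(x):
            llr = ((xt - mu0) ** 2 - (xt - mu1) ** 2) / (2 * sigma ** 2)
            s = max(0.0, s + llr)
            if s > h:
                return t
        return -1

    rng = np.random.default_rng(7)
    x = np.r_[rng.normal(0, 1, 200), rng.normal(1.0, 1, 200)]  # damage at t=200
    print(cusum_detect(x, mu0=0.0, mu1=1.0, sigma=1.0))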
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives: The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods: We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results: OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions: OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
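The agreement statistic used for validation is Cohen's kappa between OSCAR's automatic SOC codes and the expert's manual codes; a one-liner with scikit-learn and made-up 1-digit codes:

    from sklearn.metrics import cohen_kappa_score

    oscar  = ["2", "2", "5", "3", "9", "2", "5", "1"]   # tool-assigned codes
    expert = ["2", "2", "5", "4", "9", "2", "3", "1"]   # manually assigned codes
    print(cohen_kappa_score(oscar, expert))             # chance-corrected agreement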
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
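The learning machinery behind a Bag of Bonds-type model is typically kernel ridge regression on fixed-length descriptor vectors; the sketch below shows only that regression step, with random stand-in descriptors (constructing true Bag of Bonds vectors from molecular geometries is omitted):

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(8)
    X_train, X_test = rng.random((200, 30)), rng.random((20, 30))
    y_train = X_train @ rng.random(30)       # stand-in atomization energies

    # A Laplacian kernel is a common choice for this family of descriptors.
    model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1.0 / 30)
    model.fit(X_train, y_train)
    print(model.predict(X_test)[:3])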
Achieving energy efficiency during collective communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao
2012-09-13
Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (called DVFS) and CPU clock modulation (called throttling) are often used to reduce the power consumption of the compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution. For example, an application's communication phases may be good candidates for applying DVFS and CPU throttling without incurring a considerable performance loss. Collective operations are often considered indivisible, although little attention has been devoted to the energy saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy saving strategies on a per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.
A nanowaveguide platform for collective atom-light interaction
NASA Astrophysics Data System (ADS)
Meng, Y.; Lee, J.; Dagenais, M.; Rolston, S. L.
2015-08-01
We propose a nanowaveguide platform for collective atom-light interaction through evanescent field coupling. We have developed a 1 cm-long silicon nitride nanowaveguide that can use evanescent fields to trap and probe an ensemble of 87Rb atoms. The waveguide has a sub-micrometer square mode area and was designed with tapers for high fiber-to-waveguide coupling efficiencies at near-infrared wavelengths (750 nm to 1100 nm). Inverse tapers in the platform adiabatically transfer a weakly guided mode of fiber-coupled light into a strongly guided mode with an evanescent field to trap atoms, and then back to a weakly guided mode at the other end of the waveguide. The coupling loss is -1 dB per facet (~80% coupling efficiency) at 760 nm and 1064 nm, as estimated by a propagation loss measurement with waveguides of different lengths. The proposed platform has good thermal conductance and can guide high optical powers for trapping atoms in ultra-high vacuum. As an intermediate step, we have observed thermal atom absorption of the evanescent component of a nanowaveguide and have demonstrated a U-wire mirror magneto-optical trap that can transfer atoms to the proximity of the surface.
Blynn, Emily; Ahmed, Saifuddin; Gibson, Dustin; Pariyo, George; Hyder, Adnan A
2017-01-01
In low- and middle-income countries (LMICs), historically, household surveys have been carried out by face-to-face interviews to collect survey data related to risk factors for noncommunicable diseases. The proliferation of mobile phone ownership and the access it provides in these countries offers a new opportunity to remotely conduct surveys with increased efficiency and reduced cost. However, the near-ubiquitous ownership of phones, high population mobility, and low cost require a re-examination of statistical recommendations for mobile phone surveys (MPS), especially when surveys are automated. As with landline surveys, random digit dialing remains the most appropriate approach to develop an ideal survey-sampling frame. Once the survey is complete, poststratification weights are generally applied to reduce estimate bias and to adjust for selectivity due to mobile ownership. Since weights increase design effects and reduce sampling efficiency, we introduce the concept of automated active strata monitoring to improve representativeness of the sample distribution to that of the source population. Although some statistical challenges remain, MPS represent a promising emerging means for population-level data collection in LMICs. PMID:28476726
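Poststratification weighting, the adjustment the abstract refers to, divides each stratum's population share by its share of the realized sample; a small pandas sketch with invented strata and shares:

    import pandas as pd

    population_share = {"urban_f": 0.22, "urban_m": 0.20,
                        "rural_f": 0.30, "rural_m": 0.28}
    sample = pd.Series(["urban_f"] * 50 + ["urban_m"] * 60 +
                       ["rural_f"] * 20 + ["rural_m"] * 20)

    sample_share = sample.value_counts(normalize=True)
    weights = sample.map(lambda s: population_share[s] / sample_share[s])
    print(weights.groupby(sample).first())   # one weight per stratum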
SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events.
Cuttone, Andrea; Bækgaard, Per; Sekara, Vedran; Jonsson, Håkan; Larsen, Jakob Eg; Lehmann, Sune
2017-01-01
We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of abundance of rare and patchily distributed species and is particularly appropriate when sampling in all patches is impossible, but a global estimate of abundance is required.
Simulation of Fluid Flow and Collection Efficiency for an SEA Multi-element Probe
NASA Technical Reports Server (NTRS)
Rigby, David L.; Struk, Peter M.; Bidwell, Colin
2014-01-01
Numerical simulations of fluid flow and collection efficiency for a Science Engineering Associates (SEA) multi-element probe are presented. Simulation of the flow field was produced using the Glenn-HT Navier-Stokes solver. Three-dimensional unsteady results were produced and then time-averaged for the collection efficiency results. Three grid densities were investigated to enable an assessment of grid dependence. Collection efficiencies were generated for three spherical particle sizes, 100, 20, and 5 micron in diameter, using the codes LEWICE3D and LEWICE2D. The free stream Mach number was 0.27, representing a velocity of approximately 86 m/s. It was observed that a reduction in velocity of about 15-20% occurred as the flow entered the shroud of the probe. Collection efficiency results indicate a reduction in collection efficiency as particle size is reduced. The reduction with particle size is expected; however, the results tended to be lower than previous results generated for isolated two-dimensional elements. The deviation from the two-dimensional results is more pronounced for the smaller particles and is likely due to the effect of the protective shroud.
NASA Astrophysics Data System (ADS)
Zhang, Linna; Ding, Hongyan; Lin, Ling; Wang, Yimin; Guo, Xin
2017-12-01
A fiber is usually used as a probe in visible and near-infrared diffuse spectra measurement. However, the use of different fiber probes in the same measurement may cause data mismatch problems. Our group has studied the influence of the parameters of the fiber probe, including the aperture angle, on the diffuse spectrum using a modified Monte Carlo model. To eliminate the influence of the aperture angle, we proposed a fitted equation for a correction coefficient to correct for its variation over the practical range. However, we did not discuss the limitations of this method. In this work, we explored the collection efficiency in different optical environments with a Monte Carlo simulation method and identified the conditions, weakly absorbing and strongly scattering media, under which the proposed collection efficiency is suitable. Furthermore, we attempted to explain the stability of the collection efficiency under these conditions. This work establishes suitable conditions for the collection efficiency. The use of the collection efficiency can help reduce the influence of different measurement systems and is also helpful for model translation.
Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure
NASA Astrophysics Data System (ADS)
Liu, Chun; Li, Zhengning; Zhou, Yuan
2016-06-01
We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is nontrivial, since most underground infrastructure has poor lighting conditions and featureless structure. To overcome these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since it divides motion estimation and 3D mapping into separate threads, eliminating the data-association problem that is a significant issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm which performs well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage, on which the parallel system was evaluated. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm; off-line processing reduced the position error to 2 cm. Evaluation on this dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.
Dickens, Jade M.; Forbes, Brandon T.; Cobean, Dylan S.; Tadayon, Saeid
2011-01-01
An indirect method for estimating irrigation withdrawals is presented and results are compared to the 2005 USGS-reported irrigation withdrawals for selected States. This method is meant to demonstrate a way to check data reported or received from a third party if metered data are unavailable. Of the 11 States where this method was applied, 8 had estimated irrigation withdrawals within 15 percent of what was reported in the 2005 water-use compilation, and 3 had estimates that differed by more than 20 percent from what was reported in 2005. Recommendations for improving estimates of irrigated acreage and irrigation withdrawals are also presented in this report. Conveyance losses and irrigation-system efficiencies should be considered in order to achieve a more accurate representation of irrigation withdrawals. Better documentation of data sources and methods can help lead to more consistent information in future irrigation water-use compilations. Finally, a summary of the data sources and methods used to estimate irrigated acreage and irrigation withdrawals for the 2000 and 2005 compilations for each WSC is presented in appendix 1.
Extended reactance domain algorithms for DoA estimation with ESPAR antennas
NASA Astrophysics Data System (ADS)
Harabi, F.; Akkar, S.; Gharsallah, A.
2016-07-01
Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources using electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation can be applied to the collected data, which allows an important reduction in both computational cost and processing time, as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation requiring only a few linear operations. The DoA estimation algorithms developed from this approach show good behaviour with lower calculation cost and processing time compared to schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be achieved, especially for closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The simulations confirm the high resolution of the developed algorithms and the efficiency of the proposed approach.
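The centro-symmetry argument can be made concrete. For a centro-Hermitian covariance matrix (which forward-backward averaging guarantees), a fixed unitary matrix Q maps the matrix to a real symmetric one, so the subsequent eigendecomposition runs in real arithmetic. A minimal numpy sketch of this standard device follows; the function names are ours, and the exact transformation used in the article may differ.

import numpy as np

def unitary_Q(n):
    """Sparse unitary matrix mapping centro-Hermitian matrices to real ones."""
    m = n // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    if n % 2 == 0:
        return np.vstack([np.hstack([I, 1j * I]),
                          np.hstack([J, -1j * J])]) / np.sqrt(2)
    z = np.zeros((m, 1))
    return np.vstack([np.hstack([I, z, 1j * I]),
                      np.hstack([z.T, [[np.sqrt(2)]], z.T]),
                      np.hstack([J, z, -1j * J])]) / np.sqrt(2)

def real_covariance(R):
    """Forward-backward average R, then rotate it to a real symmetric matrix."""
    n = R.shape[0]
    J = np.fliplr(np.eye(n))
    R_fb = 0.5 * (R + J @ R.conj() @ J)   # centro-Hermitian by construction
    Q = unitary_Q(n)
    return (Q.conj().T @ R_fb @ Q).real   # eigendecomposition now runs in
                                          # real arithmetic, cutting cost

Real-valued eigendecomposition costs roughly a quarter of the complex-valued one, which is the source of the computational savings the abstract describes.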
The costs and cost-efficiency of providing food through schools in areas of high food insecurity.
Gelli, Aulo; Al-Shaiba, Najeeb; Espejo, Francisco
2009-03-01
The provision of food in and through schools has been used to support the education, health, and nutrition of school-aged children. The monitoring of financial inputs into school health and nutrition programs is critical for a number of reasons, including accountability, transparency, and equity. Furthermore, there is a gap in the evidence on the costs, cost-efficiency, and cost-effectiveness of providing food through schools, particularly in areas of high food insecurity. To estimate the programmatic costs and cost-efficiency associated with providing food through schools in food-insecure, developing-country contexts, we analyzed global project data from the World Food Programme (WFP). Project data, including expenditures and number of schoolchildren covered, were collected through project reports and validated through WFP Country Office records. Yearly project costs per schoolchild were standardized over a set number of feeding days and the amount of energy provided by the average ration. Output metrics, such as tonnage, calories, and micronutrient content, were used to assess the cost-efficiency of the different delivery mechanisms. The average yearly expenditure per child, standardized over a 200-day on-site feeding period and an average ration, excluding school-level costs, was US$21.59. The costs varied substantially according to choice of food modality, with fortified biscuits providing the least costly option of about US$11 per year and take-home rations providing the most expensive option at approximately US$52 per year. Comparisons across the different food modalities suggested that fortified biscuits provide the most cost-efficient option in terms of micronutrient delivery (particularly vitamin A and iodine), whereas on-site meals appear to be more efficient in terms of calories delivered. Transportation and logistics costs were the main drivers of the high costs. The choice of program objectives will to a large degree dictate the food modality (biscuits, cooked meals, or take-home rations) and associated implementation costs. Fortified biscuits can provide substantial nutritional inputs at a fraction of the cost of school meals, making them an appealing option for service delivery in food-insecure contexts. Both costs and effects should be considered carefully when designing the appropriate school-based intervention. The cost estimates in this analysis do not include all school-level costs and are therefore lower-bound estimates of full implementation costs.
Personalized recommendation based on unbiased consistence
NASA Astrophysics Data System (ADS)
Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao
2015-08-01
Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects already collected to those to be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases a user's interests are stable, and thus bidirectional mass-diffusion abilities, whether originating from objects already collected or from those to be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
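For reference, the unidirectional baseline that the letter argues against can be written in a few lines. The sketch below implements standard mass diffusion (ProbS) on a binary user-item matrix; the function name is ours, and the letter's bidirectional, consistence-based combination rule may differ from the comment at the end.

import numpy as np

def mass_diffusion_scores(A, user):
    """Unidirectional mass diffusion (ProbS) on a user-item bipartite net.
    A: (n_users, n_items) binary matrix; returns item scores for `user`."""
    k_user = A.sum(axis=1, keepdims=True).clip(min=1)   # user degrees
    k_item = A.sum(axis=0, keepdims=True).clip(min=1)   # item degrees
    # W[i, j]: fraction of item j's resource that lands on item i after an
    # item -> user -> item diffusion step
    W = (A.T @ (A / k_user)) / k_item
    f = W @ A[user]                       # diffuse from the user's items
    f[A[user] > 0] = -np.inf              # do not re-recommend owned items
    return f

# A consistence-based variant would also diffuse in the reverse direction
# (scores W.T @ A[user]) and combine the two, per the letter's argument.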
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm originally used to explain the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has quite small bias in many realistic settings. We conduct numerical studies to examine the finite-sample properties of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
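The redistribute-to-the-right idea itself is mechanical: walk through subjects in order of follow-up time and, whenever a subject is censored, hand its probability mass to everyone still under observation. A minimal sketch under simplifying assumptions (tied times are not treated specially; a censored subject with no one to its right keeps its mass) is below; the weighted mean at the end is the simple kind of estimator the paper builds from, not the paper's final estimator.

import numpy as np

def rr_weights(time, event):
    """Redistribute-to-the-right weights: each censored subject passes its
    mass, in time order, to all subjects with strictly longer follow-up."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event)
    w = np.full(time.size, 1.0 / time.size)
    for idx in np.argsort(time):
        if event[idx] == 0:
            later = time > time[idx]
            if later.any():                  # otherwise the mass stays put
                w[later] += w[idx] / later.sum()
                w[idx] = 0.0
    return w

# Weighted mean cost over complete cases as a simple mean-cost estimator
time = np.array([2.0, 5.0, 3.0, 8.0, 6.0])
event = np.array([1, 0, 1, 1, 1])            # 0 = censored
cost = np.array([10.0, 7.0, 12.0, 30.0, 22.0])
w = rr_weights(time, event)
complete = event == 1
mean_cost = np.sum(w[complete] * cost[complete]) / np.sum(w[complete])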
Efficient fault diagnosis of helicopter gearboxes
NASA Technical Reports Server (NTRS)
Chin, H.; Danai, K.; Lewicki, D. G.
1993-01-01
Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.
Consistent and efficient processing of ADCP streamflow measurements
Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements independent of the ADCP used to collect the data.
Tsumori, Nobuhiro; Takahashi, Motoki; Sakuma, Yoshiki; Saiki, Toshiharu
2011-10-10
We examined the near-field collection efficiency of near-infrared radiation for an aperture probe. We used InAs quantum dots as ideal point light sources with emission wavelengths ranging from 1.1 to 1.6 μm. We experimentally investigated the wavelength dependence of the collection efficiency and compared the results with computational simulations that modeled the actual probe structure. The observed degradation in the collection efficiency is attributed to the cutoff characteristics of the gold-clad tapered waveguide, which approaches an ideal conductor at near-infrared wavelengths. © 2011 Optical Society of America
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three and four time point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
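The simulation setting can be sketched compactly. Below is a minimal, hypothetical version of one Monte Carlo replicate: a one-compartment IV bolus model, lognormal inter-animal variability, proportional residual error, and destructive sampling (one concentration per animal) on a three time point design. The naive pooled fit at the end recovers only the typical parameter values; the study itself used population methods that also estimate the inter-animal variance. All numbers are illustrative.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Hypothetical population values: clearance (L/h) and volume (L)
CL_pop, V_pop = 0.5, 2.0
omega = 0.2          # inter-animal variability (sd on the log scale)
sigma = 0.10         # residual proportional error
dose = 10.0          # mg, IV bolus

def conc(t, CL, V):
    """One-compartment model, IV bolus: C(t) = (D/V) exp(-(CL/V) t)."""
    return dose / V * np.exp(-(CL / V) * t)

# Destructive design: 8 animals sampled once at each of three times (h)
times = np.repeat([0.25, 1.0, 4.0], 8)
CL_i = CL_pop * rng.lognormal(0.0, omega, times.size)
V_i = V_pop * rng.lognormal(0.0, omega, times.size)
y = conc(times, CL_i, V_i) * (1 + rng.normal(0, sigma, times.size))

# Naive pooled fit of the typical parameters
(CL_hat, V_hat), _ = curve_fit(conc, times, y, p0=[1.0, 1.0])
pct_err = 100 * (np.array([CL_hat, V_hat]) / [CL_pop, V_pop] - 1)
print(f"percent prediction error: CL {pct_err[0]:+.1f}%, V {pct_err[1]:+.1f}%")

Repeating such replicates over candidate time-point arrangements and summarizing the percent prediction errors is the essence of the design comparison described above.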
A novel estimating method for steering efficiency of the driver with electromyography signals
NASA Astrophysics Data System (ADS)
Liu, Yahui; Ji, Xuewu; Hayama, Ryouhei; Mizuno, Takahiro
2014-05-01
Existing research on steering efficiency mainly focuses on the mechanical efficiency of the steering system, aiming at designing and optimizing the steering mechanism. In the development of assist steering systems, and especially in the evaluation of their comfort, the steering efficiency of the driver's physiological output is usually not considered, because this physiological output is difficult to measure or estimate; the objective evaluation of steering comfort therefore cannot be conducted from a movement-efficiency perspective. In order to take a further step toward the objective evaluation of steering comfort, an estimating method for the steering efficiency of the driver was developed based on research into the relationship between steering force and muscle activity. First, the steering forces in the steering wheel plane and the electromyography (EMG) signals of the primary muscles were measured. These primary muscles are the muscles in the shoulder and upper arm which mainly produce the steering torque, and their functions in the steering maneuver were identified previously. Next, based on multiple regressions of the steering force and EMG signals, both the effective steering force and the total force capacity of the driver in the steering maneuver were calculated. Finally, the steering efficiency of the driver was estimated by means of the estimated effective force and the total force capacity, which represent the driver's physiological output of the primary muscles. This research develops a novel method for estimating the driver's steering efficiency in terms of physiological output, including the estimation of both the steering force and the force capacity of the primary muscles from EMG signals, and will benefit the objective evaluation of steering comfort.
Measuring the efficiency of zakat collection process using data envelopment analysis
NASA Astrophysics Data System (ADS)
Hamzah, Ahmad Aizuddin; Krishnan, Anath Rau
2016-10-01
It is necessary for each zakat institution in the nation to measure and understand, in a timely manner, its efficiency in collecting zakat for the sake of continuous betterment. Pusat Zakat Sabah, Malaysia, which began its operation in early 2007, is not exempt from this obligation. However, measuring collection efficiency is not an easy task, as it usually incorporates the consideration of multiple inputs and/or outputs. This paper sequentially employed three data envelopment analysis models, namely the Charnes-Cooper-Rhodes (CCR) primal model, the CCR dual model, and the slack-based model, to quantitatively evaluate the efficiency of zakat collection in Sabah across the years 2007 to 2015, treating each year as a decision making unit. The three models were developed based on two inputs (i.e. number of zakat branches and number of staff) and one output (i.e. total collection). The causes for not achieving efficiency and suggestions on how the efficiency in each year could have been improved are disclosed.
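The CCR step of such an analysis is a small linear program per decision making unit (DMU). Below is a hedged sketch of the input-oriented CCR envelopment model with scipy; the input/output numbers are made up for illustration and are not the Sabah data.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form).
    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    # variables: [theta, lambda_1..lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lam_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# two inputs (branches, staff) and one output (total collection); made-up data
X = np.array([[34, 36, 38, 40], [60, 62, 70, 75]], dtype=float)
Y = np.array([[20.0, 26.0, 27.0, 33.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]

Treating each year as a column (DMU) reproduces the CCR primal step of the paper; the dual and slack-based models then diagnose where the inefficiency comes from and by how much inputs or outputs would need to change.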
Potential Organ-Donor Supply and Efficiency of Organ Procurement Organizations
Guadagnoli, Edward; Christiansen, Cindy L.; Beasley, Carol L.
2003-01-01
The authors estimated the supply of organ donors in the U.S. and also according to organ procurement organizations (OPOs). They estimated the number of donors in the U.S. to be 16,796. Estimates of the number of potential donors for each OPO were used to calculate the level of donor efficiency (actual donors as a percent of potential donors). Overall, donor efficiency for OPOs was 35 percent; the majority was between 30- and 40-percent efficient. Although there is room to improve donor efficiency in the U.S., even a substantial improvement will not meet the Nation's demand for organs. PMID:14628403
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in a selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
Ibanescu, Dumitrita; Cailean Gavrilescu, Daniela; Teodosiu, Carmen; Fiore, Silvia
2018-03-01
The assessment of waste management systems for electrical and electronic equipment (WEEE) from developed economies (Germany, Sweden and Italy) and developing countries (Romania and Bulgaria) is discussed, covering the period 2007-2014. The WEEE management system profiles are depicted by indicators correlated to WEEE life cycle stages: collection, transportation and treatment. The sustainability of national WEEE management systems in terms of greenhouse gas emissions is presented, together with the greenhouse gas efficiency indicator that underlines the efficiency of WEEE treatment options. In the country comparisons, the key elements are: robust versus fragile economies, the overall waste management performance, and the existence/development of suitable management practices for WEEE. From the life cycle perspective, developed economies (Germany, Sweden and Italy) manage one order of magnitude higher quantities of WEEE compared to developing countries (Romania and Bulgaria). Although prevention and reduction measures are encouraged, all WEEE quantities were larger in 2013 than in 2007. In 2007-2014, developed economies exceeded the annual European collection target of 4 kg WEEE/capita, while collection remained difficult in developing countries. If collection rates are estimated in relation to products placed on the market, then similar values are registered in Sweden and Bulgaria, followed by Germany and Italy, and lastly Romania. WEEE transportation shows different patterns among countries, with Italy as the greatest exporter (in 2014), while Sweden treats its WEEE nationally. WEEE reuse is a common practice in Germany, Sweden (from 2009) and Bulgaria (from 2011). By 2014, recycling was the most preferred WEEE treatment option, with rates over 80% irrespective of the country, and with efforts in each country to develop special collection points, recycling facilities and support instruments. The national total and recycling carbon footprints of WEEE are lower in 2013 than in 2007 for each country, the order in reducing the environmental impacts being: Germany, Italy, Sweden, Bulgaria and Romania. The negative values indicate savings in greenhouse gas emissions. In 2013, the GHG efficiency shows no differences between WEEE management in the developed and developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Extracting Maximum Total Water Levels from Video "Brightest" Images
NASA Astrophysics Data System (ADS)
Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.
2016-02-01
An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup time series. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time-consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL, accounts for alongshore variability, and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. The technique is evaluated using video and topographic data collected every half-hour at Duck, NC, under differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed from concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. The technique is valuable because it can be used to routinely estimate maximum TWLs along a coastline from a single brightest-image product, and it provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in validating numerical hydrodynamic models and improving coastal change predictions.
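The per-column edge extraction can be illustrated with a minimal sketch. This is our simplified stand-in for the filtering and edge detection described above (a fixed intensity fraction instead of a tuned edge detector), with assumed image orientation; the function name and threshold are hypothetical.

import numpy as np
from scipy import ndimage

def max_twl_edge(brightest, frac=0.5):
    """Per-alongshore-column shoreward edge of the bright swash band in a
    10-minute 'brightest' image. Rows are cross-shore (land at row 0),
    columns are alongshore. Returns one row index per column, or -1."""
    img = ndimage.gaussian_filter(brightest.astype(float), sigma=2)
    edge = np.full(img.shape[1], -1, dtype=int)
    for col in range(img.shape[1]):
        profile = img[:, col]
        bright = np.flatnonzero(profile > frac * profile.max())
        if bright.size:
            edge[col] = bright.min()     # landward-most bright pixel
    return edge

# The edge pixels are then mapped to elevations using the photogrammetric
# camera geometry and surveyed beach topography (not shown here).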
Kuroda, T; Noma, H; Naito, C; Tada, M; Yamanaka, H; Takemura, T; Nin, K; Yoshihara, H
2013-01-01
Development of a clinical sensor network system that automatically collects vital signs and supplemental data, and evaluation of the effect of automatically assigning vital sensor values to patients based on the locations of the sensors. The sensor network estimates the data source, i.e. the target patient, from the position of a vital sign sensor obtained from a newly developed proximity sensing system. The proximity sensing system estimates the positions of the devices using a Bluetooth inquiry process. Using Bluetooth access points and the positioning system newly developed in this project, the sensor network collects vital signs and their 4W (who, where, what, and when) supplemental data from any Bluetooth-ready vital sign sensors, such as Continua-ready devices. The prototype was evaluated in a pseudo-clinical setting at Kyoto University Hospital using a cyclic paired comparison and statistical analysis. The result of the cyclic paired analysis shows that the subjects evaluated the proposed system as more effective and safer than POCS as well as paper-based operation. It halves the time for vital sign input and eliminates input errors. On the other hand, the prototype failed in its position estimation in 12.6% of all attempts, and the nurses overlooked half of the errors. A detailed investigation made clear that an advanced interface showing the system's "confidence", i.e. the probability of estimation error, would be effective in reducing the oversights. This paper proposed a clinical sensor network system that relieves nurses of vital sign input tasks. The results clearly show that the proposed system increases the efficiency and safety of the nursing process both subjectively and objectively. It is a step toward a new generation of point-of-nursing-care systems in which sensors take over the tasks of data input from the nurses.
Vehicle routing for the eco-efficient collection of household plastic waste.
Bing, Xiaoyun; de Keizer, Marlies; Bloemhof-Ruwaard, Jacqueline M; van der Vorst, Jack G A J
2014-04-01
Plastic waste is a special category of municipal solid waste. Plastic waste collection features various alternatives for collection methods (curbside/drop-off) and separation methods (source-/post-separation). In the Netherlands, the collection routes for plastic waste are the same as those for other waste, although plastic differs from other waste in its volume-to-weight ratio. This paper aims to redesign the collection routes and compares the collection options for plastic waste using eco-efficiency as the performance indicator. Eco-efficiency concerns the trade-off between environmental impacts, social issues and costs. The collection problem is modeled as a vehicle routing problem, and a tabu search heuristic is used to improve the routes. Collection alternatives are compared through a scenario study. Real distances between locations are calculated with MapPoint. The scenario study is conducted based on real case data from the Dutch municipality of Wageningen. Scenarios are designed according to the collection alternatives with different assumptions on collection method, vehicle type, collection frequency and collection points, etc. Results show that the current collection routes can be improved in terms of eco-efficiency performance by using our method. The source-separation drop-off collection scenario has the best performance for plastic collection, assuming householders take the waste to the drop-off points in a sustainable manner. The model is also shown to be an efficient decision-support tool for investigating the impacts of future changes such as alternative vehicle types and different response rates. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
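The routing core of such a study can be sketched compactly. Below is a toy tabu search over a swap neighborhood for a single collection route, optimizing distance only; the paper's full model handles multiple vehicles, capacities, and the emission and cost components of eco-efficiency, and its tabu implementation surely differs in detail. All names and parameters are ours.

import numpy as np
from itertools import combinations

def route_length(route, D):
    """Length of a depot -> stops -> depot tour; the depot is node 0."""
    path = [0] + list(route) + [0]
    return sum(D[a, b] for a, b in zip(path, path[1:]))

def tabu_search(D, n_iter=300, tenure=10, seed=0):
    """Improve one collection route with a swap-neighborhood tabu search."""
    rng = np.random.default_rng(seed)
    n = D.shape[0] - 1                        # customers are nodes 1..n
    route = list(rng.permutation(np.arange(1, n + 1)))
    best, best_len = route[:], route_length(route, D)
    tabu = {}                                 # move -> first legal iteration
    for it in range(n_iter):
        cand = None
        for i, j in combinations(range(n), 2):
            if tabu.get((i, j), 0) > it:
                continue                      # move currently forbidden
            trial = route[:]
            trial[i], trial[j] = trial[j], trial[i]
            length = route_length(trial, D)
            if cand is None or length < cand[0]:
                cand = (length, trial, (i, j))
        length, route, move = cand            # take the best legal neighbor
        tabu[move] = it + tenure              # forbid undoing the move
        if length < best_len:
            best, best_len = route[:], length
    return best, best_len

The tabu list is what lets the search accept non-improving moves without cycling, which is the feature that distinguishes it from plain local search.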
Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data
NASA Astrophysics Data System (ADS)
Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio
2016-02-01
Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respirations. The NEP of different forest ecosystems in Italy was simulated by the combined use of a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes under actual conditions using information regarding the existing woody biomass. The estimates derived from the methodology have been tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology, which was driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). A good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets worsens the accuracy to a varying degree, which is properly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).
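For readers unfamiliar with Monteith's approach, its core is a light-use efficiency relation; in generic form (the exact stress modifiers used in modified C-Fix differ in detail):

\[
\mathrm{GPP} = \varepsilon \cdot \mathrm{fAPAR} \cdot \mathrm{PAR},
\qquad
\mathrm{NEP} = \mathrm{GPP} - R_{\mathrm{aut}} - R_{\mathrm{het}},
\]

where \(\varepsilon\) is the (climate-modulated) light-use efficiency, fAPAR the fraction of absorbed photosynthetically active radiation, PAR the incident photosynthetically active radiation, and \(R_{\mathrm{aut}}\), \(R_{\mathrm{het}}\) the autotrophic and heterotrophic respiration terms whose simulation makes NEP the harder quantity to estimate.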
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnthouse, L. W.; Van Winkle, W.; Golumbek, J.
1982-04-01
This volume includes a series of four exhibits relating to impacts of impingement on fish populations, together with a collection of critical evaluations of testimony prepared for the utilities by their consultants. The first exhibit is a quantitative evaluation of four sources of bias (collection efficiency, reimpingement, impingement on inoperative screens, and impingement survival) affecting estimates of the number of fish killed at Hudson River power plants. The two following exhibits contain, respectively, a detailed assessment of the impact of impingement on the Hudson River white perch population and estimates of conditional impingement mortality rates for seven Hudson River fish populations. The fourth exhibit is an evaluation of the engineering feasibility and potential biological effectiveness of several types of modified intake structures proposed as alternatives to cooling towers for reducing impingement impacts. The remainder of Volume II consists of critical evaluations of the utilities' empirical evidence for the existence of density-dependent growth in young-of-the-year striped bass and white perch, of their estimate of the age-composition of the striped bass spawning stock in the Hudson River, and of their use of the Lawler, Matusky, and Skelly (LMS) Real-Time Life Cycle Model to estimate the impact of entrainment and impingement on the Hudson River striped bass population.
Increasing the Confidence of African Carbon Cycle Assessments
NASA Astrophysics Data System (ADS)
Ardö, Jonas
2016-04-01
Scarcity of in situ measurements of greenhouse gas (GHG) fluxes hampers calibration and validation of carbon budget assessments in Africa and limits essential studies of ecosystem function and ecosystem processes. The wide range of reported net primary production (NPP) and gross primary production (GPP) values for continental Africa is partly a function of the uncertainty originating from this data scarcity. GPP estimates, based on vegetation models and remote sensing based models, range from ~17 to ~40 Pg C yr-1, and NPP estimates roughly range from ~7 to ~20 Pg C yr-1 for continental Africa. According to the MOD17 product, Africa contributes about 23% of the global GPP and about 25% of the global NPP; these percentages have recently increased slightly. Differences in modeled carbon use efficiency (i.e. the NPP/GPP ratio) further enhance the uncertainty caused by low spatial resolution driver data sets when deriving NPP from GPP. The current substantial uncertainty in vegetation productivity estimates for Africa (in both magnitude and carbon use efficiency) may be reduced by an increased abundance and availability of in situ collected field data, including meteorology, radiation, spectral properties and GHG fluxes, as well as long-term ecological field experiments. Current measurements of GHG fluxes in Africa are sparse and poorly coordinated. The European Fluxes Database Cluster includes ~24 African sites with carbon flux data, most of them with a small amount of data in short time series. Large and diverse biomes such as the evergreen broadleaf forest are under-represented, whereas savannas are slightly better represented. The USA, for example, with 171 flux sites listed in FLUXNET, has a flux site density of 17 sites per million km2, whereas Africa has a density of 0.8 sites per million km2. Increased and coordinated collection of data on GHG fluxes, ecosystem properties and processes, both through advanced micrometeorological measurements and through cost-effective, straightforward field experiments, can help reduce the uncertainty in the quantification of the African carbon budget. Climatic adaptation of resource production systems such as agriculture, pastoralism, agroforestry and forestry could also benefit from additional knowledge gained from local studies of GHG fluxes, ecology, ecosystem services, plant physiology and management. It seems reasonable that the COP21 funding enabling countries to adapt to the impacts of climate change should also support measurements and data collection, hence providing knowledge-based backing of adaptation to climate change in Africa.
Highly efficient lithium composite anode with hydrophobic molten salt in seawater
NASA Astrophysics Data System (ADS)
Zhang, Yancheng; Urquidi-Macdonald, Mirna
A lithium composite anode (lithium/1-butyl-3-methyl-imidazolium hexafluorophosphate (BMI+PF6-)/4-VLZ) for primary lithium/seawater semi-fuel-cells is proposed to reduce the lithium-water parasitic reaction and, hence, increase the lithium anodic efficiency up to 100%. The lithium composite anode was activated when in contact with artificial seawater (3% NaCl solution), and the output was a stable anodic current density of 0.2 mA/cm2, which lasted about 10 h under potentiostatic polarization at +0.5 V versus open circuit potential (OCP); the anodic efficiency was indirectly measured to be 100%. With time, small traces of water diffused through the hydrophobic molten salt, BMI+PF6-, reached the lithium interface, and formed a double-layer film (LiH/LiOH). Accordingly, the current density decreased and the anodic efficiency was estimated to be 90%. The hypothesis of small traces of water penetrating the molten salt and reaching the lithium anode after several hours of operation is supported by the collected experimental current density and hydrogen evolution, electrochemical impedance spectrum analysis, and non-mechanistic modeling of the lithium/BMI+PF6- interface film.
Energy Efficiency of Biogas Produced from Different Biomass Sources
NASA Astrophysics Data System (ADS)
Begum, Shahida; Nazri, A. H.
2013-06-01
Malaysia has different sources of biomass, like palm oil waste, agricultural waste, cow dung, sewage waste and landfill sites, which can be used to produce biogas and serve as a source of energy. Depending on the type of biomass, the biogas produced can have a different calorific value. At the same time, the energy used to produce biogas depends on transportation distance, means of transportation, conversion techniques, and the handling of raw materials and digested residues. An energy systems analysis approach based on the literature is applied to calculate the energy efficiency of biogas produced from biomass. The methodology comprises collecting data, proposing locations, and estimating the energy input needed to produce biogas and the output obtained from the generated biogas. The study showed that palm oil waste and municipal solid waste are two potential sources of biomass. The energy efficiency of biogas produced from palm oil residues and municipal solid wastes is 1.70 and 3.33, respectively. Municipal solid waste has the higher energy efficiency due to shorter transportation distance and lower electricity consumption. Despite the inherent uncertainties in the calculations, it can be concluded that using biomass for biogas production is a promising energy alternative.
Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H
2017-07-01
Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency for three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent in each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach, with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can greatly improve estimator performance and simplify field protocols. Although we focused on distance-sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
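The CDS estimator favored in this comparison is simple enough to state in full for the half-normal case. A minimal sketch, ignoring truncation distance and covariates, is below; an informative prior on sigma from past surveys, as the authors advocate, would replace the MLE line with a posterior summary. The function name and example numbers are ours.

import numpy as np

def cds_halfnormal(distances, L):
    """Conventional distance sampling (CDS) on line transects with a
    half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)).
    distances: perpendicular detection distances; L: total transect length."""
    x = np.asarray(distances, dtype=float)
    sigma2 = np.mean(x ** 2)              # MLE of sigma^2 (no truncation)
    mu = np.sqrt(np.pi * sigma2 / 2.0)    # effective strip half-width
    D = x.size / (2.0 * L * mu)           # density: animals per unit area
    return D, np.sqrt(sigma2)

# e.g. distances in km from an aerial survey, transect length in km
D_hat, sigma_hat = cds_halfnormal([0.02, 0.05, 0.11, 0.04, 0.08], L=120.0)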
Barhoum, Erek; Johnston, Richard; Seibel, Eric
2005-09-19
An optical model of an ultrathin scanning fiber endoscope was constructed using a non-sequential ray tracing program and used to study the relationship between fiber deflection and collection efficiency from tissue. The problem of low collection efficiency of confocal detection through the scanned single-mode optical fiber was compared to non-confocal cladding detection. Collection efficiency is 40x greater in the non-confocal versus the confocal geometry because the majority of rays incident on the core fall outside the numerical aperture. Across scan angles of 0 to 30°, collection efficiency decreases from 14.4% to 6.3% for the non-confocal design, compared to 0.34% to 0.10% for the confocal design. Non-confocality provides higher and more uniform collection efficiencies at larger scan angles while sacrificing the confocal spatial filter.
Efficiency of aerosol collection on wires exposed in the stratosphere
NASA Technical Reports Server (NTRS)
Lem, H. Y.; Farlow, N. H.
1979-01-01
The theory of inertial impaction is briefly presented. Stratospheric aerosol research experiments were performed duplicating the experiments of Wong et al. The use of the curve of inertial parameter versus particle collection efficiency, derived from Wong et al., was found to be justified. The results show that stratospheric aerosol particles of all sizes are collectible by the wire impaction technique. Curves and tables are presented and used to correct particle counts for collection efficiencies less than 100%.
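The inertial parameter referred to here is, in its standard form, the Stokes number of a particle approaching a cylinder; definitions vary by constant factors between authors, so the following is indicative rather than the exact form used by Wong et al.:

\[
K = \frac{C_c \, \rho_p \, d_p^{2} \, U}{18\, \mu \, d_w}
\]

where \(\rho_p\) and \(d_p\) are the particle density and diameter, \(U\) the free-stream velocity relative to the wire, \(\mu\) the gas viscosity, \(d_w\) the wire diameter, and \(C_c\) the Cunningham slip correction (important at stratospheric pressures). The collection efficiency is then read from the empirical \(\eta(K)\) curve.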
de Freitas, Normanda L; Gonçalves, José A S; Innocentini, Murilo D M; Coury, José R
2006-08-25
The performance of double-layered ceramic filters for aerosol filtration at high temperatures was evaluated in this work. The filtering structure was composed of two layers: a thin granular membrane deposited on a reticulate ceramic support of high porosity. The goal was to minimize the high pressure drop inherent to granular structures without decreasing their high collection efficiency for small particles. The reticulate support was developed using the technique of ceramic replication of polyurethane foam substrates of 45 and 75 pores per inch (ppi). The filtering membrane was prepared by depositing a thin layer of granular alumina-clay paste on one face of the support. The filters had their permeability and fractional collection efficiency analyzed for filtration of an airborne suspension of phosphatic rock at temperatures ranging from ambient to 700 degrees C. Results revealed that collection efficiency decreased with gas temperature and increased with filtration time. The support layer also influenced the collection efficiency: the 75 ppi support was more effective than the 45 ppi one. Particle collection efficiency dropped considerably for particles below 2 microns in diameter. The maximum collection occurred for particle diameters of approximately 3 microns, and efficiency decreased again for diameters between 4 and 8 microns. This trend was successfully represented by the proposed correlation, which is based on the classical mechanisms acting on particle collection. Inertial impaction seems to be the predominant collection mechanism, with particle bouncing/re-entrainment acting as detachment mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
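For orientation, the analysis step that SEOD wraps its design loop around is the standard EnKF update. A minimal numpy sketch for a linear observation operator is below; the sequential design then chooses, before each such update, the measurement locations that maximize an information metric such as SD, DFS, or RE. Function and variable names are ours, and `rng` is assumed to be a numpy Generator (e.g. np.random.default_rng()).

import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n_state, N) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) linear observation operator; R: obs error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    PHt = A @ (H @ A).T / (N - 1)                  # P H^T without forming P
    S = H @ PHt + R                                # innovation covariance
    K = PHt @ np.linalg.inv(S)                     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, N).T
    return X + K @ (Y - H @ X)                     # updated ensemble

Because the gain is built from the ensemble anomalies, the informativeness of a candidate measurement (and hence metrics like DFS) can be evaluated from the same anomaly matrices before any data are actually collected.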
Lengths of nephron tubule segments and collecting ducts in the CD-1 mouse kidney: an ontogeny study.
Walton, Sarah L; Moritz, Karen M; Bertram, John F; Singh, Reetu R
2016-11-01
The kidney continues to mature postnatally, with significant elongation of nephron tubules and collecting ducts to maintain fluid/electrolyte homeostasis. The aim of this project was to develop methodology to estimate lengths of specific segments of nephron tubules and collecting ducts in the CD-1 mouse kidney using a combination of immunohistochemistry and design-based stereology (vertical uniform random sections with cycloid arc test system). Lengths of tubules were determined at postnatal day 21 (P21) and 2 and 12 mo of age and also in mice fed a high-salt diet throughout adulthood. Immunohistochemistry was performed to identify individual tubule segments [aquaporin-1, proximal tubules (PT) and thin descending limbs of Henle (TDLH); uromodulin, distal tubules (DT); aquaporin-2, collecting ducts (CD)]. All tubular segments increased significantly in length between P21 and 2 mo of age (PT, 602% increase; DT, 200% increase; TDLH, 35% increase; CD, 53% increase). However, between 2 and 12 mo, a significant increase in length was only observed for PT (76% increase in length). At 12 mo of age, kidneys of mice on a high-salt diet demonstrated a 27% greater length of the TDLH, but no significant change in length was detected for PT, DT, and CD compared with the normal-salt group. Our study demonstrates an efficient method of estimating lengths of specific segments of the renal tubular system. This technique can be applied to examine structure of the renal tubules in combination with the number of glomeruli in the kidney in models of altered renal phenotype. Copyright © 2016 the American Physiological Society.
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
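As a concrete instance of the result, the Cauchy location family is the textbook case where a k-step Newton estimator started from a sqrt(n)-consistent value attains the efficiency of the MLE. The sketch below is our construction, not taken from the report.

import numpy as np

def k_step_newton_cauchy(x, k=2):
    """k-step Newton refinement of a sqrt(n)-consistent start (the sample
    median) toward the MLE of a Cauchy location parameter (unit scale)."""
    x = np.asarray(x, dtype=float)
    theta = np.median(x)
    for _ in range(k):
        u = x - theta
        # score: sum of d/dtheta log f = 2u / (1 + u^2)
        score = np.sum(2.0 * u / (1.0 + u ** 2))
        # observed information; the expected information n/2 is a common
        # stabilization when this is small or negative far from the optimum
        info = np.sum(2.0 * (1.0 - u ** 2) / (1.0 + u ** 2) ** 2)
        theta += score / info
    return theta

Already after k = 1 or 2 steps the estimator is asymptotically equivalent to the full MLE, which is the closeness property the abstract establishes in general.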
Madi, Mahmoud K; Karameh, Fadi N
2017-01-01
Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques therefore carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potential (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating a variety of biological phenomena models where the underlying process dynamics unfold at time scales faster than those seen in collected measurements. PMID:28727850
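To fix ideas, the discrete CKF cycle referenced here is compact: 2n cubature points with equal weights replace the Jacobians of the extended Kalman filter. A minimal numpy sketch follows; the CD-CKF additionally integrates the continuous dynamics between samples (e.g. with an Ito-Taylor scheme), which this sketch omits. Function names are ours.

import numpy as np
from numpy.linalg import cholesky, inv

def ckf_step(x, P, f, h, Q, R, y):
    """One predict-update cycle of the discrete Cubature Kalman Filter.
    f, h: state-transition and measurement functions acting on 1-D arrays;
    Q, R: process and measurement noise covariances; y: measurement."""
    n = x.size
    def cubature_points(m, C):
        # 2n points m +/- sqrt(n) * columns of chol(C), each weight 1/(2n)
        S = cholesky(C)
        xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
        return m[:, None] + S @ xi
    # --- prediction ---
    Xf = np.apply_along_axis(f, 0, cubature_points(x, P))
    x_pred = Xf.mean(axis=1)
    dXf = Xf - x_pred[:, None]
    P_pred = dXf @ dXf.T / (2 * n) + Q
    # --- measurement update ---
    Xu = cubature_points(x_pred, P_pred)
    Z = np.apply_along_axis(h, 0, Xu)
    z_pred = Z.mean(axis=1)
    dZ = Z - z_pred[:, None]
    dX = Xu - x_pred[:, None]
    Pzz = dZ @ dZ.T / (2 * n) + R
    Pxz = dX @ dZ.T / (2 * n)
    K = Pxz @ inv(Pzz)
    return x_pred + K @ (y - z_pred), P_pred - K @ Pzz @ K.T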
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
Arvidsson, Tommy; Bergström, Lars; Kreuger, Jenny
2011-06-01
In this study, the collecting efficiency of different samplers of airborne drift was compared both in wind tunnel and in field experiments. The aim was to select an appropriate sampler for collecting airborne spray drift under field conditions. The wind tunnel study examined three static samplers and one dynamic sampler. The dynamic sampler had the highest overall collecting efficiency. Among the static samplers, the pipe cleaner collector had the highest efficiency. These two samplers were selected for evaluation in the subsequent field study. Results from 29 individual field experiments showed that the pipe cleaner collector on average had a 10% lower collecting efficiency than the dynamic sampler. However, the deposits on the pipe cleaners generally were highest at the 0.5 m level, and for the dynamic sampler at the 1 m level. It was concluded from the wind tunnel part of the study that the amount of drift collected on the static collectors had a more strongly positive correlation with increasing wind speed compared with the dynamic sampler. In the field study, the difference in efficiency between the two types of collector was fairly small. As the difference in collecting efficiency between the different types of sampler was small, the dynamic sampler was selected for further measurements of airborne drift under field conditions owing to its more well-defined collecting area. This study of collecting efficiency of airborne spray drift of static and dynamic samplers under field conditions contributes to increasing knowledge in this field of research. Copyright © 2011 Society of Chemical Industry.
Trophic transfer efficiency of DDT to lake trout (Salvelinus namaycush) from their prey
Madenjian, C.P.; O'Connor, D.V.
2004-01-01
The objective of our study was to determine the efficiency with which lake trout retain DDT from their natural food. Our estimate of DDT assimilation efficiency would represent the most realistic estimate, to date, for use in risk assessment models.
Influence of multidroplet size distribution on icing collection efficiency
NASA Technical Reports Server (NTRS)
Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.
1983-01-01
Calculation of collection efficiencies of two-dimensional airfoils is carried out for both a monodispersed and a multidispersed droplet icing cloud. Comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne particle droplet size sampling instruments. The bias introduced by sampling from different collection volumes is predicted.
Spatiotemporal movement planning and rapid adaptation for manual interaction.
Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan
2013-01-01
Many everyday tasks require two or more individuals to coordinate their actions to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose, we studied a simple manual handover task in several experiments, concentrating on the actions of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with a prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a priori expectation and online estimation.
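The abstract does not give the model equations, but the core idea, fusing a trial-updated Gaussian prior with online sensory evidence, can be sketched with the standard product-of-Gaussians update. All numbers and names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse(mu_prior, var_prior, mu_obs, var_obs):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian
    sensory likelihood (the standard product-of-Gaussians update)."""
    w = var_obs / (var_prior + var_obs)
    mu_post = w * mu_prior + (1.0 - w) * mu_obs
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_obs)
    return mu_post, var_post

# Illustrative trial loop: the receiver's belief about the handover
# position sharpens as the prior is updated across trials.
rng = np.random.default_rng(0)
mu, var = 0.0, 1.0               # prior over handover position (m), hypothetical
true_pos, sensor_var = 0.3, 0.25 # hypothetical ground truth and sensory noise
for trial in range(5):
    obs = rng.normal(true_pos, np.sqrt(sensor_var))
    mu, var = fuse(mu, var, obs, sensor_var)
    print(f"trial {trial}: estimate {mu:.3f}, variance {var:.3f}")
```

In the study itself the evidence also accumulates within a trial as the deliverer's hand movement unfolds; this sketch only shows the across-trial prior update that makes reaction times shorten over repetitions.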
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique achieves coarse frequency estimation (locating the peak of the FFT amplitude spectrum) more efficiently than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
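The paper's "modified" zero-crossing details are not reproduced here, but a simplified coarse-then-fine scheme in the same spirit can be sketched as follows. It assumes the fundamental dominates the detrended signal, and the fine step uses parabolic interpolation of the windowed FFT magnitude around the coarse bin, one common refinement choice rather than necessarily the authors'.

```python
import numpy as np

def coarse_freq_zero_crossing(x, fs):
    """Coarse frequency from the mean spacing of zero crossings.
    Assumes the fundamental dominates the (detrended) signal."""
    x = x - np.mean(x)
    s = np.signbit(x).astype(np.int8)
    idx = np.nonzero(np.diff(s))[0]          # crossing positions (samples)
    if len(idx) < 2:
        return 0.0
    half_period = np.mean(np.diff(idx))      # samples per half-period
    return fs / (2.0 * half_period)

def refine_freq_fft(x, fs, f0):
    """Refine around the coarse estimate with a parabolic fit to the
    log-magnitude FFT near the bin closest to f0."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = min(max(int(round(f0 * n / fs)), 1), len(spec) - 2)
    a, b, c = np.log(spec[k - 1:k + 2] + 1e-12)
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # fractional bin offset
    return (k + delta) * fs / n

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 50.3 * t) + 0.3 * np.sin(2 * np.pi * 150.9 * t)
f0 = coarse_freq_zero_crossing(x, fs)
print(refine_freq_fft(x, fs, f0))            # close to 50.3 Hz
```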
The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.
Liu, Chunping; Laporte, Audrey; Ferguson, Brian S
2008-09-01
In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
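To make the Monte Carlo setup concrete, here is a hedged sketch of the quantile-regression route to efficiency measurement under a Cobb-Douglas data-generating process, using statsmodels. The half-normal inefficiency term, the parameter values, and the choice of the 0.9 quantile are illustrative assumptions, not the paper's design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 10, n)
v = rng.normal(0, 0.1, n)              # symmetric noise
u = np.abs(rng.normal(0, 0.3, n))      # one-sided technical inefficiency
# Cobb-Douglas frontier in logs: ln y = b0 + b1 ln x + v - u
df = pd.DataFrame({"lny": 1.0 + 0.6 * np.log(x) + v - u,
                   "lnx": np.log(x)})

# A high conditional quantile approximates the production frontier;
# unit-level inefficiency is read off as the shortfall below the fit.
fit = smf.quantreg("lny ~ lnx", df).fit(q=0.9)
print(fit.params)
df["ineff"] = fit.predict(df) - df["lny"]
print(df["ineff"].describe())
```

Repeating this over many simulated samples and comparing the recovered inefficiencies against the known u is essentially the experiment the abstract describes, with DEA and SFA fitted to the same draws.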
Using Reflectance Measurements to Determine Ecosystem Light Use Efficiency
NASA Astrophysics Data System (ADS)
Huemmrich, K. F.; Middleton, E. M.; Hall, F. G.; Knox, R. G.; Walter-Shea, E.; Verma, S. B.
2006-05-01
Understanding the dynamics of the global carbon cycle requires an accurate determination of the spatial and temporal distribution of photosynthetic CO2 uptake by terrestrial vegetation. Remote sensing observations may provide the spatially extensive observations required for this type of analysis. A light use efficiency model is one approach to modeling carbon fluxes driven by remotely sensed inputs. Photosynthetic down-regulation has been associated with changes in the apparent spectral reflectance of leaves, and these responses may permit the estimation of ecosystem photosynthetic light use efficiency (LUE). At a prairie site in Oklahoma, CO2 flux measurements from an eddy covariance system along with biophysical data were collected through 1998 and 1999. During the growing seasons, hyperspectral reflectance measurements were collected in nearby plots at multiple times in a day at approximately monthly intervals. LUE is calculated as the ratio of carbon uptake by the ecosystem to the photosynthetically active radiation (PAR) absorbed by green leaves. The LUE values are compared with reflectance indexes, examining how relationships vary over hours, months, and years. For this system a number of different reflectance indexes have been found to correlate with LUE, including the Photochemical Reflectance Index (PRI) and the Structure Independent Pigment Index (SIPI), as well as spectral first derivatives at 460, 550, and 615 nm and second derivatives at 510 and 620 nm. This methodology provides a nondestructive, repeatable, direct comparison between ecosystem carbon fluxes and spectral reflectance at scales relevant to remote sensing.
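The two quantities at the heart of this approach are simple ratios, sketched below. The PRI band choice (531/570 nm) follows the common definition, and the sign convention assumes eddy-covariance NEE is negative for net uptake; both are assumptions, since the abstract does not spell them out.

```python
def lue(nee_umol_m2_s, par_umol_m2_s, fapar_green):
    """Light use efficiency: carbon uptake per unit absorbed PAR.
    Assumes NEE < 0 for net uptake (eddy-covariance convention) and
    APAR = PAR * fraction absorbed by green leaves."""
    apar = par_umol_m2_s * fapar_green
    return -nee_umol_m2_s / apar if apar > 0 else float("nan")

def pri(r531, r570):
    """Photochemical Reflectance Index from reflectances at 531 and 570 nm."""
    return (r531 - r570) / (r531 + r570)

print(lue(-18.0, 1500.0, 0.75))   # illustrative midday values
print(pri(0.045, 0.050))          # slightly negative under down-regulation
```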
Effect of air flow on tubular solar still efficiency.
Thirugnanasambantham, Arunkumar; Rajan, Jayaprakash; Ahsan, Amimul; Kandasamy, Vinothkumar
2013-01-01
An experimental work was reported to estimate the increase in distillate yield for a compound parabolic concentrator-concentric tubular solar still (CPC-CTSS). The CPC dramatically increases the heating of the saline water. A novel idea was proposed to study the characteristic features of CPC for desalination to produce a large quantity of distillate yield. A rectangular basin of dimension 2 m × 0.025 m × 0.02 m was fabricated of copper and was placed at the focus of the CPC. This basin is covered by two cylindrical glass tubes of length 2 m with two different diameters of 0.02 m and 0.03 m. The experimental study was operated with two modes: without and with air flow between inner and outer tubes. The rate of air flow was fixed throughout the experiment at 4.5 m/s. On the basis of performance results, the water collection rate was 1445 ml/day without air flow and 2020 ml/day with air flow, and the efficiencies were 16.2% and 18.9%, respectively.
Potentials for Platooning in U.S. Highway Freight Transport: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muratori, Matteo; Holden, Jacob; Lammert, Michael
2017-03-15
Smart technologies enabling connection among vehicles and between vehicles and infrastructure as well as vehicle automation to assist human operators are receiving significant attention as means for improving road transportation systems by reducing fuel consumption - and related emissions - while also providing additional benefits through improving overall traffic safety and efficiency. For truck applications, currently responsible for nearly three-quarters of the total U.S. freight energy use and greenhouse gas (GHG) emissions, platooning has been identified as an early feature for connected and automated vehicles (CAVs) that could provide significant fuel savings and improved traffic safety and efficiency without radical design or technology changes compared to existing vehicles. A statistical analysis was performed based on a large collection of real-world U.S. truck usage data to estimate the fraction of total miles that are technically suitable for platooning. In particular, our analysis focuses on estimating 'platoonable' mileage based on overall highway vehicle use and prolonged high-velocity traveling, establishing that about 65% of the total miles driven by combination trucks could be driven in platoon formation, leading to a 4% reduction in total truck fuel consumption. This technical potential for 'platoonable' miles in the U.S. provides an upper bound for scenario analysis considering fleet willingness to platoon as an estimate of overall benefits of early adoption of CAV technologies. A benefit analysis is proposed to assess the overall potential for energy savings and emissions mitigation by widespread implementation of highway platooning for trucks.
Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-03-01
Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems- two-way static, three-way static, and three-way truncated sequential sampling-at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
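The two-way static classification rule described can be characterized with simple binomial calculations. The sketch below assumes the rule "classify an area as high-MDR when at least d of n sampled new cases are multidrug resistant"; the values of n and d are illustrative, not the surveys' actual design values.

```python
from scipy.stats import binom

def lqas_operating_characteristics(n, d, p_low=0.02, p_high=0.10):
    """Two-way static LQAS rule: classify as 'high MDR' when the number
    of MDR cases among n sampled patients is >= d.

    Returns the two classification error probabilities at the design
    thresholds (here the paper's 2% / 10% threshold pair)."""
    alpha = binom.sf(d - 1, n, p_low)    # P(classified high | prevalence = p_low)
    beta = binom.cdf(d - 1, n, p_high)   # P(classified low  | prevalence = p_high)
    return alpha, beta

# Illustrative design: n = 50 sampled cases, decision threshold d = 3.
alpha, beta = lqas_operating_characteristics(50, 3)
print(f"misclassify low-MDR area as high: {alpha:.3f}")
print(f"misclassify high-MDR area as low: {beta:.3f}")
```

Scanning n and d over a grid of such error pairs is the usual way to pick a design; truncated sequential variants stop early once the running count already forces one classification.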
Choi, Dong Yun; An, Eun Jeong; Jung, Soo-Ho; Song, Dong Keun; Oh, Yong Suk; Lee, Hyung Woo; Lee, Hye Moon
2018-04-10
Through the direct decomposition of an Al precursor ink, AlH3{O(C4H9)2}, we fabricated an Al-coated conductive fiber filter for the efficient electrostatic removal of airborne particles (>99%) with a low pressure drop (~several pascals). The effects of the electrical and structural properties of the filters were investigated in terms of collection efficiency, pressure drop, and particle deposition behavior. The collection efficiency did not show a significant correlation with the extent of electrical conductivity, as the filter is electrostatically charged by the metallic Al layers forming electrical networks throughout the fibers. Most of the charged particles were collected via surface filtration by Coulombic interactions; consequently, the filter thickness had little effect on the collection efficiency. Based on simulations of various fiber structures, we found that surface filtration can transition to depth filtration depending on the extent of interfiber distance. Therefore, the effects of structural characteristics on collection efficiency varied depending on the degree of the fiber packing density. This study will offer valuable information pertaining to the development of a conductive metal/polymer composite air filter for an energy-efficient and high-performance electrostatic filtration system.
Westbrook, Johanna I; Ampt, Amanda
2009-04-01
Evidence regarding how health information technologies influence clinicians' patterns of work and support efficient practices is limited. Traditional paper-based data collection methods are unable to capture clinical work complexity and communication patterns. The use of electronic data collection tools for such studies is emerging yet is rarely assessed for reliability or validity. Our aim was to design, apply, and test an observational method, incorporating an electronic data collection tool for work measurement studies, that would allow efficient, accurate, and reliable data collection and capture greater degrees of work complexity than current approaches. We developed an observational method and software for personal digital assistants (PDAs) which captures multiple dimensions of clinicians' work tasks, namely what task, with whom, and with what; tasks conducted in parallel (multi-tasking); interruptions; and task duration. During field-testing over 7 months across four hospital wards, fifty-two nurses were observed for 250 h. Inter-rater reliability was tested, and validity was measured by (i) assessing whether observational data reflected known differences in clinical role work tasks and (ii) comparing observational data with participants' estimates of their task time distribution. Observers took 15-20 h of training to master the method and data collection process. Only 1% of tasks observed did not match the classification developed and were classified as 'other'. Inter-rater reliability scores of observers were maintained at over 85%. The results discriminated between the work patterns of enrolled and registered nurses, consistent with differences in their roles. Survey data (n=27) revealed consistent ratings of tasks by nurses, and their rankings of most to least time-consuming tasks were significantly correlated with those derived from the observational data. Over 40% of nurses' time was spent in direct care or professional communication, with 11.8% of time spent multi-tasking. Nurses were interrupted approximately every 49 min. One quarter of interruptions occurred while nurses were preparing or administering medications. This method efficiently produces reliable and valid data. The multi-dimensional nature of the data collected provides greater insights into patterns of clinicians' work and communication than has previously been possible using other methods.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for improving Medicare program efficiency and to reward suggesters for monetary savings. 420.410... Program Efficiency and to Reward Suggesters for Monetary Savings § 420.410 Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary...
Code of Federal Regulations, 2011 CFR
2011-10-01
... for improving Medicare program efficiency and to reward suggesters for monetary savings. 420.410... Program Efficiency and to Reward Suggesters for Monetary Savings § 420.410 Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-21
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy; Agency Information Collection Extension AGENCY: Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy (DOE..., DC 20503 And to Mr. Dana O'Hara, Office of Energy Efficiency and Renewable Energy (EE- 2G), U.S...
Hata, Akihiko; Katayama, Hiroyuki; Kojima, Keisuke; Sano, Shoichi; Kasuga, Ikuro; Kitajima, Masaaki; Furumai, Hiroaki
2014-01-15
Rainfall events can introduce large amounts of microbial contaminants, including human enteric viruses, into surface water through intermittent discharges from combined sewer overflows (CSOs). The present study aimed to investigate the effect of rainfall events on viral loads in surface waters impacted by CSOs and the reliability of molecular methods for detection of enteric viruses. The reliability of virus detection in the samples was assessed by using process controls for the virus concentration, nucleic acid extraction, and reverse transcription (RT)-quantitative PCR (qPCR) steps, which allowed accurate estimation of virus detection efficiencies. Recovery efficiencies of poliovirus in river water samples collected during rainfall events (<10%) were lower than those during dry weather conditions (>10%). The log10-transformed virus concentration efficiency was negatively correlated with suspended solids concentration (r² = 0.86), which increased significantly during rainfall events. Efficiencies of the DNA extraction and qPCR steps, determined with adenovirus type 5 and a primer-sharing control, respectively, were lower in dry weather. However, no clear relationship was observed between organic water quality parameters and the efficiencies of these two steps. Observed concentrations of indigenous enteric adenoviruses, GII noroviruses, enteroviruses, and Aichi viruses increased during rainfall events even though the virus concentration efficiency was presumed to be lower than in dry weather. The present study highlights the importance of using appropriate process controls to evaluate accurately the concentration of waterborne enteric viruses in natural waters impacted by wastewater discharge, stormwater, and CSOs. © 2013.
Efficient method for assessing channel instability near bridges
Robinson, Bret A.; Thompson, R.E.
1993-01-01
Efficient methods for data collection and processing are required to complete channel-instability assessments at 5,600 bridge sites in Indiana at an affordable cost and within a reasonable time frame while maintaining the quality of the assessments. To provide this needed efficiency and quality control, a data-collection form was developed that specifies the data to be collected and the order of data collection. This form represents a modification of previous forms that grouped variables according to type rather than by order of collection. Assessments completed during two field seasons showed that greater efficiency was achieved by using a fill-in-the-blank form that organizes the data to be recorded in a specified order: in the vehicle, from the roadway, in the upstream channel, under the bridge, and in the downstream channel.
Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C
2018-01-01
Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which were collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region, and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
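A mixed-effects specification along these lines can be written with statsmodels. The column names, the random-intercept-plus-foot-variance-component structure, and the synthetic data below are assumptions for illustration and may differ from the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for plantar-pressure data (hypothetical names):
# 20 subjects x 2 feet x 3 regions x 3 trials, two groups.
rng = np.random.default_rng(2)
rows = []
for s in range(20):
    group = "patient" if s < 10 else "control"
    u_subj = rng.normal(0, 5)                 # subject random intercept
    for foot in ("L", "R"):
        u_foot = rng.normal(0, 3)             # foot-within-subject effect
        for region in ("heel", "mid", "fore"):
            for trial in range(3):
                rows.append(dict(subject=s, group=group, foot=foot,
                                 region=region, trial=trial,
                                 pressure=100 + u_subj + u_foot
                                          + rng.normal(0, 4)))
df = pd.DataFrame(rows)

# Random intercept per subject plus a variance component for foot within
# subject; all trials enter as repeated rows instead of being averaged.
model = smf.mixedlm("pressure ~ group * region", df, groups="subject",
                    re_formula="1", vc_formula={"foot": "0 + C(foot)"})
print(model.fit().summary())
```

The efficiency gain the abstract reports comes precisely from keeping every trial and both feet in the model rather than discarding or averaging them before analysis.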
NASA Technical Reports Server (NTRS)
Selcuk, M. K.
1977-01-01
The usefulness of vee-trough concentrators in improving the efficiency and reducing the cost of collectors assembled from evacuated tube receivers was studied in the vee-trough/vacuum tube collector (VTVTC) project. The VTVTC was analyzed rigorously, and various mathematical models were developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak, and aluminized FEP Teflon. Tests were run at temperatures ranging from 95 to 180 C. Vee-trough collector efficiencies of 35 to 40% were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Predicted daily useful heat collection and efficiency values are presented for a year's operation at temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with a complete economic evaluation.
NASA Astrophysics Data System (ADS)
Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix
2017-12-01
Estimating crop biophysical and biochemical parameters with high accuracy at low-cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known on the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites is rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters including chlorophyll content and nitrogen concentration, and biophysical parameters including Leaf Area Index (LAI), above ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Then, spectral indices/features were combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) For biochemical variable estimation, multispectral and thermal data fusion provided the best estimate for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), and RGB color information based indices and multispectral data fusion exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusion of information from all three sensors with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above mentioned plant traits of soybean from multi-sensor data fusion, ELR yields promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multiple sensor data within a machine learning framework can provide relatively accurate estimation of plant traits and provide valuable insight for high spatial precision in agriculture and plant stress assessment.
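Of the three regression techniques compared, PLSR is the most compact to sketch. The snippet below cross-validates a PLSR model on synthetic stand-ins for the fused spectral features; all shapes, names, and the component count are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: fused features per plot (hypothetical layout), e.g. RGB indices,
# multispectral indices, canopy temperature, and CSM-derived height.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))
y = X @ rng.normal(size=8) + rng.normal(0, 0.5, 60)   # stand-in for chlorophyll

pls = PLSRegression(n_components=4)
y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"cross-validated RMSE: {rmse:.3f}")
```

Swapping the estimator for an SVR, or for an extreme-learning-machine regressor (a random-hidden-layer network solved by ridge regression), reproduces the study's three-way comparison on the same feature matrix.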
Ultimate limits for quantum magnetometry via time-continuous measurements
NASA Astrophysics Data System (ADS)
Albarelli, Francesco; Rossi, Matteo A. C.; Paris, Matteo G. A.; Genoni, Marco G.
2017-12-01
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as 1/J, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling 1/J² and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
Wang, Xinxin; Lu, Xingmei; Zhou, Qing; Zhao, Yongsheng; Li, Xiaoqian; Zhang, Suojiang
2017-08-02
Refractive index is an important physical property that is widely used in separation and purification. In this study, refractive index data for ILs were collected to establish a comprehensive database comprising about 2138 data points from 1996 to 2014. A Group Contribution-Artificial Neural Network (GC-ANN) model and a Group Contribution (GC) method were employed to predict the refractive index of ILs at temperatures from 283.15 K to 368.15 K. Average absolute relative deviations (AARD) of the GC-ANN model and the GC method were 0.179% and 0.628%, respectively. The results showed that the GC-ANN model provided an effective way to estimate the refractive index of ILs, whereas the GC method was simple and extensive. In summary, both models are accurate and efficient approaches for estimating refractive indices of ILs.
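The GC-ANN idea, functional-group counts plus temperature feeding a small neural network, can be sketched with scikit-learn. The feature layout, network size, and synthetic data below are assumptions for illustration; AARD is computed as defined in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical feature layout: counts of six functional groups in the
# cation/anion plus temperature (K); targets are refractive indices.
rng = np.random.default_rng(4)
groups = rng.integers(0, 5, (300, 6)).astype(float)
temp = rng.uniform(283.15, 368.15, 300)
X = np.column_stack([groups, temp])
y = 1.4 + 0.01 * X[:, 0] - 0.0002 * (temp - 300) + rng.normal(0, 0.002, 300)

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, y)
pred = ann.predict(X)
# Average absolute relative deviation, AARD = (100/N) * sum(|pred-y|/y)
aard = 100.0 * np.mean(np.abs(pred - y) / y)
print(f"AARD = {aard:.3f}%")
```

The plain GC method replaces the network with a linear sum of fitted group contributions, which is why it is simpler but, as the abstract reports, roughly three times less accurate here.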
NASA Astrophysics Data System (ADS)
Meulstee, C.; Vanstokkom, H.
1985-01-01
The correlation between the biomass of sea grass and seaweed samples in a side branch of the Oosterschelde delta (Netherlands) and density ratios of this area on color infrared aerial photographs was investigated. As the Oosterschelde will become more separated from the North Sea after completion of the pier dam, an increase of macrophytes is expected. In an area where the weeds Ulva, Chaetomorpha, Enteromorpha, Cladophora, and Fucus vesiculosus and the grasses Zostera noltii and Zostera marina are found, 53 biomass samples of a 0.054 sq m surface each were collected. The relation between covering degree and biomass was estimated. Using a transmission densitometer adjusted to 3 to 1 mm, densities on 1:10,000 and 1:20,000 scale photographs were measured. A calibration line was determined in a density-biomass diagram. The method is shown to be useful for an efficient, accurate biomass determination in the Oosterschelde.
NASA Astrophysics Data System (ADS)
Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian
2018-03-01
Perturbations act as extra scatterers and cause coda waveform distortions; thus, coda waves, with their long propagation times and travel paths, are sensitive to micro-defects in strongly heterogeneous media such as concretes. In this paper, we applied varied external loads to a life-size concrete slab containing multiple existing micro-cracks, with a set of sources and receivers installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversions of the DC results are then applied to estimate the associated distribution density values in three-dimensional regions through a kernel sensitivity model and least-squares algorithms, which leads to images indicating the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detect internal defects and damage in large-size concrete structures.
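A decorrelation coefficient between waveforms recorded before and after a load step can be computed as one minus the normalized cross-correlation over a late coda window. The sketch below uses synthetic waveforms; the sampling rate and window parameters are arbitrary assumptions, not the experiment's values.

```python
import numpy as np

def decorrelation(u0, u1, fs, t_center, half_win):
    """DC = 1 - CC between a reference coda window and the same window
    after perturbation; CC is the zero-lag normalized cross-correlation."""
    i0 = int((t_center - half_win) * fs)
    i1 = int((t_center + half_win) * fs)
    a, b = u0[i0:i1], u1[i0:i1]
    cc = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
    return 1.0 - cc

# Illustrative: a late coda window compared before/after a load step.
fs = 1.0e6                                  # 1 MHz sampling, hypothetical
t = np.arange(int(0.002 * fs)) / fs
rng = np.random.default_rng(5)
u0 = np.exp(-t * 2000) * rng.normal(size=t.size)   # synthetic decaying coda
u1 = u0 + 0.05 * rng.normal(size=t.size)           # perturbed waveform
print(decorrelation(u0, u1, fs, t_center=0.0015, half_win=0.0002))
```

In the study, one such DC value per source-receiver pair and load step feeds the kernel-based least-squares inversion that maps decorrelation back to scatterer-density changes in 3-D.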
T7 lytic phage-displayed peptide libraries: construction and diversity characterization.
Krumpe, Lauren R H; Mori, Toshiyuki
2014-01-01
In this chapter, we describe the construction of T7 bacteriophage (phage)-displayed peptide libraries and the diversity analyses of random amino acid sequences obtained from the libraries. We used commercially available reagents, Novagen's T7Select system, to construct the libraries. Using a combination of biotinylated extension primer and streptavidin-coupled magnetic beads, we were able to prepare library DNA without applying gel purification, resulting in extremely high ligation efficiencies. Further, we describe the use of bioinformatics tools to characterize library diversity. Amino acid frequency and positional amino acid diversity and hydropathy are estimated using the REceptor LIgand Contacts website http://relic.bio.anl.gov. Peptide net charge analysis and peptide hydropathy analysis are conducted using the Genetics Computer Group Wisconsin Package computational tools. A comprehensive collection of the estimated number of recombinants and titers of T7 phage-displayed peptide libraries constructed in our lab is included.
Rosenberger, Amanda E.; Dunham, Jason B.
2005-01-01
Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
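Both estimators under comparison are closed-form in their simplest versions. The sketch below uses Chapman's bias-corrected Lincoln-Petersen estimator and the two-pass removal (Zippin) estimator, with made-up catch numbers; it also shows why declining capture probability across passes biases the removal estimate low.

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate:
    n1 marked on pass 1, n2 captured on pass 2, m2 of them recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal (Zippin) estimate; assumes constant capture
    probability per pass and requires c1 > c2."""
    if c1 <= c2:
        raise ValueError("removal model undefined when c1 <= c2")
    p = 1 - c2 / c1                       # implied per-pass capture probability
    return c1 ** 2 / (c1 - c2), p

print(lincoln_petersen(45, 50, 18))       # ~122 fish (illustrative counts)
print(two_pass_removal(60, 25))           # ~103 fish, p ~ 0.58

# If true efficiency drops on pass 2 (fish become harder to catch), the
# second catch shrinks, p is overestimated, and abundance is understated,
# which is exactly the pattern the study documents.
```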
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that can handle large heterogeneous tree collections or enable their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least-squares approximation methods using monomials and Chebyshev polynomials are compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but it further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
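The preferential-sampling mechanism described, sampling times drawn from an inhomogeneous Poisson process whose intensity depends on effective population size, can be simulated by thinning. The proportionality lambda(t) = beta * Ne(t) and the seasonal Ne below are illustrative assumptions used only to show the mechanism.

```python
import numpy as np

def sample_times_preferential(Ne, t_max, beta, rng):
    """Draw sampling times from an inhomogeneous Poisson process with
    intensity beta * Ne(t), via thinning (Lewis-Shedler algorithm)."""
    grid = np.linspace(0.0, t_max, 1000)
    lam_max = beta * max(Ne(t) for t in grid)   # dominating constant rate
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)     # candidate event
        if t > t_max:
            return np.array(times)
        if rng.uniform() < beta * Ne(t) / lam_max:
            times.append(t)                     # accept with prob lam(t)/lam_max

rng = np.random.default_rng(6)
Ne = lambda t: 50 + 40 * np.sin(2 * np.pi * t)  # seasonal Ne(t), illustrative
times = sample_times_preferential(Ne, t_max=3.0, beta=0.5, rng=rng)
print(len(times), "samples; more of them fall where Ne(t) is large")
```

Ignoring this dependence treats sampling-dense periods as informative about the genealogy alone, which is the source of the systematic bias the authors demonstrate; modeling the sampling times jointly recovers the extra information instead.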
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and ln(age), we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
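Because the final model comes from log-linear least squares, the fit reduces to a one-line polynomial fit on ln(age). The (Q, age) pairs below are hypothetical stand-ins, not the paper's calibration sample.

```python
import numpy as np

# Hypothetical (Q, age/Gyr) pairs standing in for the calibration stars.
Q = np.array([0.1, 0.3, 0.5, 0.8, 1.1, 1.4])
age = np.array([0.6, 1.1, 1.9, 3.5, 6.0, 10.2])

# Log-linear least squares: ln(age) = ln(a) + b*Q  <=>  age = a * exp(b*Q)
b, ln_a = np.polyfit(Q, np.log(age), 1)
a = np.exp(ln_a)
print(f"age ~ {a:.2f} * exp({b:.2f} * Q) Gyr")
```

Applying the fitted relation to a new star then needs only its GALEX FUV magnitude, B-V color, and the fiducial upper bound to form Q, which is what makes the method cheaper than spectroscopic age indicators.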
Hong-Geller, E; Valdez, Y E; Shou, Y; Yoshida, T M; Marrone, B L; Dunbar, J M
2010-04-01
We sought to validate sample collection methods for recovery of microbial evidence in the event of accidental or intentional release of biological agents into the environment. We evaluated the sample recovery efficiencies of two collection methods - swabs and wipes - for both nonvirulent and virulent strains of Bacillus anthracis and Yersinia pestis from four types of nonporous surfaces: two hydrophilic surfaces, stainless steel and glass, and two hydrophobic surfaces, vinyl and plastic. Sample recovery was quantified using real-time qPCR to assay for intact DNA signatures. We found no consistent difference in collection efficiency between swabs and wipes. Furthermore, collection efficiency was more surface-dependent for virulent strains than for nonvirulent strains. For the two nonvirulent strains, collection efficiency was similar across all four surfaces, although B. anthracis Sterne exhibited higher levels of recovery than Y. pestis A1122. In contrast, recovery of B. anthracis Ames spores and Y. pestis CO92 from the hydrophilic glass or stainless steel surfaces was generally more efficient than collection from the hydrophobic vinyl and plastic surfaces. Our results suggest that surface hydrophobicity may play a role in the strength of pathogen adhesion. The surface-dependent collection efficiencies observed with the virulent strains may arise from strain-specific expression of capsular material or other cell surface receptors that alter cell adhesion to specific surfaces. These findings contribute to the validation of standard bioforensics procedures and emphasize the importance of specific strain and surface interactions in pathogen detection.
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
NASA Astrophysics Data System (ADS)
Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.
2017-03-01
Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.
Estimating nutrient uptake requirements for soybean using QUEFTS model in China
Yang, Fuqiang; Xu, Xinpeng; Wang, Wei; Ma, Jinchuan; Wei, Dan; He, Ping; Pampolino, Mirasol F.; Johnston, Adrian M.
2017-01-01
Estimating balanced nutrient requirements for soybean (Glycine max [L.] Merr) in China is essential for identifying optimal fertilizer application regimes to increase soybean yield and nutrient use efficiency. We collected datasets from field experiments in major soybean planting regions of China between 2001 and 2015 to assess the relationship between soybean seed yield and nutrient uptake, and to estimate nitrogen (N), phosphorus (P), and potassium (K) requirements for a target yield of soybean using the quantitative evaluation of the fertility of tropical soils (QUEFTS) model. The QUEFTS model predicted a linear-parabolic-plateau curve for balanced nutrient uptake as the target yield increased from 3.0 to 6.0 t/ha, with the linear part continuing until the yield reached about 60-70% of the potential yield. To produce 1000 kg seed of soybean in China, 55.4 kg N, 7.9 kg P, and 20.1 kg K (N:P:K = 7:1:2.5) were required in the above-ground parts, and the corresponding internal efficiencies (IE, kg seed yield per kg nutrient uptake) were 18.1, 126.6, and 49.8 kg seed per kg N, P, and K, respectively. The QUEFTS model also simulated balanced N, P, and K removal by seed of 48.3, 5.9, and 12.2 kg per 1000 kg seed, respectively, accounting for 87.1%, 74.1%, and 60.8% of the uptake in the total above-ground parts, respectively. These results support fertilizer recommendations that improve the seed yield of soybean and avoid excessive or deficient nutrient supplies. Field validation indicated that the QUEFTS model could be used to estimate nutrient requirements and help develop fertilizer recommendations for soybean. PMID:28498839
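The reported internal efficiencies follow directly from the per-tonne uptake figures, as this quick check shows (all numbers taken from the abstract itself):

```python
# Internal efficiency (IE) is seed yield per unit nutrient uptake, so it is
# simply 1000 kg seed divided by the uptake required per 1000 kg seed.
uptake_per_tonne = {"N": 55.4, "P": 7.9, "K": 20.1}   # kg nutrient / 1000 kg seed
for nutrient, uptake in uptake_per_tonne.items():
    print(nutrient, round(1000.0 / uptake, 1), "kg seed per kg", nutrient)
# -> N 18.1, P 126.6, K 49.8, matching the reported IE values
```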
NASA Astrophysics Data System (ADS)
Yano, S.; Kondo, H.; Tawara, Y.; Yamada, T.; Mori, K.; Yoshida, A.; Tada, K.; Tsujimura, M.; Tokunaga, T.
2017-12-01
It is important to understand groundwater systems, including their recharge, flow, storage, discharge, and withdrawal, so that we can use groundwater resources efficiently and sustainably. To examine groundwater recharge, several methods have been discussed based on water balance estimation, in situ experiments, and hydrological tracers. However, few studies have developed a concrete framework for quantifying groundwater recharge rates in an undefined area. In this study, we established a robust method to quantitatively determine water cycles and estimate the groundwater recharge rate by combining the advantages of field surveys and model simulations. We combined in situ hydrogeological observations with three-dimensional modeling in a mountainous basin area in Japan. We adopted a general-purpose terrestrial fluid-flow simulator (GETFLOWS) to develop a geological model and simulate the local water cycle. Local data relating to topography, geology, vegetation, land use, climate, and water use were collected from the existing literature and observations to assess the spatiotemporal variations of the water balance from 2011 to 2013. The characteristic structures of geology and soils, as found through field surveys, were parameterized for incorporation into the model. The simulated results were validated using observed groundwater levels and resulted in a Nash-Sutcliffe model efficiency coefficient of 0.92. The results suggested that local groundwater flows across the watershed boundary and that the groundwater recharge rate, defined as the flux of water reaching the local unconfined groundwater table, has values similar to the long-term level estimated in the lower soil layers. This innovative method enables us to quantify the groundwater recharge rate and its spatiotemporal variability with high accuracy, which contributes to establishing a foundation for sustainable groundwater management.
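The validation statistic quoted, the Nash-Sutcliffe model efficiency coefficient, is NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²; a value of 1 is a perfect fit and 0 means the model is no better than the observed mean. A minimal implementation, with illustrative groundwater levels rather than the study's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

heads_obs = [12.1, 12.4, 12.9, 13.2, 12.8]   # groundwater levels (m), illustrative
heads_sim = [12.0, 12.5, 12.8, 13.3, 12.9]
print(round(nash_sutcliffe(heads_obs, heads_sim), 3))
```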
Ellison, Christopher A.; Groten, Joel T.; Lorenz, David L.; Koller, Karl S.
2016-10-27
Consistent and reliable sediment data are needed by Federal, State, and local government agencies responsible for monitoring water quality, planning river restoration, quantifying sediment budgets, and evaluating the effectiveness of sediment reduction strategies. Heightened concerns about excessive sediment in rivers and the challenge to reduce costs and eliminate data gaps have guided Federal and State interests in pursuing alternative methods for measuring suspended and bedload sediment. Simple and dependable data collection and estimation techniques are needed to generate hydraulic and water-quality information for areas where data are unavailable or difficult to collect. The U.S. Geological Survey, in cooperation with the Minnesota Pollution Control Agency and the Minnesota Department of Natural Resources, completed a study to evaluate the use of dimensionless sediment rating curves (DSRCs) to accurately predict suspended-sediment concentrations (SSCs), bedload, and annual sediment loads for selected rivers and streams in Minnesota based on data collected during 2007 through 2013. This study included the application of DSRC models developed for a small group of streams located in the San Juan River Basin near Pagosa Springs in southwestern Colorado to rivers in Minnesota. Regionally based DSRC models for Minnesota also were developed and compared to the DSRC models from Pagosa Springs, Colorado, to evaluate which model provided more accurate predictions of SSCs and bedload in Minnesota. Multiple measures of goodness-of-fit were developed to assess the effectiveness of DSRC models in predicting SSC and bedload for rivers in Minnesota. More than 600 dimensionless ratio values of SSC, bedload, and streamflow were evaluated and delineated according to Pfankuch stream stability categories of "good/fair" and "poor" to develop four Minnesota-based DSRC models. Model effectiveness was assessed using measures of goodness-of-fit that included the proximity of each model's fitted line to the 95-percent confidence intervals of the site-specific model, Nash-Sutcliffe Efficiency values, model biases, and the deviation of annual sediment loads from each model relative to the annual sediment loads calculated from measured data. Composite plots comparing Pagosa Springs DSRCs, Minnesota DSRCs, site-specific regression models, and measured data indicated that the regionally developed DSRCs (Minnesota DSRC models) more closely approximated measured data for nearly every site. Pagosa Springs DSRC models had markedly larger exponents (slopes) than the Minnesota DSRC models and the site-specific regression models and overrepresented SSC and bedload at streamflows exceeding bankfull. The Nash-Sutcliffe Efficiency values for the Minnesota DSRC model for suspended-sediment concentrations closely matched those of the site-specific regression models for 12 of 16 sites. Nash-Sutcliffe Efficiency values associated with Minnesota DSRCs were greater than those associated with Pagosa Springs DSRCs for every site except the Whitewater River near Beaver, Minnesota, site. Pagosa Springs DSRC models were less accurate than the mean of the measured data at predicting SSC values for one-half of the good/fair stability sites and one-half of the poor stability sites.
Relative model biases were calculated and determined to be substantial (greater than 5 percent) for both the Pagosa Springs and Minnesota models, with the Minnesota models having a lower mean model bias. For predicted annual suspended-sediment loads (SSLs), the Minnesota DSRC models for good/fair and poor stream stability sites more closely approximated the annual SSLs calculated from the measured data than did the Pagosa Springs DSRC model. Results of the data analyses indicate that DSRC models developed using data collected in Minnesota were more effective at compensating for differences in individual stream characteristics across a variety of basin sizes and flow regimes than DSRC models developed using data collected for Pagosa Springs, Colorado. Minnesota DSRC models retained a substantial portion of the unique sediment signatures for most rivers, although deviations were observed for streams with limited sediment supply and for rivers in southeastern Minnesota, which had markedly larger regression exponents. Compared to the Pagosa Springs DSRC models, the Minnesota DSRC models had regression slopes that more closely matched the slopes of the site-specific regression models, had greater Nash-Sutcliffe Efficiency values, had lower model biases, and approximated measured annual sediment loads more closely. The results presented in this report indicate that regionally based DSRCs can be used to estimate reasonably accurate values of SSC and bedload. Practitioners are cautioned that DSRC reliability depends on representative measures of bankfull streamflow, SSC, and bedload. It is, therefore, important that the samples of SSC and bedload used for estimating SSC and bedload at the bankfull streamflow are collected over a range of conditions that includes the ascending and descending limbs of the event hydrograph. The use of DSRC models may have substantial limitations under certain conditions. For example, DSRC models should not be used to predict SSC and sediment loads for extreme streamflows, such as those that exceed twice the bankfull streamflow value, because this constitutes conditions beyond the realm of current (2016) empirical modeling capability. Also, if the relations between SSC and streamflow and between bedload and streamflow are not statistically significant, DSRC models should not be used to predict SSC or bedload, as this could result in large errors. For streams that do not violate these conditions, DSRC estimates of SSC and bedload can be used for stream restoration planning and design, and for estimating annual sediment loads for streams where little or no sediment data are available.
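The core of a dimensionless sediment rating curve is a power law fitted to streamflow and sediment values normalized by their bankfull magnitudes. A minimal sketch of that idea with invented numbers (the report's actual model development and goodness-of-fit testing are far more involved):

import numpy as np

# Hypothetical site data: streamflow Q (m^3/s) and suspended-sediment
# concentration SSC (mg/L), with bankfull values measured at the site.
Q = np.array([2.0, 5.0, 9.0, 14.0, 22.0])
SSC = np.array([15.0, 60.0, 140.0, 260.0, 520.0])
Q_bankfull, SSC_bankfull = 18.0, 400.0

# Dimensionless ratios; a DSRC is a power law SSC* = a * (Q*)^b fitted
# in log space, so b is the "exponent (slope)" compared in the report.
q_star, s_star = Q / Q_bankfull, SSC / SSC_bankfull
b, log_a = np.polyfit(np.log(q_star), np.log(s_star), 1)
a = np.exp(log_a)

def predict_ssc(q, q_bf, ssc_bf, a=a, b=b):
    """Transfer the dimensionless curve to a new site using only that
    site's bankfull streamflow and bankfull SSC."""
    return ssc_bf * a * (q / q_bf) ** b

print(round(a, 3), round(b, 3), round(predict_ssc(10.0, q_bf=18.0, ssc_bf=400.0), 1))

As the report cautions, such a curve should not be extrapolated beyond roughly twice the bankfull streamflow.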
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out covering four scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each case with interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than plain pseudolikelihood estimators, and estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
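One common way to generate survival times with a time-varying covariate, as the simulation study above requires, is to invert a piecewise-constant cumulative hazard. The sketch below does this for a single binary covariate under a Cox-type hazard; the paper presents its own, more general method, so treat this only as the basic idea:

import numpy as np

rng = np.random.default_rng(1)

def sim_time(h0, beta, t_switch):
    """Survival time under hazard h0*exp(beta*Z(t)), where the binary
    covariate Z(t) switches from 0 to 1 at t_switch (e.g. exposure
    onset). Inverts the piecewise-linear cumulative hazard at a unit
    exponential draw."""
    e = rng.exponential(1.0)
    h_pre, h_post = h0, h0 * np.exp(beta)
    if e < h_pre * t_switch:          # event occurs before the switch
        return e / h_pre
    return t_switch + (e - h_pre * t_switch) / h_post

times = [sim_time(h0=0.05, beta=0.7, t_switch=rng.uniform(0, 10)) for _ in range(5)]
print(np.round(times, 2))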
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
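The ingredients the hybrid method combines are GEE fits of the same marginal model under different working correlation structures. A sketch of those component fits using statsmodels on simulated longitudinal data (the empirical-likelihood combination step itself is not shown, and the data are invented):

import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

# Hypothetical longitudinal data: 4 visits for each of 50 subjects,
# with a shared subject effect inducing within-subject correlation.
rng = np.random.default_rng(0)
n, t = 50, 4
ids = np.repeat(np.arange(n), t)
x = rng.normal(size=n * t)
u = np.repeat(rng.normal(scale=0.5, size=n), t)
y = 1.0 + 0.5 * x + u + rng.normal(size=n * t)
X = sm.add_constant(x)

# Fit the same marginal model under different working correlations;
# these are the component estimating equations a hybrid method would
# combine into a single, more efficient estimate.
for cov in (Independence(), Exchangeable()):
    fit = sm.GEE(y, X, groups=ids, cov_struct=cov).fit()
    print(type(cov).__name__, np.round(fit.params, 3))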
76 FR 47566 - Agency Information Collection Extension; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Agency Information Collection Extension; Correction AGENCY: Office of Energy Efficiency and Renewable Energy, U.S. Department of... INFORMATION CONTACT: Benjamin Goldstein, Buy American Coordinator, Office of Energy Efficiency and Renewable...
SU-C-201-03: Ionization Chamber Collection Efficiency in Pulsed Radiation Fields of High Pulse Dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotz, M; Karsch, L; Pawelke, J
Purpose: To investigate the reduction of the collection efficiency of ionization chambers (IC) by volume recombination, and its correction, in pulsed fields of very high pulse dose. Methods: Measurements of the collection efficiency of a plane-parallel Advanced Markus IC (PTW 34045, 1 mm electrode spacing, 300 V nominal voltage) were obtained at collection voltages of 100 V and 300 V by irradiation with a pulsed electron beam (20 MeV) of varied pulse dose up to approximately 600 mGy (0.8 nC liberated charge). A reference measurement was performed with a Faraday cup behind the chamber; it was calibrated to the charge liberated in the IC by a linear fit of the IC measurements to the reference measurements at low pulse doses. The results were compared to the commonly used two-voltage approximation (TVA) and to established theories of volume recombination, with and without considering a fraction of free electrons. In addition, an equation system describing the charge transport and reactions in the chamber was solved numerically. Results: At 100 V collection voltage and moderate pulse doses the established theories accurately predict the observed collection efficiency, but at extreme pulse doses a fraction of free electrons needs to be considered. At 300 V the observed collection efficiency deviates distinctly from that predicted by any of the established theories, even at low pulse doses. However, the numerical solution of the equation system is able to reproduce the measured collection efficiency across the entire dose range at both voltages with a single set of parameters. Conclusion: At high electric fields (3000 V/cm here) the existing theoretical descriptions of collection efficiency, including the TVA, are inadequate to predict the pulse dose dependency; even at low pulse doses they may underestimate collection efficiency. The presented, more accurate numerical solution, which considers additional effects such as electric shielding by the charges, may provide a valuable tool for future investigations. This project was funded by the German Federal Ministry of Education and Research (BMBF) under grant number 03Z1N511 and by the state of Saxony under grant number B 209.
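For reference, the two-voltage approximation mentioned above is, for pulsed beams, a simple ratio formula (the form used in standard dosimetry protocols such as AAPM TG-51); the abstract's point is precisely that it breaks down at high pulse dose. A sketch with hypothetical chamber readings:

def two_voltage_pion_pulsed(m_high, m_low, v_high, v_low):
    """Two-voltage analysis for pulsed beams: recombination correction
    P_ion at the higher voltage, with collection efficiency roughly
    1 / P_ion. Valid only when volume recombination is small and free
    electrons are negligible -- the regime shown above to fail at very
    high pulse dose."""
    v_ratio = v_high / v_low
    m_ratio = m_high / m_low
    return (v_ratio - 1.0) / (v_ratio - m_ratio)

# Hypothetical readings (nC) at 300 V and 100 V
p_ion = two_voltage_pion_pulsed(m_high=20.15, m_low=19.70, v_high=300.0, v_low=100.0)
print(round(p_ion, 4), round(1.0 / p_ion, 4))  # correction, approx. efficiency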
[Strategies and development of quality assurance and control in the ELSA-Brasil].
Schmidt, Maria Inês; Griep, Rosane Härter; Passos, Valéria Maria; Luft, Vivian Cristine; Goulart, Alessandra Carvalho; Menezes, Greice Maria de Souza; Molina, Maria del Carmen Bisi; Vigo, Alvaro; Nunes, Maria Angélica
2013-06-01
The ELSA-Brasil (Estudo Longitudinal de Saúde do Adulto - Brazilian Longitudinal Study for Adult Health) is a cohort study composed of 15,105 adults followed up in order to assess the development of chronic diseases, especially diabetes and cardiovascular disease. Its size, multicenter nature and the diversity of measurements required effective and efficient mechanisms of quality assurance and control. The main quality assurance activities (those developed before data collection) were: careful selection of research instruments, centralized training and certification, pretesting and pilot studies, and preparation of operation manuals for the procedures. Quality control activities (developed during data collection and processing) were performed more intensively at the beginning, when routines had not been established yet. The main quality control activities were: periodic observation of technicians, test-retest studies, data monitoring, network of supervisors, and cross visits. Data that estimate the reliability of the obtained information attest that the quality goals have been achieved.
Numerical simulations of regolith sampling processes
NASA Astrophysics Data System (ADS)
Schäfer, Christoph M.; Scherrer, Samuel; Buchwald, Robert; Maindl, Thomas I.; Speith, Roland; Kley, Wilhelm
2017-07-01
We present recent improvements in the simulation of regolith sampling processes in microgravity using the numerical particle method smoothed particle hydrodynamics (SPH). We use an elastic-plastic soil constitutive model for large-deformation and failure flows to capture the dynamical behaviour of regolith. In the context of projected small-body (asteroid or small moon) sample-return missions, we investigate the efficiency and feasibility of a particular material sampling method: brushes sweep material from the asteroid's surface into a collecting tray. We analyze the influence of different material parameters of regolith, such as cohesion and angle of internal friction, on the sampling rate. Furthermore, we study the sampling process in two environments by varying the surface gravity (Earth's and Phobos') and apply different rotation rates for the brushes. We find good agreement between our Earth-based sampling simulations and experiments, and provide estimates of the influence of the material properties on the collection rate.
Koroiva, Ricardo; Pepinelli, Mateus; Rodrigues, Marciel Elio; Roque, Fabio de Oliveira; Lorenz-Lemke, Aline Pedroso; Kvist, Sebastian
2017-01-01
We present a DNA barcoding study of Neotropical odonates from the Upper Plata basin, Brazil. A total of 38 species were collected in a transition region of "Cerrado" and Atlantic Forest, both regarded as biological hotspots, and 130 cytochrome c oxidase subunit I (COI) barcodes were generated for the collected specimens. The distinct gap between intraspecific (0-2%) and interspecific variation (15% and above) in COI, and resulting separation of Barcode Index Numbers (BIN), allowed for successful identification of specimens in 94% of cases. The 6% fail rate was due to a shared BIN between two separate nominal species. DNA barcoding, based on COI, thus seems to be a reliable and efficient tool for identifying Neotropical odonate specimens down to the species level. These results underscore the utility of DNA barcoding to aid specimen identification in diverse biological hotspots, areas that require urgent action regarding taxonomic surveys and biodiversity conservation.
Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Marinica, Mihai-Cosmin
2018-03-01
The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when possible, the requirement of descriptors for each mechanism under study prevents the implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method are demonstrated on a complex transformation of C15 interstitial defects in iron and on double kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120,000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.
Analysis of thematic mapper simulator data collected over eastern North Dakota
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1982-01-01
The results of the analysis of aircraft-acquired thematic mapper simulator (TMS) data, collected to investigate the utility of thematic mapper data in crop area and land cover estimates, are discussed. Results of the analysis indicate that the seven-channel TMS data are capable of delineating the 13 crop types included in the study to an overall pixel classification accuracy of 80.97% correct, with relative efficiencies for four crop types examined between 1.62 and 26.61. Both supervised and unsupervised spectral signature development techniques were evaluated. The unsupervised methods proved to be inferior (based on analysis of variance) for the majority of crop types considered. Given the ground truth data set used for spectral signature development as well as evaluation of performance, it is possible to demonstrate which signature development technique would produce the highest percent correct classification for each crop type.
Packaging and distributing ecological data from multisite studies
NASA Technical Reports Server (NTRS)
Olson, R. J.; Voorhees, L. D.; Field, J. M.; Gentry, M. J.
1996-01-01
Studies of global change and other regional issues depend on ecological data collected at multiple study areas or sites. An information system model is proposed for compiling diverse data from dispersed sources so that the data are consistent, complete, and readily available. The model includes investigators who collect and analyze field measurements, science teams that synthesize data, a project information system that collates data, a data archive center that distributes data to secondary users, and a master data directory that provides broader searching opportunities. Special attention to format consistency is required, such as units of measure, spatial coordinates, dates, and notation for missing values. Often data may need to be enhanced by estimating missing values, aggregating to common temporal units, or adding other related data such as climatic and soils data. Full documentation, an efficient data distribution mechanism, and an equitable way to acknowledge the original source of data are also required.
NASA Astrophysics Data System (ADS)
Hoffman, Kenneth J.
1995-10-01
Few information systems create a standardized clinical patient record containing discrete and concise observations of patient problems and their resolution. Clinical notes are usually narratives that do not support aggregate and systematic outcome analysis. Many programs collect information on diagnoses and coded procedures but are not focused on patient problems. Integrated definition (IDEF) methodology has been accepted by the Department of Defense as part of the Corporate Information Management Initiative and serves as the foundation that establishes a need for automation. We used IDEF modeling to describe present and idealized patient care activities, and a logical IDEF data model was created to support those activities. The modeling process allows for accurate cost estimates based upon performed activities, efficient collection of relevant information, and outputs that allow real-time assessment of process and outcomes. This model forms the foundation for a prototype automated clinical information system (ACIS).
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification, and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations of words or documents. However, its feasibility and effectiveness in information retrieval are largely unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations of documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections; the results show that it consistently outperforms existing Wikipedia-based retrieval methods.
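The combination step can be pictured as a linear interpolation of a bag-of-words document language model with a concept-based one when scoring a query. A minimal sketch (the probabilities and the weight lam are invented, and the paper's actual estimation details may differ):

import math

def score(query_terms, p_bow, p_concept, lam=0.7):
    """Query log-likelihood under a document model that linearly
    interpolates a bag-of-words language model with a concept-based
    (e.g. SSA-derived) language model. p_bow and p_concept map
    term -> P(w|d); lam is a tuning parameter."""
    s = 0.0
    for w in query_terms:
        p = lam * p_bow.get(w, 1e-6) + (1.0 - lam) * p_concept.get(w, 1e-6)
        s += math.log(p)
    return s

# Hypothetical smoothed models for one document
p_bow = {"semantic": 0.02, "retrieval": 0.01}
p_concept = {"semantic": 0.05, "retrieval": 0.03, "wikipedia": 0.02}
print(round(score(["semantic", "retrieval"], p_bow, p_concept), 3))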
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol; Shah, Nihar; Abhyankar, Nikit
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerant, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of its key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously considering the significant benefits to consumers, energy security, and the environment.
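The consumer payback calculation described above reduces to dividing the incremental retail price by the annual bill savings. A sketch with invented numbers, not the report's data:

def simple_payback_years(incremental_price, annual_kwh_saved, tariff_per_kwh):
    """Years for electricity-bill savings to repay the extra retail
    price of a more efficient AC (undiscounted)."""
    return incremental_price / (annual_kwh_saved * tariff_per_kwh)

# Hypothetical: a more efficient unit costs 3000 rupees extra and
# saves 350 kWh/year at a tariff of 8 rupees/kWh.
print(round(simple_payback_years(3000.0, 350.0, 8.0), 2))  # ~1.07 years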
Novignon, Jacob; Nonvignon, Justice
2017-06-12
Health centers in Ghana play an important role in health care delivery, especially in deprived communities, where they usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase the resources committed to primary healthcare, it is important to understand the nature of the inefficiencies that exist in these facilities. The objectives of this study are therefore threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency, and (iii) investigate efficiency disparities between public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic frontier analysis (SFA) was used to estimate the efficiency of health facilities, and the efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as the output, while the numbers of personnel and hospital beds and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers in the sample was estimated to be 0.51, with averages of about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency were improved. We also found that the fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private and public facilities. There is a need for primary health facility managers to improve productivity via effective and efficient resource use. Efforts to improve efficiency should focus on training health workers and improving the facility environment alongside effective monitoring and evaluation exercises.
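SFA, as used above, fits a production frontier with a composed error: symmetric noise plus a one-sided inefficiency term. A sketch of maximum-likelihood estimation for the classic half-normal specification on simulated data (the study's actual model, inputs, and data differ):

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def sfa_neg_loglik(theta, y, X):
    """Negative log-likelihood of a production frontier
    y = X beta + v - u, with noise v ~ N(0, sigma_v^2) and
    inefficiency u ~ half-normal(sigma_u^2) (Aigner-Lovell-Schmidt)."""
    k = X.shape[1]
    beta, log_sv, log_su = theta[:k], theta[k], theta[k + 1]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

# Hypothetical facilities: log outpatient visits vs one log input
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.6]) + rng.normal(0, 0.2, n) - np.abs(rng.normal(0, 0.4, n))
res = minimize(sfa_neg_loglik, x0=np.array([0.5, 0.5, np.log(0.3), np.log(0.3)]),
               args=(y, X), method="Nelder-Mead", options={"maxiter": 2000})
print(np.round(res.x[:2], 3))  # frontier coefficients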
Solid State Lasers from an Efficiency Perspective
NASA Technical Reports Server (NTRS)
Barnes, Norman P.
2007-01-01
Solid state lasers have remained a vibrant area of research because several major innovations have expanded their capability. These major innovations are presented with emphasis on laser efficiency. A product-of-efficiencies approach is developed and applied to describe laser performance: efficiency factors are presented in closed form where practical, energy transfer effects are included where needed, and the factors are then used to estimate threshold and slope efficiency, allowing a facile estimate of performance. Spectroscopic, thermal, and mechanical data are provided for common solid state laser materials.
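The product-of-efficiencies approach can be made concrete: multiply the individual factors to approximate the slope efficiency, then use the usual linear above-threshold relation for output power. The factor names and values below are illustrative placeholders, not the paper's:

import numpy as np

factors = {
    "pump_transfer":  0.90,   # pump light reaching the gain medium
    "absorption":     0.80,   # fraction of pump light absorbed
    "quantum_defect": 0.76,   # laser photon energy / pump photon energy
    "extraction":     0.70,   # stored energy extracted by the laser mode
}
slope_efficiency = np.prod(list(factors.values()))

def output_power(p_pump, p_threshold, eta=slope_efficiency):
    """Linear above-threshold relation: P_out = eta_slope * (P_pump - P_th)."""
    return max(0.0, eta * (p_pump - p_threshold))

print(round(slope_efficiency, 3), round(output_power(10.0, 2.0), 2))  # W, hypothetical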
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-15
... of Veteran Enrollees (Quality and Efficiency of VA Health Care)) Activity; Comment Request AGENCY... of Veteran Enrollees (Quality and Efficiency of VA Health Care), VA Form 10-21088. OMB Control Number... will be used to collect data that is necessary to promote quality and efficient delivery of health care...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... of Veteran Enrollees (Quality and Efficiency of VA Health Care)) Activities Under OMB Review AGENCY... of Veteran Enrollees (Quality and Efficiency of VA Health Care), VA Form 10-21088. OMB Control Number... will be used to collect data that is necessary to promote quality and efficient delivery of health care...
Jehu-Appiah, Caroline; Sekidde, Serufusa; Adjuik, Martin; Akazili, James; Almeida, Selassi D; Nyonator, Frank; Baltussen, Rob; Asbu, Eyob Zere; Kirigia, Joses Muthuri
2014-04-08
In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals comprising 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores were regressed against the hospital ownership variable using a Tobit model. This was a retrospective study. In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9%, and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%), followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also had the lowest mean technical efficiency scores (27.4%), implying they include some of the most inefficient hospitals. Regarding regional performance, Northern region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. It would be prudent for policy-makers to examine the least efficient hospitals to correct widespread inefficiency. This would include reconsidering the number of hospitals and their distribution, improving efficiency and reducing duplication by closing or scaling down hospitals with efficiency scores below a certain threshold. For private hospitals with inefficiency related to large size, there is a need to break down such hospitals into manageable sizes.
Ownership and technical efficiency of hospitals: evidence from Ghana using data envelopment analysis
2014-01-01
Background In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. Methods In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals comprising of 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores are regressed against hospital ownership variable using a Tobit model. This was a retrospective study. Results In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9% and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%) followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also got the lowest mean technical efficiency scores (27.4%), implying they have some of the most inefficient hospitals. Regarding regional performance, Northern region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found out that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. Conclusions It would be prudent for policy-makers to examine the least efficient hospitals to correct widespread inefficiency. This would include reconsidering the number of hospitals and their distribution, improving efficiency and reducing duplication by closing or scaling down hospitals with efficiency scores below a certain threshold. For private hospitals with inefficiency related to large size, there is a need to break down such hospitals into manageable sizes. PMID:24708886
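The first-stage DEA efficiency scores described above come from solving one small linear program per hospital. A sketch of the input-oriented, variable-returns-to-scale formulation with scipy (the data are invented; the study used 128 hospitals and a Tobit second stage not shown here):

import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(X, Y, j0):
    """Input-oriented VRS DEA efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Minimize theta s.t. the peer combination uses at most theta times
    j0's inputs, produces at least j0's outputs, and weights sum to 1."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimize theta
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])  # sum lam*x <= theta*x0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum lam*y >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[j0]])
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)  # VRS constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical hospitals: inputs (beds, staff), output (outpatient visits)
X = np.array([[20, 50], [30, 40], [25, 70], [40, 90]], dtype=float)
Y = np.array([[1000], [1200], [900], [1500]], dtype=float)
print([round(dea_vrs_input(X, Y, j), 3) for j in range(len(X))])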
A simulation study of turbofan engine deterioration estimation using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Lambert, Heather H.
1991-01-01
Deterioration of engine components may cause off-normal engine operation. The result is an unnecessary loss of performance, because fixed control schedules are designed to accommodate a wide range of engine health and may not be optimal for a deteriorated engine. This problem may be solved by including a measure of deterioration in determining the control variables. These engine deterioration parameters usually cannot be measured directly but can be estimated. A Kalman filter design is presented for estimating two performance parameters that account for engine deterioration: the high and low pressure turbine delta efficiencies, which model variations of the high and low pressure turbine efficiencies from nominal values. The filter has a design condition of Mach 0.90, 30,000 ft altitude, and 47 deg power lever angle (PLA). It was evaluated using a nonlinear simulation of the F100 engine model derivative (EMD) engine, at the design Mach number and altitude, over a PLA range of 43 to 55 deg. It was found that known high pressure turbine delta efficiencies of -2.5 percent and low pressure turbine delta efficiencies of -1.0 percent can be estimated with an accuracy of + or - 0.25 percent efficiency with a Kalman filter. If both the high and low pressure turbines are deteriorated, delta efficiencies of -2.5 percent to both turbines can be estimated with the same accuracy.
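The estimation problem is a natural fit for a Kalman filter with the two delta efficiencies as slowly varying states. A toy linear sketch (the measurement matrix H and the noise levels are invented; the actual filter was designed around the F100 EMD nonlinear simulation):

import numpy as np

rng = np.random.default_rng(2)

# State: high- and low-pressure turbine delta efficiencies (percent),
# modeled as a slow random walk; measurements are two engine outputs
# assumed to respond linearly to the deltas through H.
x_true = np.array([-2.5, -1.0])
H = np.array([[1.2, 0.3],
              [0.4, 0.9]])
Q = 1e-6 * np.eye(2)        # process noise: deterioration drifts slowly
R = 0.05 * np.eye(2)        # measurement noise

x_hat = np.zeros(2)         # initial estimate: healthy engine
P = np.eye(2)               # initial covariance

for _ in range(200):
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    P = P + Q                                   # predict (identity dynamics)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_hat = x_hat + K @ (z - H @ x_hat)         # update state estimate
    P = (np.eye(2) - K @ H) @ P                 # update covariance

print(np.round(x_hat, 2))   # should approach [-2.5, -1.0]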
Modeling qRT-PCR dynamics with application to cancer biomarker quantification.
Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A
2017-01-01
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low-abundance transcripts. The critical step for quantification is accurate estimation of the efficiency needed to compute relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling the dynamics of polymerase chain reaction amplification; in contrast, only models of fluorescence intensity as a function of PCR cycle have been used for quantification so far. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation (ODE) model, and the fitted ODE model is used to obtain the effective PCR efficiency estimates needed for efficiency-adjusted quantification. The proposed efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed PCR efficiency estimates provided clinically meaningful results for the association between time to recurrence and longitudinal trends in GUCY2C expression.
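To see how amplification dynamics translate into cycle-dependent efficiency, a toy stand-in is a logistic ODE in cycle number, from which a per-cycle efficiency can be read off. This is only illustrative; the paper models the efficiency dynamics directly with its own ODE:

import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth of product F with cycle number n: dF/dn = r*F*(1 - F/K).
# Cycle efficiency E(n) = F(n+1)/F(n) - 1 then decays from ~e^r - 1
# toward 0 as reagents deplete.
r, K, f0 = np.log(1.95), 100.0, 1e-4   # ~95% efficiency in early cycles

sol = solve_ivp(lambda n, f: r * f * (1 - f / K), (0, 45), [f0],
                t_eval=np.arange(46), rtol=1e-8)
F = sol.y[0]
eff = F[1:] / F[:-1] - 1.0
print(np.round(eff[[5, 20, 30, 40]], 3))   # efficiency early vs late cycles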
2015-01-22
applications in fast single photon sources, quantum repeater circuitry, and high fidelity remote entanglement of atoms for quantum information protocols. We...fluorescence for motion/force sensors through Doppler velocimetry; and for the efficient collection of single photons from trapped ions...
Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012.
Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed
2015-01-01
Assessment of hospitals' performance in achieving their goals is a basic necessity, and measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. The required data, such as human and capital resource information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with the DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with the FRONTIER 4.1 software. According to the DEA method, the average technical, managerial (pure) and scale efficiencies of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; they changed constantly. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, the hospitals that do not operate efficiently have the capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through optimal allocation of resources, substantial economies of scale could be achieved in most of the studied hospitals.
Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012
Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed
2015-01-01
Background: Assessment of hospitals' performance in achieving their goals is a basic necessity, and measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. Methods: This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. The required data, such as human and capital resource information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with the DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with the FRONTIER 4.1 software. Results: According to the DEA method, the average technical, managerial (pure) and scale efficiencies of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; they changed constantly. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. Conclusion: This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, the hospitals that do not operate efficiently have the capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through optimal allocation of resources, substantial economies of scale could be achieved in most of the studied hospitals. PMID:26793657
Design of A Cyclone Separator Using Approximation Method
NASA Astrophysics Data System (ADS)
Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee
2017-12-01
A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is its collection efficiency, which in this study is predicted by computational fluid dynamics (CFD) analysis. This research defines six shape design variables and sets the collection efficiency as the objective function to be maximized in the optimization process. Since each CFD analysis requires substantial computation time, obtaining the optimal solution by directly coupling a gradient-based optimization algorithm to the CFD analysis is impractical. Two approximation methods are therefore introduced to obtain an optimum design: an L18 orthogonal array is adopted as the design-of-experiments (DOE) method, and a kriging interpolation method is used to generate a metamodel of the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through analysis of variance (ANOVA). The final design is suggested considering the results obtained from the two approximation methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
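The DOE-plus-kriging strategy can be sketched generically: fit a Gaussian-process (kriging) surrogate to the sampled CFD results, then search the cheap surrogate instead of running more CFD. The sketch below uses scikit-learn and invented data in a reduced two-variable design space:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

# Stand-in for the DOE step: 18 sampled designs (here only 2 of the 6
# shape variables, scaled to [0, 1]) with collection efficiencies that
# would come from CFD runs. All values are invented for illustration.
X = rng.uniform(size=(18, 2))
y = 0.80 + 0.10 * X[:, 0] - 0.06 * (X[:, 1] - 0.4) ** 2 + rng.normal(0, 0.005, 18)

# Kriging metamodel of collection efficiency over the design space.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(X, y)

# Cheap surrogate search in place of further CFD calls.
cand = rng.uniform(size=(5000, 2))
pred = gp.predict(cand)
print(cand[np.argmax(pred)], round(pred.max(), 4))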
76 FR 35199 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-16
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Proposed Agency Information Collection AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice and.... Issued in Washington, DC, on June 9, 2011. Henry Kelly, Acting Assistant Secretary, Energy Efficiency and...
Determination of necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting the tracer test. To facilitate tracer-mass estimation, 33 mass-estima...
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed using spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.
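The harmonic-grouping and peak-picking stages can be illustrated with a crude salience function that sums spectral energy at candidate-pitch harmonics. Note the deliberate imperfection: such a preliminary stage also reports spurious candidates (e.g. subharmonics), which is what a second stage like the paper's must remove. All signals below are synthetic:

import numpy as np

def pitch_energy_spectrum(freqs, energy, f0_grid, n_harm=8):
    """Salience of each candidate pitch f0: summed spectral energy at
    its first n_harm harmonics (nearest bins). A crude stand-in for the
    RTFI-based pitch energy spectrum used in the paper."""
    sal = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        harm = np.arange(1, n_harm + 1) * f0
        bins = np.searchsorted(freqs, harm).clip(0, len(freqs) - 1)
        sal[i] = energy[bins].sum()
    return sal

def pick_peaks(sal, thresh_ratio=0.4):
    """Keep local maxima above a fraction of the strongest peak."""
    is_peak = (sal[1:-1] > sal[:-2]) & (sal[1:-1] > sal[2:])
    idx = np.where(is_peak)[0] + 1
    return idx[sal[idx] > thresh_ratio * sal.max()]

# Hypothetical spectrum with two notes at 220 Hz and 330 Hz
freqs = np.linspace(50, 4000, 2000)
energy = sum(np.exp(-0.5 * ((freqs - h * f) / 8) ** 2)
             for f in (220.0, 330.0) for h in range(1, 7))
f0_grid = np.arange(80.0, 500.0, 1.0)
print(f0_grid[pick_peaks(pitch_energy_spectrum(freqs, energy, f0_grid))])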
Utilizing a Tower Based System for Optical Sensing of Ecosystem Carbon Fluxes
NASA Astrophysics Data System (ADS)
Huemmrich, K. F.; Corp, L. A.; Middleton, E.; Campbell, P. K. E.; Landis, D.; Kustas, W. P.
2015-12-01
Optical sampling of spectral reflectance and solar-induced fluorescence provides information on the physiological status of vegetation that can be used to infer stress responses and estimate production. Multiple repeated observations are required to observe the effects of changing environmental conditions on vegetation. This study examines the use of optical signals to determine inputs to a light use efficiency (LUE) model describing the productivity of a cornfield where repeated observations of carbon flux, spectral reflectance, and fluorescence were collected. Data were collected at the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) fields (39.03°N, 76.85°W) at the USDA Beltsville Agricultural Research Center. Agricultural Research Service researchers measured CO2 fluxes using eddy covariance methods throughout the growing season. Optical measurements were made from a nearby tower supporting the NASA FUSION sensors, a system of two dual-channel spectrometers, one upward- and one downward-looking, used to simultaneously collect high-spectral-resolution measurements of reflected and fluoresced light from vegetation canopies at multiple view angles. Estimates of chlorophyll fluorescence, combined with measures of vegetation pigment content and the Photochemical Reflectance Index (PRI) derived from the spectral reflectance, are compared with CO2 fluxes over diurnal periods for multiple days. The relationships among the different optical measurements indicate that they provide different types of information on the vegetation, and that combinations of these measurements yield better retrievals of CO2 fluxes than any single index alone.
Uav and GIS Based Tool for Collection and Propagation of Seeds Material - First Results
NASA Astrophysics Data System (ADS)
Stereńczak, K.; Mroczek, P.; Jastrzębowski, S.; Krok, G.; Lisańczuk, M.; Klisz, M.; Kantorowicz, W.
2016-06-01
Seed management carried out by the State Forests National Forest Holding is an integral part of rational forest management in Poland. Seeds are collected mainly from stands belonging to the first category of forest reproductive material (FRM), which constitutes the largest seed base in the country; smaller amounts are collected from selection objects of the highest FRM category (selected seed stands and seed orchards). Previous methods of estimating the seed crop were based on visual assessment of cones in the stands prior to harvest. Following the rules of FRM transfer poses an additional difficulty for rational seed management, as it limits where planting material may be used in Poland. Statements forecasting the seed crop and monitoring seed quality are based on annual reports from the State Forest Service, and the Forest Research Institute is responsible for preparing and publishing them; the limited automation and optimization of this procedure is a major disadvantage. To make the process more effective, a web-based GIS application was designed that will allow up-to-date information on seed efficiency, its spatial pattern, and its availability to be uploaded. The system is currently under preparation. As a result, the project team will be able to increase the share of seed material collected from the selected seed base and to share good practices on this issue more efficiently, which should in the future yield a greater genetic gain from the selection strategy. Additionally, first results reported in the literature suggest that unmanned aerial systems/vehicles (UAS/V) can support the seed crop forecasting procedure.
Efficiency in the Community College Sector: Stochastic Frontier Analysis
ERIC Educational Resources Information Center
Agasisti, Tommaso; Belfield, Clive
2017-01-01
This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... Record Keepers: 100. Estimated Time per Record Keeper: 60 minutes. Estimated Total Burden Hours: 100... or record keepers from the collection of information (total capital/ startup costs and operations and.... Estimated Number of Responses per Respondent: 1. Estimated Number of Total Responses: 100. Estimated Time...
NASA Astrophysics Data System (ADS)
Kanniah, K. D.; Tan, K. P.; Cracknell, A. P.
2014-10-01
The amount of carbon sequestered by vegetation can be estimated from vegetation productivity. At present, there is a knowledge gap in oil palm net primary productivity (NPP) at the regional scale. Therefore, in this study the NPP of oil palm trees in Peninsular Malaysia was estimated using a remote sensing-based light use efficiency (LUE) model with inputs from local meteorological data, upscaled leaf area index/fractional photosynthetically active radiation (LAI/fPAR) derived from UK-DMC 2 satellite data, and a constant maximum LUE value from the literature. The modeled NPP values were then compared and validated against NPP estimated using the allometric equations developed by Corley and Tinker (2003), Henson (2003), and Syahrinudin (2005), with diameter at breast height (DBH), age, and height of oil palm trees collected from three estates in Peninsular Malaysia. The results show that oil palm NPP derived using the LUE model increases with the age of the trees and stabilises after about ten years. The mean oil palm NPP over 118 plots derived using the LUE model is 968.72 g C m-2 year-1, which is 188%-273% higher than the NPP derived from the allometric equations. The NPP estimated for young oil palm trees (<10 years old) is lower than that of mature trees, as young trees have lower LAI and therefore lower fPAR, an important variable in the LUE model. In contrast, oil palm NPP estimated using the allometric equations decreases with the age of the trees. This study found that LUE models could not capture the NPP variation of oil palm trees when LAI/fPAR is used; tree height and DBH, on the other hand, were found to be important variables that can capture changes in oil palm NPP as a function of age.
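The skeleton of a LUE-type model is the same across such studies: productivity is the product of incident PAR, the fraction absorbed (fPAR), and a light use efficiency term, optionally down-regulated by environmental scalars. A generic sketch with placeholder parameter values, not those of this study:

import numpy as np

def npp_lue(par, fpar, eps_max=0.55, t_scalar=1.0, w_scalar=1.0, resp_frac=0.45):
    """Generic LUE-model skeleton: GPP = eps_max * fPAR * PAR (g C per
    MJ of PAR), down-regulated by temperature and water scalars, with
    NPP taken as a fixed fraction of GPP. All parameter values are
    placeholders for illustration."""
    gpp = eps_max * fpar * par * t_scalar * w_scalar   # g C m^-2 per step
    return (1.0 - resp_frac) * gpp

# Hypothetical monthly PAR (MJ m^-2) and fPAR for a maturing stand
par = np.full(12, 250.0)
fpar = np.linspace(0.35, 0.55, 12)   # canopy closing over the year
print(round(npp_lue(par, fpar).sum(), 1))   # annual NPP, g C m^-2 year^-1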
Madenijian, C.P.; David, S.R.; Krabbenhoft, D.P.
2012-01-01
Based on a laboratory experiment, we estimated the net trophic transfer efficiency of methylmercury to lake trout Salvelinus namaycush from its prey to be equal to 76.6 %. Under the assumption that gross trophic transfer efficiency of methylmercury to lake trout from its prey was equal to 80 %, we estimated that the rate at which lake trout eliminated methylmercury was 0.000244 day−1. Our laboratory estimate of methylmercury elimination rate was 5.5 times lower than the value predicted by a published regression equation developed from estimates of methylmercury elimination rates for fish available from the literature. Thus, our results, in conjunction with other recent findings, suggested that methylmercury elimination rates for fish have been overestimated in previous studies. In addition, based on our laboratory experiment, we estimated that the net trophic transfer efficiency of inorganic mercury to lake trout from its prey was 63.5 %. The lower net trophic transfer efficiency for inorganic mercury compared with that for methylmercury was partly attributable to the greater elimination rate for inorganic mercury. We also found that the efficiency with which lake trout retained either methylmercury or inorganic mercury from their food did not appear to be significantly affected by the degree of their swimming activity.
NASA Astrophysics Data System (ADS)
Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.
2015-12-01
A Web-Based Computer System (RPM-WEBBSYS) has been developed to apply the Rational Polynomial Method (RPM) to the estimation of static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery process that occurs during well completion. RPM-WEBBSYS was programmed using modern information technology to compute SFT more efficiently, and it can be readily and rapidly run from any computing device with Internet access and a web browser (e.g., personal computers and portable devices such as tablets or smartphones). The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected during well drilling and shut-in operations, where the typical under- and over-estimation of SFT exhibited by most existing analytical methods was effectively corrected.
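The generic idea behind rational-polynomial extrapolation of shut-in temperatures can be sketched in a few lines: fit a rational function of shut-in time to the BHT build-up and take its asymptote as the SFT estimate. The functional form and data below are illustrative only; the published RPM has its own polynomial-selection procedure:

import numpy as np
from scipy.optimize import curve_fit

def rational(t, b0, b1, c1):
    """Simple rational polynomial in shut-in time t; as t -> infinity,
    T -> b1/c1, which serves as the SFT estimate."""
    return (b0 + b1 * t) / (1.0 + c1 * t)

t = np.array([2.0, 4.0, 6.0, 12.0, 18.0, 24.0])           # hours shut-in
bht = np.array([81.0, 95.0, 104.0, 117.0, 122.0, 125.0])  # deg C, hypothetical

p, _ = curve_fit(rational, t, bht, p0=(70.0, 100.0, 1.0))
b0, b1, c1 = p
print(round(b1 / c1, 1))   # estimated SFT in deg C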
Flood area and damage estimation in Zhejiang, China.
Liu, Renyi; Liu, Nan
2002-09-01
A GIS-based method to estimate flood area and damage is presented in this paper, oriented to developing countries like China, where labor is readily available for GIS data collection but tools such as HEC-GeoRAS might not be, and where local authorities are often not predisposed to pay for commercial GIS platforms. To calculate the flood area, two cases, non-source and source flooding, are distinguished, and a seed-spread algorithm suitable for source flooding is described. Flood damage is estimated in raster format by overlaying the flood area with thematic maps and relating this to other socioeconomic data. Several measures used to improve geometric accuracy and computing efficiency are presented. Management issues related to the application of this method, including the cost-effectiveness of an approximate method in practice and the complementary use of two technical approaches (self-programming and commercial GIS software), are also discussed. Applications show that this approach has practical significance for flood fighting and control in developing countries like China.
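A seed-spread algorithm for source flooding is essentially a flood fill over the DEM raster: water reaches only cells connected to the flood source that lie below the water surface, unlike a simple elevation threshold. A minimal sketch with 4-connectivity, a single water level, and an invented DEM (the paper's algorithm details may differ):

from collections import deque
import numpy as np

def source_flood(elev, water_level, seeds):
    """Seed-spread delineation of a source flood: starting from seed
    cells (e.g. a breached dike), water spreads to 4-connected
    neighbours whose ground elevation is below the water surface."""
    flooded = np.zeros(elev.shape, dtype=bool)
    q = deque(s for s in seeds if elev[s] < water_level)
    for s in q:
        flooded[s] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < elev.shape[0] and 0 <= nc < elev.shape[1]
                    and not flooded[nr, nc] and elev[nr, nc] < water_level):
                flooded[nr, nc] = True
                q.append((nr, nc))
    return flooded

elev = np.array([[3, 3, 4, 5],
                 [2, 1, 4, 1],   # the low cell at (1, 3) stays dry:
                 [2, 1, 3, 5]],  # it is disconnected from the source
                dtype=float)
print(source_flood(elev, water_level=2.5, seeds=[(1, 1)]).astype(int))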
Application of adaptive cluster sampling to low-density populations of freshwater mussels
Smith, D.R.; Villella, R.F.; Lemarie, D.P.
2003-01-01
Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
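For context, the standard modified Horvitz-Thompson estimator used with adaptive cluster sampling weights each distinct sampled network by its probability of being intersected by the initial simple random sample. A sketch with invented survey numbers:

from math import comb

def acs_ht_total(networks, N, n):
    """Modified Horvitz-Thompson estimate of a population total under
    adaptive cluster sampling with an initial simple random sample of
    n units from N. `networks` lists (network_size, network_total) for
    each DISTINCT network intersected by the initial sample; edge units
    are ignored, as in the usual estimator."""
    total = 0.0
    for x_k, y_k in networks:
        pi_k = 1.0 - comb(N - x_k, n) / comb(N, n)   # P(network intersected)
        total += y_k / pi_k
    return total

# Hypothetical mussel survey: 2000 quadrats, 50 in the initial sample;
# the sample hit two clusters (4 and 7 quadrats) plus one lone quadrat.
print(round(acs_ht_total([(4, 23), (7, 41), (1, 1)], N=2000, n=50), 1))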
Gerlach, K; Pries, M; Tholen, E; Schmithausen, A J; Büscher, W; Südekum, K-H
2018-01-08
The objective of this study was to evaluate the effect of supplementing condensed tannins (CT) from the bark of the black wattle tree (Acacia mearnsii) on production variables and N use efficiency in high-yielding dairy cows. A feeding trial with 96 lactating German Holstein cows was conducted for a total of 169 days, divided into four periods. The animals were allotted to two groups, a control (CON) and an experimental (EXP) group, according to milk yield in the previous lactation, days in milk (98), number of lactations, and BW. The trial started and finished with periods (periods 1 and 4) in which both groups received the same ration (a total-mixed ration based on grass and maize silage, ensiled sugar beet pulp, lucerne hay, mineral premix, and concentrate, calculated for 37 kg energy-corrected milk). In between, the ration of the EXP cows was supplemented with 1% (CT1, period 2) and 3% (CT3, period 3) of dry matter (DM) of a commercial A. mearnsii extract (containing 0.203 g CT/g DM), which was mixed into the concentrate. In period 3, samples of urine and faeces were collected from 10 cows of each group and analyzed to estimate N excretion. Except for a tendency toward a reduced milk urea concentration with CT1, there was no difference between groups in period 2 (CON v. CT1; P>0.05). CT3 significantly reduced (P<0.05) milk protein yield, apparent N efficiency (kg milk N/kg feed N), and milk urea concentration, but total milk yield and energy-corrected milk yield were not affected by treatment. Furthermore, as estimated from 10 cows per group using urinary K as a marker for the daily amount of urine voided, CT3 caused a minor shift of N compounds from urine to faeces: urea-N in urine was reduced, whereas the N concentration in faeces increased. As productivity was not improved and N use efficiency was decreased by adding the CT product, it can be concluded that, under the current circumstances, its use in high-yielding dairy cows is not advantageous.
77 FR 8852 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Proposed Agency Information... proposed collection of information for a National Evaluation of the Energy Efficiency and Conservation... proposed collection of information is necessary for the proper performance of the functions of the agency...
75 FR 51986 - Agency Information Collection Extension; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
... questionnaires to collect information on the respondents' knowledge of solar energy and energy efficiency and on installations of solar-energy and energy-efficiency equipment with which the respondents have been personally... DEPARTMENT OF ENERGY Agency Information Collection Extension; Correction AGENCY: U.S. Department...
76 FR 45786 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket Number EERE-2011-BT-NOA-0039] Proposed Agency Information Collection AGENCY: Office of Energy Efficiency and Renewable... sent to Mr. Alan Schroeder, U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy...
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nihar; Abhyankar, Nikit; Park, Won Young
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerant and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of its key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. We assess several efficiency levels, two of which are summarized in the report. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one star level) should be evaluated rigorously considering the significant benefits to consumers, energy security and the environment.
Labrique, Alain; Blynn, Emily; Ahmed, Saifuddin; Gibson, Dustin; Pariyo, George; Hyder, Adnan A
2017-05-05
In low- and middle-income countries (LMICs), historically, household surveys have been carried out by face-to-face interviews to collect survey data related to risk factors for noncommunicable diseases. The proliferation of mobile phone ownership and the access it provides in these countries offers a new opportunity to remotely conduct surveys with increased efficiency and reduced cost. However, the near-ubiquitous ownership of phones, high population mobility, and low cost require a re-examination of statistical recommendations for mobile phone surveys (MPS), especially when surveys are automated. As with landline surveys, random digit dialing remains the most appropriate approach to develop an ideal survey-sampling frame. Once the survey is complete, poststratification weights are generally applied to reduce estimate bias and to adjust for selectivity due to mobile ownership. Since weights increase design effects and reduce sampling efficiency, we introduce the concept of automated active strata monitoring to improve representativeness of the sample distribution to that of the source population. Although some statistical challenges remain, MPS represent a promising emerging means for population-level data collection in LMICs. ©Alain Labrique, Emily Blynn, Saifuddin Ahmed, Dustin Gibson, George Pariyo, Adnan A Hyder. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 05.05.2017.
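A small sketch of the poststratification weighting step mentioned above: respondents are reweighted so that weighted demographic shares match known population shares. The age groups and shares are invented for illustration.

```python
import pandas as pd

# Toy poststratification: make weighted age-group shares match assumed
# population shares. All shares below are invented.
resp = pd.DataFrame({"age_group": ["18-29"] * 50 + ["30-49"] * 30 + ["50+"] * 20})
pop_share = {"18-29": 0.30, "30-49": 0.40, "50+": 0.30}

sample_share = resp["age_group"].value_counts(normalize=True)
resp["weight"] = resp["age_group"].map(lambda g: pop_share[g] / sample_share[g])

# Weighted shares now match the population; the price, as the abstract
# notes, is an increased design effect when weights vary widely.
print(resp.groupby("age_group")["weight"].sum() / resp["weight"].sum())
```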
A relativistic neutron fireball from a supernova explosion as a possible source of chiral influence.
Gusev, G A; Saito, T; Tsarev, V A; Uryson, A V
2007-06-01
We elaborate on a previously proposed idea that polarized electrons produced from neutrons, released in a supernova (SN) explosion, can cause chiral dissymmetry of molecules in interstellar gas-dust clouds. A specific physical mechanism of a relativistic neutron fireball with Lorentz factor of the order of 100 is assumed for propelling a great number of free neutrons outside the dense SN shell. A relativistic chiral electron-proton plasma, produced from neutron decays, is slowed down owing to collective effects in the interstellar plasma. As collective effects do not involve the particle spin, the electrons can carry their helicities to the cloud. The estimates show high chiral efficiency of such electrons. In addition to this mechanism, production of circularly polarized ultraviolet photons through polarized-electron bremsstrahlung at an early stage of the fireball evolution is considered. It is shown that these photons can escape from the fireball plasma. However, for an average density of neutrals in the interstellar medium of the order of 0.2 cm⁻³ and at distances of the order of 10 pc from the SN, these photons will be absorbed with a factor of about 10⁻⁷ due to the photoeffect. In this case, their chiral efficiency will be about five orders of magnitude less than that for polarized electrons.
NASA Astrophysics Data System (ADS)
Moghaddam, M.; Silva, A. R. D.; Akbar, R.; Clewley, D.
2015-12-01
The Soil moisture Sensing Controller And oPtimal Estimator (SoilSCAPE) wireless sensor network has been developed to support Calibration and Validation activities (Cal/Val) for large scale soil moisture remote sensing missions (SMAP and AirMOSS). The technology developed here also readily supports small scale hydrological studies by providing sub-kilometer widespread soil moisture observations. An extensive collection of semi-sparse sensor clusters deployed throughout north-central California and southern Arizona provides near real time soil moisture measurements. Such a wireless network architecture, compared to conventional single-point measurement profiles, allows for significant and expanded soil moisture sampling. The work presented here aims at discussing and highlighting novel technology developments which increase in situ soil moisture measurement accuracy, reliability, and robustness with reduced data delivery latency. High-efficiency, low-maintenance custom hardware has been developed, and in-field performance has been demonstrated over a period of three years. The SoilSCAPE technology incorporates (a) intelligent sensing to prevent erroneous measurement reporting, (b) on-board short-term memory for data redundancy, and (c) adaptive scheduling and sampling capabilities to enhance energy efficiency. A rapid, streamlined data delivery architecture openly provides distribution of in situ measurements to SMAP and AirMOSS cal/val activities and other interested parties.
NASA Astrophysics Data System (ADS)
Gardiner, John Corby
The electric power industry market structure has changed over the last twenty years since the passage of the Public Utility Regulatory Policies Act (PURPA). These changes include the entry by unregulated generator plants and, more recently, the deregulation of entry and price in the retail generation market. Such changes have introduced and expanded competitive forces on the incumbent electric power plants. Proponents of this deregulation argued that the enhanced competition would lead to a more efficient allocation of resources. Previous studies of power plant technical and allocative efficiency have failed to measure technical and allocative efficiency at the plant level. In contrast, this study uses panel data on 35 power plants over 59 years to estimate technical and allocative efficiency of each plant. By using a flexible functional form, which is not constrained by the assumption that regulation is constant over the 59 years sampled, the estimation procedure accounts for changes in both state and national regulatory/energy policies that may have occurred over the sample period. The empirical evidence presented shows that most of the power plants examined have operated more efficiently since the passage of PURPA and the resultant increase of competitive forces. Chapter 2 extends the model used in Chapter 1 and clarifies some issues in the efficiency literature by addressing the case where homogeneity does not hold. A more general model is developed for estimating both input and output inefficiency simultaneously. This approach reveals more information about firm inefficiency than the single estimation approach that has previously been used in the literature. Using the more general model, estimates are provided on the type of inefficiency that occurs as well as the cost of inefficiency by type of inefficiency. In previous studies, the ranking of firms by inefficiency has been difficult because of the cardinal and ordinal differences between different types of inefficiency estimates. However, using the general approach, this study illustrates that plants can be ranked by overall efficiency.
Comparison of Wipe Materials and Wetting Agents for Pesticide Residue Collection from Hard Surfaces
Deziel, Nicole C.; Viet, Susan M.; Rogers, John W.; Camann, David E.; Marker, David A.; Heikkinen, Maire S. A.; Yau, Alice Y.; Stout, Daniel M.; Dellarco, Michael
2011-01-01
Different wipe materials and wetting agents have been used to collect pesticide residues from surfaces, but little is known about their comparability. To inform the selection of a wipe for the National Children’s Study, the analytical feasibility, collection efficiency, and precision of Twillwipes wetted with isopropanol (TI), Ghost Wipes (GW), and Twillwipes wetted with water (TW) were evaluated. Wipe samples were collected from stainless steel surfaces spiked with high and low concentrations of 27 insecticides, including organochlorines, organophosphates, and pyrethroids. Samples were analyzed by GC/MS/SIM. No analytical interferences were observed for any of the wipes. The mean percent collection efficiencies across all pesticides for the TI, GW, and TW were 69.3%, 31.1%, and 10.3% at the high concentration, respectively, and 55.6%, 22.5%, and 6.9% at the low concentration, respectively. The collection efficiencies of the TI were significantly greater than those of the GW or TW (p<0.0001). Collection efficiency also differed significantly by pesticide (p<0.0001) and spike concentration (p<0.0001). The pooled coefficients of variation (CVs) of the collection efficiencies for the TI, GW, and TW at the high concentration were 0.08, 0.17, and 0.24, respectively. The pooled CVs of the collection efficiencies for the TI, GW, and TW at the low concentration were 0.15, 0.19, and 0.36, respectively. The TI had significantly lower CVs than either of the other two wipes (p=0.0008). Though the TI was superior in terms of both accuracy and precision, it requires multiple preparation steps, which could lead to operational challenges in a large-scale study. PMID:21816452
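A short sketch of the two summary statistics the study reports, collection efficiency and pooled CV, using made-up recovery data. The pooling rule shown (root-mean-square of per-wipe CVs) is one common choice and may differ from the authors' exact computation.

```python
import numpy as np

# Invented recoveries (ng) from surfaces spiked with 100 ng each.
spiked = np.array([100.0, 100.0, 100.0])
recovered = np.array([[70, 65, 72],        # wipe A, 3 replicates
                      [30, 28, 35],        # wipe B
                      [ 9, 11, 12]], float)  # wipe C

eff = recovered / spiked * 100             # percent collection efficiency
mean_eff = eff.mean(axis=1)
cv = eff.std(axis=1, ddof=1) / mean_eff    # per-wipe coefficient of variation

# One common pooling rule: root-mean-square of the per-wipe CVs.
pooled_cv = np.sqrt(np.mean(cv ** 2))
print(mean_eff.round(1), cv.round(3), round(pooled_cv, 3))
```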
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
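A minimal sketch of the jackknife variance estimation the abstract recommends, applied to a generic estimator; the data and the estimator (the sample variance) are illustrative, not the diallel-cross setting itself.

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Leave-one-out jackknife estimate of an estimator's sampling variance."""
    n = len(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=40)
# Sampling variance of the variance estimate, purely as a demonstration.
print(round(jackknife_variance(x, np.var), 3))
```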
Comparison of daily and weekly precipitation sampling efficiencies using automatic collectors
Schroder, L.J.; Linthurst, R.A.; Ellson, J.E.; Vozzo, S.F.
1985-01-01
Precipitation samples were collected for approximately 90 daily and 50 weekly sampling periods at Finley Farm, near Raleigh, North Carolina, from August 1981 through October 1982. Ten wet-deposition samplers (AEROCHEM METRICS MODEL 301) were used; 4 samplers were operated for daily sampling, and 6 samplers were operated for weekly sampling periods. This design was used to determine if: (1) collection efficiencies of precipitation are affected by small distances between the Universal (Belfort) precipitation gage and collector; (2) measurable evaporation loss occurs; and (3) pH and specific conductance of precipitation vary significantly within small distances. Average collection efficiencies were 97% for weekly sampling periods compared with the rain gage. Collection efficiencies were examined by season and precipitation volume; neither factor significantly affected collection efficiency. No evaporation loss was found by comparing daily sampling to weekly sampling at the collection site, which was classified as a subtropical climate. Correlation coefficients for pH and specific conductance of daily samples and weekly samples ranged from 0.83 to 0.99.
Menichella, G; Lai, M; Pierelli, L; Vittori, M; Serafini, R; Ciarli, M; Foddai, M L; Salerno, G; Sica, S; Scambia, G; Leone, G; Bizzi, B
1997-01-01
Reconstitution of hematopoiesis by means of peripheral blood stem cells is a valid alternative to autologous bone marrow transplantation. The aim of this investigation was to increase the efficiency of collection of circulating blood progenitor cells and to obtain a purer product for transplant. We carried out leukapheresis procedures with the Fresenius AS 104 blood cell separator, using two different protocols, the previously used PBSC-LYM and a new mononuclear cell collection program. Both programs were highly effective in collecting mononuclear cells (MNC) and CD34+ cells. Some differences were found, especially regarding MNC yield and efficiencies. There are remarkable differences in the efficiency of collection of CD34+ cells (62.38% with the new program as opposed to 31.69% with the older one). Linear regression analysis showed a negative correlation between blood volume processed and MNC efficiency only for the PBSC-LYM program. Differences were also observed in the degree of inverse correlation existing in both programs between patients' white blood cell precount and MNC collection efficiency. The inverse correlation was stronger for the PBSC-LYM program. Seven patients with solid tumors and hematologic malignancies received high dose chemotherapy and were subsequently transplanted with peripheral blood stem cells collected using the new protocol. All patients obtained a complete and stable engraftment with the reinfusion product collected with one or two leukapheresis procedures. High efficiencies and yields were observed in the new protocol for MNC and CD34+ cells. These were able to effect rapid and complete bone marrow recovery after myeloablative chemotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan
Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image, it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and Gabor wavelet (GW) filtering was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this system are comparable to previous image-processing-based systems. The demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
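A compact stand-in for the final classification-and-evaluation stage of the pipeline: an MLP classifier scored by ROC AUC. Random vectors play the role of Gabor-wavelet features and binary labels play the role of bin-level classes; none of this reproduces the paper's actual data or feature extraction.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 32-dim "features" and a label correlated with a few
# of them, mimicking a bin-level classification task.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=400) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)

# Area under the ROC curve, the evaluation statistic used in the paper.
print(round(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]), 3))
```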
McCullough, Deborah G; Siegert, Nathan W
2007-10-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest native to Asia, was identified in June 2002 as the cause of widespread ash (Fraxinus spp.) mortality in southeastern Michigan and Windsor, Ontario, Canada. Localized populations of A. planipennis have since been found across lower Michigan and in areas of Ohio, Indiana, Illinois, Maryland, and Ontario. Officials working to contain A. planipennis and managers of forestlands near A. planipennis infestations must be able to compare alternative strategies to allocate limited funds efficiently and effectively. Empirical data from a total of 148 green ash, Fraxinus pennsylvanica Marsh., and white ash, Fraxinus americana L., trees were used to develop models to estimate surface area of the trunk and branches by using tree diameter at breast height (dbh). Data collected from 71 additional F. pennsylvanica and F. americana trees killed by A. planipennis showed that on average, 88.9 ± 4.6 beetles developed and emerged per m² of surface area. Models were applied to ash inventory data collected at two outlier sites to estimate potential production of A. planipennis beetles at each site. Large trees of merchantable size (dbh ≥ 26 cm) accounted for roughly 6% of all ash trees at the two sites, but they could have contributed 55-65% of the total A. planipennis production at both sites. In contrast, 75-80% of the ash trees at the outlier sites were ≤ 13 cm dbh, but these small trees could have contributed only ≤ 12% of the potential A. planipennis production at both sites. Our results, in combination with inventory data, can be used by regulatory officials and resource managers to estimate potential A. planipennis production and to compare options for reducing A. planipennis density and slowing the rate of spread for any area of interest.
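A sketch of how such a model chains together: dbh to surface area, then surface area to potential beetle production. The power-law form and the constants A_COEF and B_EXP are hypothetical placeholders, not the published fit; only the emergence density (88.9 beetles per m²) comes from the abstract.

```python
import numpy as np

# Hypothetical allometric constants; the paper's fitted model differs.
A_COEF, B_EXP = 0.05, 1.8
BEETLES_PER_M2 = 88.9          # mean emergence density from the abstract

def potential_beetles(dbh_cm):
    """Potential beetle production per tree from dbh (cm)."""
    surface_m2 = A_COEF * np.asarray(dbh_cm, float) ** B_EXP
    return BEETLES_PER_M2 * surface_m2

inventory = [8, 12, 30, 45]    # dbh (cm) of ash trees at a hypothetical site
print(potential_beetles(inventory).round(0))
```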
Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek
2016-01-01
Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
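A sketch of the weight bookkeeping the design describes: a base weight from the three selection stages, inflated for nonresponse at facility and patient level, then divided by multiplicity. Every probability, response rate, and count below is invented.

```python
# Invented stage-wise selection probabilities.
p_jurisdiction = 0.40
p_facility = 0.10
p_patient = 0.05

base_weight = 1.0 / (p_jurisdiction * p_facility * p_patient)

# Nonresponse adjustments: inverse response rates within weighting classes,
# applied at both the facility and patient levels.
rr_facility, rr_patient = 0.85, 0.75
w = base_weight / (rr_facility * rr_patient)

# Multiplicity adjustment: a patient receiving care at k sampled-eligible
# facilities had k chances of selection, so divide the weight by k.
k_facilities = 2
w /= k_facilities
print(round(w, 1))
```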
[Criteria of estimation of state forensic-medical expert activity administration].
Klevno, V A; Loban, I E
2008-01-01
Criteria for evaluating the administration of state forensic-medical expert activity were systematized and their content was described. A new integral index of administration efficiency - the index of technological efficiency - was developed and validated.
Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems
Connolly, Patrick J.
2010-01-01
With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
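A toy calculation in the spirit of dual-array detection-efficiency estimators: fish detected on one array serve as a known group passing the other, so no independent count of passing fish is needed. The published estimators differ in detail, and all counts below are invented.

```python
# Invented detection counts from two stacked PIT antenna arrays.
detected_B = 180          # unique tags detected on array B
detected_A_and_B = 153    # of those, tags also detected on array A

p_A = detected_A_and_B / detected_B      # estimated efficiency of array A

# With two (assumed independent) arrays, the system detects a fish if at
# least one array does; p_B is an assumed value for illustration.
p_B = 0.80
p_system = 1 - (1 - p_A) * (1 - p_B)
print(round(p_A, 3), round(p_system, 3))
```

This also shows why adding an array raises system efficiency: each extra array multiplies the miss probability by another factor below one.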
Taheri, Mahboobeh; Mohebbi, Ali
2008-08-30
In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
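For context, a sketch of the traditional two-percentile estimator that the proposed modifications build on: two sample quantiles pin down the Pareto shape and scale through the closed-form CDF. The modified estimators (median, geometric mean, first-order-statistic ECDF expectation) are not reproduced here.

```python
import numpy as np

def pareto_percentile_fit(x, p1=0.25, p2=0.75):
    """Classical two-percentile estimator for the Pareto distribution
    F(x) = 1 - (s/x)**a, x >= s: solve the two quantile equations."""
    q1, q2 = np.quantile(x, [p1, p2])
    a = np.log((1 - p1) / (1 - p2)) / np.log(q2 / q1)   # shape
    s = q1 * (1 - p1) ** (1 / a)                        # scale
    return a, s

rng = np.random.default_rng(7)
a_true, s_true = 2.5, 1.0
# numpy's pareto() draws Lomax; shifting by 1 gives the classical Pareto.
x = s_true * (1 + rng.pareto(a_true, size=2000))
print(pareto_percentile_fit(x))
```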
Fog collecting biomimetic surfaces: Influence of microstructure and wettability.
Azad, M A K; Ellerbrok, D; Barthlott, W; Koch, K
2015-01-19
We analyzed the fog collection efficiency of three different sets of samples: replicas (with and without microstructures), copper wires (smooth and microgrooved) and polyolefin meshes (hydrophilic, superhydrophilic and hydrophobic). The collection efficiency of the samples was compared within each set separately to investigate the influence of microstructures and/or the wettability of the surfaces on fog collection. Under the controlled experimental conditions chosen here, large differences in efficiency were found. Microstructured plant replica samples collected 2-3 times more water than unstructured (smooth) samples. Copper wire samples showed similar results; moreover, microgrooved wires showed faster dripping of water droplets than smooth wires. The superhydrophilic mesh tested here proved more efficient than any other mesh sample with different wettability. The amount of fog collected by the superhydrophilic mesh was about 5 times higher than that of the hydrophilic (untreated) mesh and about 2 times higher than that of the hydrophobic mesh.
Meeting future information needs for Great Lakes fisheries management
Christie, W.J.; Collins, John J.; Eck, Gary W.; Goddard, Chris I.; Hoenig, John M.; Holey, Mark; Jacobson, Lawrence D.; MacCallum, Wayne; Nepszy, Stephen J.; O'Gorman, Robert; Selgeby, James
1987-01-01
Description of information needs for management of Great Lakes fisheries is complicated by recent changes in biology and management of the Great Lakes, development of new analytical methodologies, and a transition in management from a traditional unispecies approach to a multispecies/community approach. A number of general problems with the collection and management of data and information for fisheries management need to be addressed (i.e. spatial resolution, reliability, computerization and accessibility of data, design of sampling programs, standardization and coordination among agencies, and the need for periodic review of procedures). Problems with existing data collection programs include size selectivity and temporal trends in the efficiency of fishing gear, inadequate creel survey programs, bias in age estimation, lack of detailed sea lamprey (Petromyzon marinus) wounding data, and data requirements for analytical techniques that are underutilized by managers of Great Lakes fisheries. The transition to multispecies and community approaches to fisheries management will require policy decisions by the management agencies, adequate funding, and a commitment to develop programs for collection of appropriate data on a long-term basis.
NASA Technical Reports Server (NTRS)
Baker, D. N.; Borovsky, Joseph E.; Benford, Gregory; Eilek, Jean A.
1988-01-01
A model of the inner portions of astrophysical jets is constructed in which a relativistic electron beam is injected from the central engine into the jet plasma. This beam drives electrostatic plasma wave turbulence, which leads to the collective emission of electromagnetic waves. The emitted waves are beamed in the direction of the jet axis, so that end-on viewing of the jet yields an extremely bright source (BL Lacertae object). The relativistic electron beam may also drive long-wavelength electromagnetic plasma instabilities (firehose and Kelvin-Helmholtz) that jumble the jet magnetic field lines. After a sufficient distance from the core source, these instabilities will cause the beamed emission to point in random directions and the jet emission can then be observed from any direction relative to the jet axis. This combination of effects may lead to the gap turn-on of astrophysical jets. The collective emission model leads to different estimates for energy transport and the interpretation of radio spectra than the conventional incoherent synchrotron theory.
Anderson, G F; Han, K C; Miller, R H; Johns, M E
1997-01-01
OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613
Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.
2016-01-01
Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is a growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock-complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance across time, data needed by researchers interested in predator-prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream-specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams in southwest Kodiak Island, Alaska in which high densities of sockeye salmon spawn. PMID:27326378
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred.
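A generic sketch of how pseudo-data enter a WLS fit: they are appended as extra observations whose small weights limit their pull on the estimate. The linear model and all numbers are illustrative, not the DEB estimation itself.

```python
import numpy as np
from scipy.optimize import least_squares

# Real observations (invented) for a toy linear model y = a*t + b.
t = np.array([1.0, 2.0, 4.0, 8.0])
y = np.array([2.1, 3.9, 7.7, 16.3])
w = np.ones_like(t)

# One pseudo-datum with a deliberately small weight: a weak prior guess.
t_all = np.append(t, 0.0)
y_all = np.append(y, 0.5)
w_all = np.append(w, 0.1)

def resid(theta):
    a, b = theta
    # sqrt-weights turn weighted least squares into ordinary least squares.
    return np.sqrt(w_all) * (a * t_all + b - y_all)

fit = least_squares(resid, x0=[1.0, 0.0])
print(fit.x.round(3))   # the low-weight pseudo-point barely moves the fit
```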
Application of material flow analysis to municipal solid waste in Maputo City, Mozambique.
Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko
2017-03-01
Understanding waste flows within an urban area is important for identifying the main problems and improvement opportunities for efficient waste management. Assessment tools such as material flow analysis (MFA), an extensively applied method in waste management studies, provide a structured and objective evaluation process to best characterize the waste management system, to identify its shortcomings and to propose suitable strategies. This paper presents the application of MFA to municipal solid waste management (MSWM) in Maputo City, the capital of Mozambique. The results included the identification and quantification of the main input and output flows of the MSWM system in 2007 and 2014, from generation, material recovery and collection, to final disposal and the unaccounted flow of municipal solid waste (MSW). We estimated that waste generation increased from 397×10³ tonnes in 2007 to 437×10³ tonnes in 2014, whereas the total material recovery was insignificant in both years - 3×10³ and 7×10³ tonnes, respectively. As for collection and final disposal, the official collection of waste to the local dumpsite in the inner city increased about threefold, from 76×10³ to 253×10³ tonnes. For waste unaccounted for, the estimates indicated a reduction during the study period from 300×10³ to 158×10³ tonnes, due to the increase of collection services. The emphasized aspects include the need for practical waste reduction strategies, the opportunity to explore the potential for material recovery, careful consideration regarding the growing trend of illegal dumping and the urgency of phasing out the harmful practice of open dumping.
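The core of such an MFA is mass-balance bookkeeping: the unaccounted flow is the residual of generation after the tracked outflows. A minimal sketch follows; it uses only the flows named in the abstract, so its residual will not match the study's published figure, which rests on a fuller accounting.

```python
# Two-term mass balance in kilotonnes; a simplification of the study's MFA.
generation = 437.0            # 2014 generation estimate from the abstract
material_recovery = 7.0       # recovered materials
collected_to_dumpsite = 253.0 # official collection to the dumpsite

# Residual flow: illegal dumping, open burning, and other untracked routes.
unaccounted = generation - material_recovery - collected_to_dumpsite
print(f"unaccounted flow: {unaccounted:.0f} kt")
```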
Maleghemi, Sylvester
2017-01-01
Background Data collection in Sub-Saharan Africa has traditionally been paper-based. However, the popularization of Android mobile devices and data capture software has brought paperless data management within reach. We used Open Data Kit (ODK) technology on Android mobile devices during a household survey in the Niger Delta region of Nigeria. Objective The aim of this study was to describe the pros and cons of deploying ODK for data management. Methods A descriptive cross-sectional household survey was carried out by 6 data collectors between April and May 2016. Data were obtained from 1706 persons in 601 households across 6 communities in 3 states in the Niger Delta. The use of Android mobile devices and ODK technology involved form building, testing, collection, aggregation, and download for data analysis. The median duration for data collection per household and per individual was 25.7 and 9.3 min, respectively. Results Data entries per device ranged from 33 (33/1706, 1.93%) to 482 (482/1706, 28.25%) individuals between 9 (9/601, 1.5%) and 122 (122/601, 20.3%) households. The most entries (470) were made by data collector 5. Only 2 respondents had data entry errors (2/1706, 0.12%). However, 73 (73/601, 12.1%) households had inaccurate date and time entries for when data collection started and ended. The cost of deploying ODK was estimated at US $206.7 in comparison with the estimated cost of US $466.7 for paper-based data management. Conclusions We found the use of mobile data capture technology to be efficient and cost-effective. As Internet services improve in Africa, we advocate their use as effective tools for health information management. PMID:29191798
Determination of GTA Welding Efficiencies
1993-03-01
A method is developed for estimating welding efficiencies for moving arc GTAW processes.
Saiyasitpanich, Phirun; Keener, Tim C; Lu, Mingming; Khang, Soon-Jai; Evans, Douglas E
2006-12-15
Long-term exposures to diesel particulate matter (DPM) emissions are linked to increasing adverse human health effects due to the potential association of DPM with carcinogenicity. Current diesel vehicular particulate emission regulations are based solely upon total mass concentration, albeit it is the submicrometer particles that are highly respirable and the most detrimental to human health. In this study, experiments were performed with a tubular single-stage wet electrostatic precipitator (wESP) to evaluate its performance for the removal of number-based DPM emissions. A nonroad diesel generator utilizing a low sulfur diesel fuel (500 ppmw) operating under varying load conditions was used as a stationary DPM emission source. An electrical low-pressure impactor (ELPI) was used to quantify the number concentration distributions of diesel particles in the diluted exhaust gas at each tested condition. The wESP was evaluated with respect to different operational control parameters such as applied voltage, gas residence time, etc., to determine their effect on overall collection efficiency, as well as particle size dependent collection efficiency. The results show that the total DPM number concentrations in the untreated diesel exhaust are on the order of 10⁸/cm³ at all engine loads, with the particle diameter modes between 20 and 40 nm. The measured collection efficiency of the wESP operating at 70 kV based on total particle numbers was 86% at 0 kW engine load, and the efficiency decreased to 67% at 75 kW due to a decrease in gas residence time and an increase in particle concentrations. At a constant wESP voltage of 70 kV and at 75 kW engine load, the variation of gas residence time within the wESP from approximately 0.1 to approximately 0.4 s led to a substantial increase in the collection efficiency from 67% to 96%. In addition, collection efficiency was found to be directly related to the applied voltage, with increasing collection efficiency measured for increases in applied voltage. The collection efficiency based on particle size had a minimum for sizes between 20 and 50 nm, but at optimal wESP operating conditions it was possible to remove over 90% of all particle sizes. A comparison of measured and calculated collection efficiencies reveals that the measured values are significantly higher than the predicted values based on the well-known Deutsch equation.
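For reference, the Deutsch equation the authors compare against predicts collection efficiency from the particle migration velocity, collection area, and gas flow rate. The values in this sketch are illustrative, not the study's operating parameters.

```python
import numpy as np

def deutsch_efficiency(w_m_s, area_m2, flow_m3_s):
    """Deutsch equation for ESP collection efficiency:
    eta = 1 - exp(-w * A / Q)."""
    return 1.0 - np.exp(-w_m_s * area_m2 / flow_m3_s)

# Illustrative values: migration velocity 0.08 m/s, collecting area 1.5 m²,
# gas flow 0.05 m³/s.
print(round(deutsch_efficiency(0.08, 1.5, 0.05), 3))
```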
77 FR 65900 - Agency Information Collection Activities: Delivery Ticket
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-31
... keepers from the collection of information (total capital/startup costs and operations and maintenance.... Estimated Number of Total Annual Responses: 200,000. Estimated Time per Response: 20 minutes. Estimated...
Min, Ari; Park, Chang Gi; Scott, Linda D
2016-05-23
Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.
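A minimal input-oriented CCR DEA sketch solved as a linear program, in the spirit of the first-stage analysis described above. The staffing input and quality output values are invented, and the study's exact DEA specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs: sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # Outputs: -sum_j lam_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Toy data: one staffing input and one quality output for four homes.
X = np.array([[4.0, 6.0, 8.0, 5.0]])
Y = np.array([[8.0, 9.0, 8.0, 10.0]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(4)])
```

The second-stage multilevel model would then regress these efficiency scores on facility, county, and state characteristics.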
NASA Astrophysics Data System (ADS)
Lang, Stephanie; Hrbacek, Jan; Leong, Aidan; Klöck, Stephan
2012-05-01
Recently, there has been an increased interest in flattening-filter-free (FFF) linear accelerators. Removal of the filter results in available dose rates up to 24 Gy min⁻¹ (for nominal energy 10 MV at the depth of maximum dose, a source-surface distance of 100 cm and a field size of 10×10 cm²). To guarantee accurate relative and reference dosimetry for the FFF beams, we investigated the charge collection efficiency of multiple air-vented and one liquid ionization chamber for dose rates up to 31.9 Gy min⁻¹. For flattened beams, the ion-collection efficiency of all air-vented ionization chambers (except for the PinPoint chamber) was above 0.995. By removing the flattening filter, we found a reduction in collection efficiency of approximately 0.5-0.9% for a 10 MV beam. For FFF beams, the Markus chamber showed the largest collection efficiency of 0.994. The observed collection efficiencies were dependent on dose per pulse, but independent of the pulse repetition frequency. Using the liquid ionization chamber, the ion-collection efficiency for flattened beams was above 0.990 for all dose rates. However, this chamber showed a low collection efficiency of 0.940 for the FFF 10 MV beam at a dose rate of 31.9 Gy min⁻¹. All investigated air-vented ionization chambers can be reliably used for relative dosimetry of FFF beams. The order of correction for reference dosimetry is given in the manuscript. Due to their increased saturation in high dose rate FFF beams, liquid ionization chambers appear to be unsuitable for dosimetry within these contexts.
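For context, the recombination correction for pulsed beams is commonly obtained with the two-voltage technique (as in AAPM TG-51). This is the standard clinical method, not necessarily the exact analysis used in the paper, and the readings below are invented.

```python
def p_ion_pulsed(V_h, V_l, M_h, M_l):
    """Two-voltage ion recombination correction for pulsed beams:
    P_ion = (1 - V_h/V_l) / (M_h/M_l - V_h/V_l)."""
    return (1.0 - V_h / V_l) / (M_h / M_l - V_h / V_l)

# Example: halving the bias drops the reading by 1%, giving a ~1%
# correction (collection efficiency ~0.99 at the operating voltage).
print(round(p_ion_pulsed(V_h=300.0, V_l=150.0, M_h=1.000, M_l=0.990), 4))
```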
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... Approved Information Collection for the Energy Efficiency and Conservation Block Grant Program Status... guidance concerning the Energy Efficiency and Conservation Block Grant (EECBG) Program is available for... Conservation Block Grant (EECBG) Program Status Report''; (3) Type of Review: Revision of currently approved...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
... a collection of information unless it displays a currently valid OMB control number, and no person... control number. Comments concerning the accuracy of the burden estimate(s) and any suggestions for... collection(s), contact Cathy Williams at (202) 418- 2918. SUPPLEMENTARY INFORMATION: OMB Control Number: 3060...
NASA Astrophysics Data System (ADS)
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul Imhoff; Ramin Yazdani; Don Augenstein
Methane is an important contributor to global warming with a total climate forcing estimated to be close to 20% that of carbon dioxide (CO2) over the past two decades. The largest anthropogenic source of methane in the US is 'conventional' landfills, which account for over 30% of anthropogenic emissions. While controlling greenhouse gas emissions must necessarily focus on large CO2 sources, attention to reducing CH4 emissions from landfills can result in significant reductions in greenhouse gas emissions at low cost. For example, the use of 'controlled' or bioreactor landfilling has been estimated to reduce annual US greenhouse emissions by about 15-30 million tons of CO2 carbon (equivalent) at costs between $3-13/ton carbon. In this project we developed or advanced new management approaches, landfill designs, and landfill operating procedures for bioreactor landfills. These advances are needed to address lingering concerns about bioreactor landfills (e.g., efficient collection of increased CH4 generation) in the waste management industry, concerns that hamper bioreactor implementation and the consequent reductions in CH4 emissions. Collectively, the advances described in this report should result in better control of bioreactor landfills and reductions in CH4 emissions. Several advances are important components of an Intelligent Bioreactor Management Information System (IBM-IS).
Mining Rare Events Data for Assessing Customer Attrition Risk
NASA Astrophysics Data System (ADS)
Au, Tom; Chin, Meei-Ling Ivy; Ma, Guangqin
Customer attrition refers to the phenomenon whereby a customer leaves a service provider. As competition intensifies, preventing customers from leaving is a major challenge to many businesses such as telecom service providers. Research has shown that retaining existing customers is more profitable than acquiring new customers due primarily to savings on acquisition costs, the higher volume of service consumption, and customer referrals. For a large enterprise whose customer base consists of tens of millions of service subscribers, events such as switching to competitors or canceling services are often large in absolute number but rare in percentage, far less than 5%. Based on a simple random sample, popular statistical procedures, such as logistic regression, tree-based methods and neural networks, can sharply underestimate the probability of rare events and often result in a null model (no significant predictors). To improve efficiency and accuracy of event probability estimation, a case-based data collection technique is then considered. A case-based sample is formed by taking all available events and a small, but representative, fraction of nonevents from a dataset of interest. In this article we show a consistent prior correction method for event probability estimation and demonstrate the performance of the above data collection techniques in predicting customer attrition with actual telecommunications data.
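A sketch of the prior-correction idea (in the style of King and Zeng's rare-events correction): fit a logistic model on a case-based sample of all events plus a fraction of nonevents, then shift the intercept back to the population event rate. The data are simulated, not telecom records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200_000
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-(-5.0 + 1.2 * x[:, 0])))   # rare events (~1% rate)
y = rng.binomial(1, p)
tau = y.mean()                                  # population event fraction

# Case-based sample: every event, plus 5% of nonevents.
keep = (y == 1) | (rng.random(n) < 0.05)
xs, ys = x[keep], y[keep]

clf = LogisticRegression(C=1e6).fit(xs, ys)     # near-unpenalized fit
ybar = ys.mean()                                # sample event fraction
# Prior correction: shift the intercept from the sample rate to tau.
b0 = clf.intercept_[0] - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
print(round(b0, 2), "vs true -5.0")
```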
Chang, Feng-Chih; Simcik, M.F.; Capel, P.D.
2011-01-01
This is the first report on the ambient levels of glyphosate, the most widely used herbicide in the United States, and its major degradation product, aminomethylphosphonic acid (AMPA), in air and rain. Concurrent, weekly integrated air particle and rain samples were collected during two growing seasons in agricultural areas in Mississippi and Iowa. Rain was also collected in Indiana in a preliminary phase of the study. The frequency of glyphosate detection ranged from 60 to 100% in both air and rain. The concentrations of glyphosate in rain samples ranged from <0.1 to 2.5 µg/L. The frequency of detection and the median and maximum concentrations of glyphosate in air were similar to or greater than those of the other high-use herbicides observed in the Mississippi River basin, whereas its concentration in rain was greater than that of the other herbicides. It is not known what percentage of the applied glyphosate is introduced into the air, but it was estimated that up to 0.7% of application is removed from the air in rainfall. Glyphosate is efficiently removed from the air; it is estimated that an average of 97% of the glyphosate in the air is removed by a weekly rainfall ≥30 mm.
Nathan, Lucas M; Simmons, Megan; Wegleitner, Benjamin J; Jerde, Christopher L; Mahon, Andrew R
2014-11-04
The use of molecular surveillance techniques has become popular among aquatic researchers and managers due to the improved sensitivity and efficiency compared to traditional sampling methods. Rapid expansion in the use of environmental DNA (eDNA), paired with the advancement of molecular technologies, has resulted in new detection platforms and techniques. In this study we present a comparison of three eDNA surveillance platforms: traditional polymerase chain reaction (PCR), quantitative PCR (qPCR), and digital droplet PCR (ddPCR), in which water samples were collected over a 24 h period from mesocosm experiments containing a gradient of invasive species densities. All platforms reliably detected the presence of DNA, even at low target organism densities, within the first hour. The two quantitative platforms (qPCR and ddPCR) produced similar estimates of DNA concentrations. Analysis with ddPCR was faster from sample collection through analysis and cost approximately half as much as qPCR. Although a newer platform for eDNA surveillance of aquatic species, ddPCR was consistent with the more commonly used qPCR and is a cost-effective means of estimating DNA concentrations. Use of ddPCR by researchers and managers should be considered in future eDNA surveillance applications.
Experimental studies and simulations of hydrogen pellet ablation in the stellarator TJ-II
NASA Astrophysics Data System (ADS)
Panadero, N.; McCarthy, K. J.; Koechl, F.; Baldzuhn, J.; Velasco, J. L.; Combs, S. K.; de la Cal, E.; García, R.; Hernández Sánchez, J.; Silvagni, D.; Turkin, Y.; TJ-II Team; W7-X Team
2018-02-01
Plasma core fuelling is a key issue for the development of steady-state scenarios in large magnetically-confined fusion devices, in particular for helical-type machines. At present, cryogenic pellet injection is the most promising technique for efficient fuelling. Here, pellet ablation and fuelling efficiency experiments, using a compact pellet injector, are carried out in electron cyclotron resonance and neutral beam injection heated plasmas of the stellarator TJ-II. Ablation profiles are reconstructed from light emissions collected by silicon photodiodes and a fast-frame camera system, under the assumptions that such emissions are loosely related to the ablation rate and that pellet radial acceleration is negligible. In addition, pellet particle deposition and fuelling efficiency are determined using density profiles provided by a Thomson scattering system. Furthermore, experimental results are compared with ablation and deposition profiles provided by the HPI2 pellet code, which is adapted here for the stellarators Wendelstein 7-X (W7-X) and TJ-II. Finally, the HPI2 code is used to simulate ablation and deposition profiles for pellets of different sizes and velocities injected into relevant W7-X plasma scenarios, while estimating the plasmoid drift and the fuelling efficiency of injections made from two W7-X ports.
Raynor, P C; Kim, B G; Ramachandran, G; Strommen, M R; Horns, J H; Streifel, A J
2008-02-01
Synthetic filters made from fibers carrying electrostatic charges and fiberglass filters that do not carry electrostatic charges are both utilized commonly in heating, ventilating, and air-conditioning (HVAC) systems. The pressure drop and efficiency of a bank of fiberglass filters and a bank of electrostatically charged synthetic filters were measured repeatedly for 13 weeks in operating HVAC systems at a hospital. Additionally, the efficiency with which new and used fiberglass and synthetic filters collected culturable biological particles was measured in a test apparatus. Pressure drop measurements adjusted to equivalent flows indicated that the synthetic filters operated with a pressure drop less than half that of the fiberglass filters throughout the test. When measured using total ambient particles, synthetic filter efficiency decreased during the test period for all particle diameters. For particles 0.7-1.0 µm in diameter, efficiency decreased from 92% to 44%. It is hypothesized that this reduction in collection efficiency may be due to charge shielding. Efficiency did not change significantly for the fiberglass filters during the test period. However, when measured using culturable biological particles in the ambient air, efficiency was essentially the same for new filters and filters used for 13 weeks in the hospital for both the synthetic and fiberglass filters. It is hypothesized that the lack of efficiency reduction for culturable particles may be due to their having higher charge than non-biological particles, allowing them to overcome the effects of charge shielding. The type of particles requiring capture may be an important consideration when comparing the relative performance of electrostatically charged synthetic and fiberglass filters. Electrostatically charged synthetic filters with high initial efficiency can frequently replace traditional fiberglass filters with lower efficiency in HVAC systems because properly designed synthetic filters offer less resistance to air flow. Although the efficiency of charged synthetic filters at collecting non-biological particles declined substantially with use, the efficiency of these filters at collecting biological particles remained steady. These findings suggest that the merits of electrostatically charged synthetic HVAC filters relative to fiberglass filters may be more pronounced if collection of biological particles is of primary concern.
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
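A sketch of equal-weight composite quantile regression for a single predictor, posed as a linear program: one common slope, one intercept per quantile level, minimizing the summed check losses. The paper's weighted variant adds data-driven weights per quantile, which this sketch omits; data are simulated.

```python
import numpy as np
from scipy.optimize import linprog

def composite_quantile_reg(x, y, taus):
    """Equal-weight CQR: common slope, one intercept per quantile level."""
    n, K = len(y), len(taus)
    # Variables: [b_1..b_K, beta, u (K*n), v (K*n)] with u, v >= 0 and
    # constraints b_k + x_i*beta + u_ki - v_ki = y_i.
    n_var = K + 1 + 2 * K * n
    c = np.zeros(n_var)
    A_eq = np.zeros((K * n, n_var))
    b_eq = np.zeros(K * n)
    for k, tau in enumerate(taus):
        rows = slice(k * n, (k + 1) * n)
        A_eq[rows, k] = 1.0                    # intercept b_k
        A_eq[rows, K] = x                      # common slope
        u0 = K + 1 + k * n
        v0 = K + 1 + K * n + k * n
        A_eq[rows, u0:u0 + n] = np.eye(n)      # + u (positive residual part)
        A_eq[rows, v0:v0 + n] = -np.eye(n)     # - v (negative residual part)
        b_eq[rows] = y
        c[u0:u0 + n] = tau                     # check-loss weights
        c[v0:v0 + n] = 1.0 - tau
    bounds = [(None, None)] * (K + 1) + [(0, None)] * (2 * K * n)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[K]                            # common slope estimate

rng = np.random.default_rng(5)
x = rng.normal(size=80)
y = 2.0 * x + rng.standard_t(df=2, size=80)    # heavy-tailed errors
print(round(composite_quantile_reg(x, y, [0.25, 0.5, 0.75]), 3))
```

Pooling several quantile levels is what buys the robustness and near-oracle efficiency the abstract describes under heavy-tailed errors.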
Spatially explicit shallow landslide susceptibility mapping over large areas
Bellugi, Dino; Dietrich, William E.; Stock, Jonathan D.; McKean, Jim; Kazian, Brian; Hargrove, Paul
2011-01-01
Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event, and to do so it has generated downscaled precipitation maps for the storm. To predict the corresponding pattern of shallow landslide susceptibility across the state, we have used the model Shalstab (a coupled steady-state runoff and infinite-slope stability model), which produces spatially explicit estimates of relative potential instability. Such slope stability models, which include the effects of subsurface runoff on potentially destabilizing pore pressure evolution, require water routing and hence the definition of the upslope drainage area of each grid cell. To calculate drainage area efficiently over a large area we developed a parallel framework to scale up Shalstab, and specifically introduce a new, efficient parallel drainage area algorithm that produces seamless results. The single seamless shallow landslide susceptibility map for all of California was computed in a short run time, indicating that much larger areas can be modelled efficiently. Because landslide maps generally over-predict the extent of instability for any given storm, local empirical data on the fraction of predicted unstable cells that failed under observed rainfall intensity can be used to specify the likely extent of hazard for a given storm. This suggests that campaigns to collect local precipitation data and detailed shallow landslide location maps after major storms could be used to calibrate models and improve their use in hazard assessment for individual storms.
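The water-routing step the abstract describes can be sketched serially: D8 flow accumulation passes each cell's accumulated area to its steepest downslope neighbor, visiting cells from highest to lowest. The paper's contribution is a parallel, seamless version of this computation; the sketch below shows only the underlying recurrence on a toy grid:

```python
# Serial D8 flow accumulation: each cell donates its accumulated area to its
# steepest downslope neighbor. Visiting cells from highest to lowest ensures
# every donor is processed before its receiver.
import numpy as np

def d8_drainage_area(elev, cell_area=1.0):
    rows, cols = elev.shape
    area = np.full(elev.shape, cell_area)
    order = np.dstack(np.unravel_index(np.argsort(-elev, axis=None),
                                       elev.shape))[0]
    for r, c in order:
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (elev[r, c] - elev[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:
            area[target] += area[r, c]
    return area

elev = np.array([[9., 8., 7.],
                 [8., 6., 5.],
                 [7., 5., 3.]])
print(d8_drainage_area(elev))  # the outlet cell (2,2) accumulates the grid
```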
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry into the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties when conducted with existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement for a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment based on their usefulness, so that as few designs as possible are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
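The design-optimization idea, selecting the next trial where the current model is least certain, can be sketched with much simpler machinery than the dissertation's Bayesian P-splines; here a bootstrap ensemble of polynomial fits stands in for the posterior, and all functions and settings are illustrative:

```python
# Design optimization by uncertainty sampling: run the next trial at the
# candidate design where the current model's predictions disagree most.
# A toy stand-in for RAAS; a bootstrap ensemble plays the role of a posterior.
import numpy as np

rng = np.random.default_rng(1)
truth = lambda x: np.sin(3 * x)          # unknown data-generating mechanism
candidates = np.linspace(0, 2, 101)      # candidate designs

x_obs = list(rng.uniform(0, 2, 3))       # small seed sample
y_obs = [truth(x) + 0.1 * rng.normal() for x in x_obs]

for trial in range(15):
    x, y = np.array(x_obs), np.array(y_obs)
    preds = []
    for _ in range(50):                  # bootstrap ensemble of fits
        idx = rng.integers(0, len(x), len(x))
        coef = np.polyfit(x[idx], y[idx], deg=min(4, len(x) - 1))
        preds.append(np.polyval(coef, candidates))
    spread = np.std(preds, axis=0)       # predictive uncertainty per design
    x_next = candidates[np.argmax(spread)]
    x_obs.append(x_next)
    y_obs.append(truth(x_next) + 0.1 * rng.normal())

print(f"final sample size: {len(x_obs)}")
```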
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
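The budget-constrained step can be illustrated by brute force: search integer allocations of panel and fresh cross-section units under a cost cap and keep the lowest-variance design. The variance and cost functions below are hypothetical placeholders, not the paper's expressions:

```python
# Brute-force version of the budget-constrained design search: choose the
# split of panel vs. cross-section units minimizing estimator variance
# subject to a cost cap. Variance and cost functions are invented stand-ins.
def var_estimate(n_panel, n_cross, rho=0.6):
    if n_panel + n_cross == 0:
        return float("inf")
    # Two correlated panel waves carry less information than two fresh units.
    eff_n = n_panel * 2 / (1 + rho) + n_cross
    return 1.0 / eff_n

def cost(n_panel, n_cross, c_panel=3.0, c_cross=2.0):
    return c_panel * n_panel + c_cross * n_cross

budget = 600.0
best = min(((var_estimate(p, c), p, c)
            for p in range(0, 201) for c in range(0, 301)
            if cost(p, c) <= budget),
           key=lambda t: t[0])
print(best)  # (minimal variance, n_panel, n_cross)
```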
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-21
...) of CAA and 40 CFR Part 1042, Subpart D). Estimated number of respondents: 200 (total, including..., depending on the program. Total estimated burden: 3,012 hours per year. Burden is defined at 5 CFR 1320.03(b) Total estimated cost: Estimated total annual costs: $200,000 (per year), includes an estimated $65,155...
77 FR 31616 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
...: Generic Clearance for the Collection of Customer Satisfaction Surveys; Use: This collection of information is necessary to enable the Agency to garner customer and stakeholder feedback in an efficient, timely manner... customers and stakeholders will help ensure that users have an effective, efficient, and satisfying...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... of appropriate automated, electronic, mechanical, or other technological collection techniques or... business structure. (5) An estimate of the total number of respondents and the amount of time estimated for...
Korkut, Nafiz E; Yaman, Cevat; Küçükağa, Yusuf; Jaunich, Megan K; Demir, İbrahim
2018-02-01
This article estimates greenhouse gas emissions and global warming factors resulting from the collection of municipal solid waste to the transfer stations or landfills in Istanbul for the year 2015. The aim of this study is to quantify and compare diesel fuel consumption and estimate the greenhouse gas emissions and global warming factors associated with municipal solid waste collection in the 39 districts of Istanbul. Each district's greenhouse gas emissions resulting from the provision and combustion of diesel fuel were estimated by considering the number of collection trips and the distances to municipal solid waste facilities. The estimated greenhouse gases and global warming factors for the districts varied from 61.2 to 2759.1 t CO2-eq and from 4.60 to 15.20 kg CO2-eq t⁻¹, respectively. The total greenhouse gas emission was estimated as 46.4 × 10³ t CO2-eq. Lastly, the collection data from the districts were used to parameterise a collection model that can be used to estimate fuel consumption associated with municipal solid waste collection. This mechanistic model can then be used to predict future fuel consumption and greenhouse gas emissions associated with municipal solid waste collection based on projected population, waste generation, and distance to transfer stations and landfills. The greenhouse gas emissions can be reduced by decreasing trip numbers and trip distances, building more transfer stations around the city, and making sure that the collection trucks are full on each trip.
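A minimal fuel-based sketch of the per-district calculation, using invented fleet figures and the commonly used combustion factor of roughly 2.68 kg CO2 per litre of diesel (a provision/upstream factor would be added on top):

```python
# Fuel-based GHG estimate for one district's collection fleet. All fleet
# figures are illustrative; only the ~2.68 kg CO2/L diesel combustion factor
# is a commonly used reference value.
trips_per_year = 5000
round_trip_km = 40.0
fuel_use_l_per_km = 0.45        # heavy collection truck, assumed
ef_combustion = 2.68            # kg CO2-eq per litre of diesel burned

litres = trips_per_year * round_trip_km * fuel_use_l_per_km
emissions_t = litres * ef_combustion / 1000.0
print(f"{emissions_t:.1f} t CO2-eq per year")  # ~241 t for these inputs
```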
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Runsheng; Yu, Yamei
2010-09-15
A new design concept, called one-axis three-position sun-tracking polar-axis aligned CPCs (3P-CPCs, in short), was proposed and theoretically studied in this work for photovoltaic applications. The proposed trough-like CPC is oriented in the polar-axis direction, and the aperture is adjusted daily eastward, southward, and westward in the morning, at noon and in the afternoon, respectively, by rotating the CPC trough, to ensure efficient collection of beam radiation nearly all day. To investigate the optical performance of such CPCs, an analytical mathematical procedure is developed to estimate the daily and annual solar gain captured by such CPCs based on extraterrestrial radiation and monthly horizontal radiation. Results show that the acceptance half-angle of 3P-CPCs is a unique parameter determining their optical performance according to extraterrestrial radiation, and the annual solar gain stays constant if the acceptance half-angle, θa, is less than one third of ω0,min, the sunset hour angle on the winter solstice, and otherwise decreases with increasing θa. For 3P-CPCs used in China, the annual solar gain, depending on the climatic conditions on site, decreased with the acceptance half-angle, but the decrease was slow for the case of θa ≤ ω0,min/3, indicating that the acceptance half-angle should be kept below one third of ω0,min to maximize annual energy collection. Compared to fixed east-west aligned CPCs (EW-CPCs) with a yearly optimal acceptance half-angle, fixed south-facing polar-axis aligned CPCs (1P-CPCs) with the same acceptance half-angle as the EW-CPCs annually collected about 65-74% of what the EW-CPCs did, whereas 3P-CPCs annually collected 1.26-1.45 times what the EW-CPCs collected, indicating that 3P-CPCs are more efficient at concentrating solar radiation onto their coupled solar cells.
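The θa ≤ ω0,min/3 design rule is easy to evaluate from standard solar geometry, cos ω0 = −tan φ · tan δ; the latitude below is an assumed example, not a site from the study:

```python
# Checking the design rule from the abstract: annual gain of a 3P-CPC stays
# flat while the acceptance half-angle theta_a <= omega_0,min / 3, where
# omega_0,min is the sunset hour angle on the winter solstice.
import math

def sunset_hour_angle_deg(latitude_deg, declination_deg):
    # Standard relation: cos(omega_0) = -tan(latitude) * tan(declination).
    phi, delta = map(math.radians, (latitude_deg, declination_deg))
    return math.degrees(math.acos(-math.tan(phi) * math.tan(delta)))

latitude = 30.0                                      # site latitude, assumed
omega_min = sunset_hour_angle_deg(latitude, -23.45)  # winter solstice
print(f"omega_0,min = {omega_min:.1f} deg, "
      f"so theta_a should be <= {omega_min / 3.0:.1f} deg")
```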
Potentials for Platooning in U.S. Highway Freight Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muratori, Matteo; Holden, Jacob; Lammert, Michael
2017-03-28
Smart technologies enabling connection among vehicles and between vehicles and infrastructure, as well as vehicle automation to assist human operators, are receiving significant attention as a means of improving road transportation systems by reducing fuel consumption - and related emissions - while also providing additional benefits through improved overall traffic safety and efficiency. For truck applications, which are currently responsible for nearly three-quarters of the total U.S. freight energy use and greenhouse gas (GHG) emissions, platooning has been identified as an early feature for connected and automated vehicles (CAVs) that could provide significant fuel savings and improved traffic safety and efficiency without radical design or technology changes compared to existing vehicles. A statistical analysis was performed based on a large collection of real-world U.S. truck usage data to estimate the fraction of total miles that are technically suitable for platooning. In particular, our analysis focused on estimating 'platoonable' mileage based on overall highway vehicle use and prolonged high-velocity traveling, and established that about 65% of the total miles driven by combination trucks in this data sample could be driven in platoon formation, leading to a 4% reduction in total truck fuel consumption. This technical potential for 'platoonable' miles in the United States provides an upper bound for scenario analysis considering fleet willingness and convenience to platoon as an estimate of the overall benefits of early adoption of connected and automated vehicle technologies. A benefit analysis is proposed to assess the overall potential for energy savings and emissions mitigation from widespread implementation of highway platooning for trucks.
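Linking the two headline numbers is simple arithmetic: if 65% of miles are platoonable and total fuel use falls by 4%, the implied average saving while actually platooning is about 6%. This back-calculation is our inference, not a figure stated in the study:

```python
# Back-of-the-envelope link between the two headline numbers:
# total reduction = platoonable share x saving while platooning.
platoonable_share = 0.65
total_fuel_reduction = 0.04
saving_while_platooning = total_fuel_reduction / platoonable_share
print(f"{saving_while_platooning:.1%}")  # ~6.2% while in platoon formation
```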
Advanced uncertainty modelling for container port risk analysis.
Alyami, Hani; Yang, Zaili; Riahi, Ramin; Bonsall, Stephen; Wang, Jin
2016-08-13
Globalization has led to a rapid increase of container movements in seaports. Risks in seaports need to be appropriately addressed to ensure economic wealth, operational efficiency, and personnel safety. As a result, the safety performance of a Container Terminal Operational System (CTOS) plays a growing role in improving the efficiency of international trade. This paper proposes a novel method to facilitate the application of Failure Mode and Effects Analysis (FMEA) in assessing the safety performance of a CTOS. The new approach is developed by incorporating a Fuzzy Rule-Based Bayesian Network (FRBN) with Evidential Reasoning (ER) in a complementary manner. The former provides a realistic and flexible method to describe input failure information for risk estimates of individual hazardous events (HEs) at the bottom level of a risk analysis hierarchy. The latter is used to aggregate the HEs' safety estimates collectively, allowing dynamic risk-based decision support in a CTOS from a systematic perspective. The novel feature of the proposed method, compared to traditional port risk analysis methods, lies in a dynamic model capable of dealing with continually changing operational conditions in ports. More importantly, a new sensitivity analysis method is developed and carried out to rank the HEs by taking into account their specific risk estimates (locally) and their Risk Influence (RI) on a port's safety system (globally). Due to its generality, the new approach can be tailored for a wide range of applications in different safety and reliability engineering and management systems, particularly when real-time risk ranking is required to measure, predict, and improve the associated system safety performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hossain, Md. Kamrul; Kamil, Anton Abdulbasah; Baten, Md. Azizul; Mustafa, Adli
2012-01-01
The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro) using the most recent data available, comprising the period 1989–2008. Results indicate that technical efficiency was higher for Boro than for the other two types of rice, while the overall technical efficiency of rice production was around 50%. Although positive changes in TFP exist for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same level by both the Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecast with an ARIMA(2,0,0) model; an ARIMA(1,0,0) model is used to forecast the TFP of Aman from the DEA estimation. PMID:23077500
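The DEA side of such an analysis reduces to one small linear program per decision-making unit. A sketch of the input-oriented CCR model on toy data (not the paper's rice panel):

```python
# Input-oriented CCR DEA via linear programming: for each decision-making
# unit (DMU) o, minimize theta subject to the reference technology producing
# DMU o's outputs with at most theta times its inputs. Toy data only.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """X: inputs (n_dmu, n_in); Y: outputs (n_dmu, n_out). Returns thetas."""
    n, m = X.shape
    s = Y.shape[1]
    thetas = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]            # decision vars: [theta, lambda]
        A_in = np.c_[-X[o][:, None], X.T]      # X' lam <= theta * x_o
        A_out = np.c_[np.zeros((s, 1)), -Y.T]  # Y' lam >= y_o
        res = linprog(c, A_ub=np.r_[A_in, A_out],
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        thetas.append(res.fun)
    return np.array(thetas)

X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 6.0]])  # two inputs per unit
Y = np.array([[1.0], [1.0], [1.0]])                  # one output per unit
print(dea_ccr_input(X, Y))  # frontier units score 1.0, the third ~0.5
```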
On the mean radiative efficiency of accreting massive black holes in AGNs and QSOs
NASA Astrophysics Data System (ADS)
Zhang, XiaoXia; Lu, YouJun
2017-10-01
Radiative efficiency is an important physical parameter that describes the fraction of accreted material converted to radiative energy in accretion onto massive black holes (MBHs). With the simplest Sołtan argument, the radiative efficiency of MBHs can be estimated by matching the mass density of MBHs in the local universe to the mass density accreted by MBHs during AGN/QSO phases. In this paper, we estimate the local MBH mass density through a combination of various determinations of the correlations between the masses of MBHs and the properties of MBH host galaxies, with the distribution functions of those galaxy properties. We also estimate the total energy density radiated by AGNs and QSOs by using various AGN/QSO X-ray luminosity functions in the literature. We then obtain several hundred estimates of the mean radiative efficiency of AGNs/QSOs. Under the assumption that those estimates are independent of each other and free of systematic effects, we apply the median statistics described by Gott et al. and find that the mean radiative efficiency of AGNs/QSOs is ε = 0.105 (+0.006/−0.008), which is consistent with the canonical value of 0.1. Considering that about 20% of objects may be Compton thick and missed from currently available X-ray surveys, the true mean radiative efficiency may actually be about 0.12.
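The Sołtan matching itself is one line of algebra: if accretion radiates energy density U while building up the local MBH mass density ρ_BH, then ρ_BH c² = U(1−ε)/ε, so ε = U/(U + ρ_BH c²). The densities below are round placeholder values, not the paper's fits:

```python
# Soltan-argument arithmetic: eps = U / (U + rho_BH * c^2), where U is the
# comoving radiated energy density and rho_BH the local MBH mass density.
# Both density values below are assumed round numbers, not measurements.
MSUN, MPC, C = 1.989e33, 3.086e24, 2.998e10   # cgs units
rho_bh = 4.0e5 * MSUN / MPC**3                # ~4e5 Msun/Mpc^3, assumed
U = 2.9e-15                                   # erg/cm^3 radiated, assumed
eps = U / (U + rho_bh * C**2)
print(f"mean radiative efficiency eps ~ {eps:.3f}")  # ~0.1
```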
Davis, M E; Rutledge, J J; Cundiff, L V; Hauser, E R
1983-10-01
Several measures of life cycle cow efficiency were calculated using weights and individual feed consumptions recorded on 160 dams of beef, dairy and beef X dairy breeding and their progeny. Ratios of output to input were used to estimate efficiency, where outputs included weaning weights of progeny plus salvage value of the dam and inputs included creep feed consumed by progeny plus feed consumed by the dam over her entire lifetime. In one approach to estimating efficiency, inputs and outputs were weighted by probabilities that were a function of the cow herd age distribution and percentage calf crop in a theoretical herd. The second approach to estimating cow efficiency involved dividing the sum of the weights by the sum of the feed consumption values, with all pieces of information being given equal weighting. Relationships among efficiency estimates and various traits of dams and progeny were examined. Weights, heights, and weight:height ratios of dams at 240 d of age were not correlated significantly with subsequent efficiency of calf production, indicating that indirect selection for lifetime cow efficiency at an early age based on these traits would be ineffective. However, females exhibiting more efficient weight gains from 240 d to first calving tended to become more efficient dams. Correlations of efficiency with weight of dam at calving and at weaning were negative and generally highly significant. Height at withers was negatively related to efficiency. Ratio of weight to height indicated that fatter dams generally were less efficient. The effect of milk production on efficiency depended upon the breed combinations involved. Dams calving for the first time at an early age and continuing to calve at short intervals were superior in efficiency. Weaning rate was closely related to life cycle efficiency. Large negative correlations between efficiency and feed consumption of dams were observed, while correlations of efficiency with progeny weights and feed consumptions in individual parities tended to be positive though nonsignificant. However, correlations of efficiency with accumulative progeny weights and feed consumptions generally were significant.
Geospatial Representation, Analysis and Computing Using Bandlimited Functions
2010-02-19
navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many...efficient on today’s computers. Under this grant new, computationally efficient, localized representations of gravity have been developed and tested. As a...step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed
Estimate of net trophic transfer efficiency of PCBs to Lake Michigan lake trout from their prey
Madenjian, Charles P.; Hesselberg, Robert J.; DeSorcie, Timothy J.; Schmidt, Larry J.; Stedman, Ralph M.; Quintal, Richard T.; Begnoche, Linda J.; Passino-Reader, Dora R.
1998-01-01
Most of the polychlorinated biphenyl (PCB) body burden accumulated by lake trout (Salvelinus namaycush) from the Laurentian Great Lakes is from their food. We used diet information, PCB determinations in both lake trout and their prey, and bioenergetics modeling to estimate the efficiency with which Lake Michigan lake trout retain PCBs from their food. Our estimates were the most reliable estimates to date because (a) the lake trout and prey fish sampled during our study were all from the same vicinity of the lake, (b) detailed measurements were made of the PCB concentrations of both lake trout and prey fish over wide ranges in fish size, and (c) lake trout diet was analyzed in detail over a wide range of lake trout size. Our estimates of the net trophic transfer efficiency of PCBs to lake trout from their prey ranged from 0.73 to 0.89 for lake trout between the ages of 5 and 10 years. There was no evidence of an upward or downward trend in our estimates of net trophic transfer efficiency for lake trout between the ages of 5 and 10 years, and therefore this efficiency appeared to be constant over the duration of the lake trout's adult life in the lake. On the basis of our estimates, lake trout retained about 80% of the PCBs contained in their food.
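The efficiency being estimated is, in essence, accumulated body burden divided by PCB ingested, with ingestion supplied by the bioenergetics model's consumption estimate. A sketch with invented inputs:

```python
# Net trophic transfer efficiency: fraction of ingested PCB retained in the
# body burden. Cumulative consumption would come from a bioenergetics model;
# every number below is illustrative, not from the study.
cum_consumption_g = 12000.0          # prey eaten over the period, assumed
prey_pcb_ug_per_g = 0.8              # mean prey PCB concentration, assumed
burden_start_ug, burden_end_ug = 1500.0, 9200.0

ingested_ug = cum_consumption_g * prey_pcb_ug_per_g
efficiency = (burden_end_ug - burden_start_ug) / ingested_ug
print(f"net trophic transfer efficiency ~ {efficiency:.2f}")  # ~0.80
```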
Mixing Efficiency in the Ocean.
Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E
2018-01-03
Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
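The parameterization referred to here is the Osborn relation, which converts a measured dissipation rate ε and buoyancy frequency N into a diapycnal diffusivity via a mixing coefficient Γ; with Γ = 0.2 (the value the tracer-release comparisons support) and typical, assumed pycnocline values:

```python
# Osborn relation: K_rho = Gamma * epsilon / N^2, turning a turbulent
# dissipation rate into a diapycnal diffusivity. Input values are typical
# assumed mid-ocean numbers, not figures from the review.
gamma = 0.2          # mixing coefficient verified against tracer releases
epsilon = 1e-9       # W/kg, dissipation rate (assumed)
N = 2e-3             # 1/s, buoyancy frequency (assumed)
K_rho = gamma * epsilon / N**2
print(f"K_rho = {K_rho:.2e} m^2/s")  # ~5e-5 m^2/s
```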
Exergetic analysis of autonomous power complex for drilling rig
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Karabuta, V. S.
2017-10-01
The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in such power supply systems. When designing and choosing a power plant, one of the main criteria is its energy efficiency, the usual indicator being the effective efficiency factor calculated by the method of thermal balances. In this article, the exergy method is suggested instead for determining energy efficiency: it allows the degree of thermodynamic perfection of the system to be assessed both relatively (the exergetic efficiency factor) and absolutely, illustrated here for a gas turbine plant. An exergetic analysis of a gas turbine plant operating on a simple cycle was carried out using the program WaterSteamPro. Exergy losses in the equipment elements are calculated.
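A sketch of the exergy accounting for a single turbine stage, using flow exergy e = (h − h0) − T0(s − s0); the state-point values are placeholders, not outputs of WaterSteamPro:

```python
# Exergetic vs. first-law accounting for a turbine stage. Flow exergy:
# e = (h - h0) - T0 * (s - s0); exergetic efficiency is shaft work over the
# exergy actually consumed. All state values are assumed placeholders.
T0 = 288.15                      # dead-state temperature, K
h_in, s_in = 3230.0, 6.92        # kJ/kg and kJ/(kg*K), assumed inlet state
h_out, s_out = 2680.0, 7.12      # assumed outlet state
h0, s0 = 63.0, 0.22              # assumed dead state

e_in = (h_in - h0) - T0 * (s_in - s0)
e_out = (h_out - h0) - T0 * (s_out - s0)
work = h_in - h_out              # kJ/kg, adiabatic shaft work
print(f"exergetic efficiency = {work / (e_in - e_out):.2f}")  # ~0.91
```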
Sample collection of virulent and non-virulent B. anthracis and Y. pestis for bioforensics analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong-geller, Elizabeth; Valdez, Yolanda E; Shou, Yulin
2009-01-01
Validated sample collection methods are needed for recovery of microbial evidence in the event of accidental or intentional release of biological agents into the environment. To address this need, we evaluated the sample recovery efficiencies of two collection methods -- swabs and wipes -- for both non-virulent and virulent strains of B. anthracis and Y. pestis from four types of non-porous surfaces: two hydrophilic surfaces, stainless steel and glass, and two hydrophobic surfaces, vinyl and plastic. Sample recovery was quantified using real-time qPCR to assay for intact DNA signatures. We found no consistent difference in collection efficiency between swabs and wipes. Furthermore, collection efficiency was more surface-dependent for virulent strains than for non-virulent strains. For the two non-virulent strains, B. anthracis Sterne and Y. pestis A1122, collection efficiency was approximately 100% and 1%, respectively, from all four surfaces. In contrast, recovery of B. anthracis Ames spores and Y. pestis CO92 from vinyl and plastic was generally lower compared to collection from glass or stainless steel, suggesting that surface hydrophobicity may play a role in the strength of pathogen adhesion. The surface-dependent collection efficiencies observed with the virulent strains may arise from strain-specific expression of capsular material or other cell surface receptors that alter cell adhesion to specific surfaces. These findings contribute to the validation of standard bioforensics procedures and emphasize the importance of specific strain and surface interactions in pathogen detection.
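Recovery efficiencies of this kind are typically backed out of qPCR cycle thresholds via a standard curve, Ct = m·log10(copies) + b. The curve parameters and readings below are invented for illustration, not taken from the study:

```python
# Percent recovery from qPCR readouts using a standard curve fitted to
# dilution standards. Slope -3.32 corresponds to ~100% PCR efficiency;
# the intercept and Ct value are hypothetical.
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    return 10 ** ((ct - intercept) / slope)

spiked_copies = 1.0e6                 # DNA copies applied to coupon, assumed
ct_swab = 21.5                        # measured Ct after collection, assumed
recovered = copies_from_ct(ct_swab)
print(f"recovery = {100 * recovered / spiked_copies:.0f}%")
```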
Digital voice recording: An efficient alternative for data collection
Mark A. Rumble; Thomas M. Juntti; Thomas W. Bonnot; Joshua J. Millspaugh
2009-01-01
Study designs are usually constrained by logistical and budgetary considerations that can affect the depth and breadth of the research. Little attention has been paid to increasing the efficiency of data recording. Digital voice recording and translation may offer improved efficiency of field personnel. Using this technology, we increased our data collection by 55...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
.... Estimated Total Burden Hours: 222,924. Estimated Cost (Operation and Maintenance): $0. IV. Public... costs) is minimal, collection instruments are clearly understood, and OSHA's estimate of the information... of OSHA's estimate of the burden (time and costs) of the information collection requirements...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... retain a benefit. Frequency of Collection: Ongoing. Estimated Annual Number of Respondents: 750. Estimated Total Annual Responses: 750. Estimated Time Per Response: 1 hour Estimated Total Annual Burden..., purchased, sold, or otherwise transferred; and (2) the dates of these transactions. Accredited wildlife...
How EIA Estimates Natural Gas Production
2004-01-01
The Energy Information Administration (EIA) publishes monthly and annual estimates of natural gas production in the United States. The estimates are based on data EIA collects from gas-producing states and data collected by the U.S. Minerals Management Service (MMS) in the Department of the Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies, described in this document, to generate monthly production estimates.
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
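The simplest version of the dual-gear logic is a two-sample mark-recapture abundance estimate, from which capture efficiency follows as catch over estimated abundance. The study fits richer conditional multinomial models; the Chapman-modified Lincoln-Petersen sketch below uses hypothetical counts:

```python
# Two-sample mark-recapture sketch: abundance from the Chapman-modified
# Lincoln-Petersen estimator, then capture efficiency as catch / abundance.
# All counts are hypothetical.
def chapman_abundance(marked, caught, recaptured):
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

marked = 60        # fish marked on the first gear pass
caught = 45        # fish taken on the second pass
recaptured = 18    # marked fish among the second-pass catch
N_hat = chapman_abundance(marked, caught, recaptured)
print(f"N ~ {N_hat:.0f}, single-pass efficiency ~ {caught / N_hat:.2f}")
```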
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... effort to reduce paperwork and respondent burden, invites the general public and other Federal agencies.... Estimated response time per survey: 1 hour. Estimated number of respondents per survey: 850. Total Annual Burden: 12,500 hours. General Description of Collection: The information collected in these...
On the wind-induced undercatch in rainfall measurement using CFD-based simulations
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca
2016-04-01
The reliability of liquid atmospheric precipitation measurements is a basic requirement, since rainfall data represent the fundamental input variables of many scientific applications (hydrologic models, weather forecasting data assimilation, climate change studies, calibration of weather radar, etc.). The scientific community and the National Meteorological Services worldwide are facing the issue of improving the accuracy of precipitation measurements, with an increased focus on retrieving the information at high temporal resolution. Rainfall intensity is indeed fundamental information for the precise quantification of the markedly time-varying behavior of precipitation events. Environmental conditions have a significant impact on the rain collection/sensing efficiency. Among other effects, wind is recognized as a major source of underestimation, since it reduces the collection efficiency of catching-type gauges (Nespor and Sevruk, 1999), the most common type of instrument used worldwide in national observation networks. The collection efficiency is usually obtained by comparing the rainfall amounts measured by the gauge with a reference, defined by the EN 13798 standard (CEN, 2002) as a gauge placed below ground level inside a pit. A lot of scatter can be observed for a given wind speed, mainly caused by comparability issues among the tested gauges. An additional source of uncertainty is the drop size distribution (DSD) of the rain, which varies on an event-by-event basis. The goal of this study is to understand the role of the physical characteristics of precipitation particles in the wind-induced rainfall underestimation observed for catching-type gauges. To address this issue, a detailed analysis of the flow field in the vicinity of the gauge is conducted using time-averaged computational fluid dynamics (CFD) simulations (Colli et al., 2015). Using a Lagrangian model that accounts for the hydrodynamic behavior of liquid particles in the atmosphere, droplet trajectories are calculated to obtain the collection efficiency associated with different drop size distributions and varying wind speeds. The main benefit of investigating this error by means of CFD simulations is the possibility of singling out the prevailing environmental factors from the instrumental performance of the gauges under analysis. The preliminary analysis shows the variations in catch efficiency due to horizontal wind speed and the DSD. Overall, this study contributes to a better understanding of the environmental sources of uncertainty in rainfall measurements. References: Colli, M., Rasmussen, R., Theriault, J. M., Lanza, L. G., Baker, C. B. and Kochendorfer, J. (2015). An improved trajectory model to evaluate the collection performance of snow gauges. Journal of Applied Meteorology and Climatology, 54, 1826-1836. Nespor, V. and Sevruk, B. (1999). Estimation of wind-induced error of rainfall gauge measurements using a numerical simulation. J. Atmos. Ocean. Tech., 16(4), 450-464. CEN (2002). EN 13798:2002 Hydrometry - Specification for a reference raingauge pit. European Committee for Standardization.
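A crude indicator of the effect, short of full CFD, is the steepness of a drop's undisturbed trajectory (terminal fall speed over wind speed): the flatter the trajectory, i.e. small drops in strong wind, the more the distorted flow above the orifice can deflect it past the gauge. The fall-speed fit below is the standard Atlas et al. (1973)-type exponential, and the whole calculation is only a qualitative stand-in for the Lagrangian CFD analysis:

```python
# Trajectory steepness w_t / U as a rough undercatch indicator: low values
# (small drops, strong wind) correspond to drops most easily deflected by
# the distorted flow over the gauge orifice.
import numpy as np

def terminal_velocity(d_mm):
    # Approximate raindrop fall speed in still air (m/s), d in mm,
    # Atlas et al. (1973)-type exponential fit.
    return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

for wind in (2.0, 6.0, 10.0):                 # m/s, horizontal wind speed
    for d in (0.5, 1.0, 3.0):                 # mm, drop diameter
        slope = terminal_velocity(d) / wind   # trajectory steepness
        print(f"U={wind:4.1f} m/s  d={d:.1f} mm  w_t/U={slope:4.2f}")
```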
Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawding, Dan; Hillson, Todd D.
2003-11-15
Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness-of-fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using the Akaike Information Criterion. Final chum salmon escapement estimates (with standard errors) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining were conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise, but batch marks and the lack of secondary studies made it difficult to test the Jolly-Seber assumptions necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness-of-fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss, and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of the Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
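The Area-Under-the-Curve correction discussed above divides total fish-days by residence time and observer efficiency; a sketch with hypothetical survey counts:

```python
# Area-Under-the-Curve escapement: total fish-days (trapezoidal area under
# the live-count curve) divided by residence time and observer efficiency.
# Survey counts, residence time, and efficiency below are hypothetical.
import numpy as np

days = np.array([0, 7, 14, 21, 28, 35])
counts = np.array([40, 310, 620, 480, 150, 10])   # live fish seen per survey
fish_days = ((counts[1:] + counts[:-1]) / 2 * np.diff(days)).sum()
residence_days = 10.0                             # as assumed in the study
observer_efficiency = 0.75                        # <1.0, unlike the old 100%
escapement = fish_days / (residence_days * observer_efficiency)
print(f"AUC escapement ~ {escapement:.0f} fish")
```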
76 FR 60853 - Agency Information Collection Activities: Documents Required Aboard Private Aircraft
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-30
... respondents or record keepers from the collection of information (a total of capital/startup costs and.... Estimated Number of Respondents: 120,000. Estimated Number of Annual Responses: 120,000. Estimated Time per...
Prinja, Shankar; Manchanda, Neha; Aggarwal, Arun Kumar; Kaur, Manmeet; Jeet, Gursimer; Kumar, Rajesh
2013-01-01
Background & objectives: Various models of referral transport services have been introduced in different States in India with an aim to reduce maternal and infant mortality. Most of the research on referral transport has focussed on coverage, quality and timeliness of the service with not much information on cost and efficiency. This study was undertaken to analyze the cost of a publicly financed and managed referral transport service model in three districts of Haryana State, and to assess its cost and technical efficiency. Methods: Data on all resources spent for delivering referral transport service, during 2010, were collected from three districts of Haryana State. Costs incurred at State level were apportioned using appropriate methods. Data Envelopment Analysis (DEA) technique was used to assess the technical efficiency of ambulances. To estimate the efficient scale of operation for ambulance service, the average cost was regressed on kilometres travelled for each ambulance station using a quadratic regression equation. Results: The cost of referral transport per year varied from ₹5.2 million in Narnaul to ₹9.8 million in Ambala. Salaries (36-50%) constituted the major cost. Referral transport was found to be operating at an average efficiency level of 76.8 per cent. Operating an ambulance with a patient load of 137 per month was found to reduce unit costs from an average ₹ 15.5 per km to ₹ 9.57 per km. Interpretation & conclusions: Our results showed that the publicly delivered referral transport services in Haryana were operating at an efficient level. Increasing the demand for referral transport services among the target population represents an opportunity for further improving the efficiency of the underutilized ambulances. PMID:24521648
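The scale-efficiency step reduces to fitting AC = c0 + c1·km + c2·km² and reading off the cost-minimizing utilisation at −c1/(2c2); the data points below are invented for illustration, not the Haryana figures:

```python
# Quadratic average-cost curve for ambulance utilisation: fit AC on monthly
# kilometres and locate the minimum of the fitted parabola. Invented data.
import numpy as np

km = np.array([800, 1200, 1800, 2500, 3200, 4100, 5000], float)  # per month
avg_cost = np.array([18.2, 14.9, 12.1, 10.4, 9.7, 9.6, 9.8])     # Rs per km

c2, c1, c0 = np.polyfit(km, avg_cost, 2)   # AC = c2*km^2 + c1*km + c0
km_min = -c1 / (2 * c2)                    # utilisation minimizing unit cost
print(f"average cost bottoms out near {km_min:.0f} km per month")
```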