75 FR 41556 - Proposed Collection Renewal; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-16
... global education in the classroom. Estimated annual number of respondents: 300. Estimated average time to... the annual World Wise Schools Conference. The information is used as a record of attendance. 2. Title... global education in the classroom. Estimated annual number of responses: 300. Estimated average time to...
An Estimate of North Atlantic Basin Tropical Cyclone Activity for 2008
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2008-01-01
The statistics of North Atlantic basin tropical cyclones for the interval 1945-2007 are examined and estimates are given for the frequencies of occurrence of the number of tropical cyclones, number of hurricanes, number of major hurricanes, number of category 4/5 hurricanes, and number of U.S. land-falling hurricanes for the 2008 hurricane season. Also examined are the variations of peak wind speed, average peak wind speed per storm, lowest pressure, average lowest pressure per storm, recurrence rate and duration of extreme events (El Nino and La Nina), the variation of 10-yr moving averages of parametric first differences, and the association of decadal averages of frequencies of occurrence of North Atlantic basin tropical cyclones with decadal averages of Armagh Observatory, Northern Ireland, annual mean temperature (found to be extremely important for number of tropical cyclones and number of hurricanes). Because the 2008 hurricane season seems destined to be one that is non-El Nino-related and is a post-1995 season, estimates of the frequencies of occurrence for the various subsets of storms should be above long-term averages.
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10³ particles/cc and 1.3 × 10⁵ particles/cc, and averaged 2.5 × 10⁴ particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10¹⁴ particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economic aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
Quantification of hookworm ova from wastewater matrices using quantitative PCR.
Gyawali, Pradip; Ahmed, Warish; Sidhu, Jatinder P; Jagals, Paul; Toze, Simon
2017-07-01
A quantitative PCR (qPCR) assay was used to quantify Ancylostoma caninum ova in wastewater and sludge samples. We estimated the average gene copy numbers for a single ovum using a mixed population of ova. The average gene copy numbers derived from the mixed population were used to estimate numbers of hookworm ova in A. caninum seeded and unseeded wastewater and sludge samples. The newly developed qPCR assay estimated an average of 3.7×10³ gene copies per ovum, which was then validated by seeding known numbers of hookworm ova into treated wastewater. The qPCR estimated an average of (1.1±0.1), (8.6±2.9) and (67.3±10.4) ova for treated wastewater that was seeded with (1±0), (10±2) and (100±21) ova, respectively. The further application of the qPCR assay for the quantification of A. caninum ova was determined by seeding known numbers of ova into the wastewater matrices. The qPCR results indicated that 50%, 90% and 67% of treated wastewater (1 L), raw wastewater (1 L) and sludge (~4 g) samples had variable numbers of A. caninum gene copies. After conversion of the qPCR-estimated gene copy numbers to ova, the treated wastewater, raw wastewater, and sludge samples had averages of 0.02, 1.24 and 67 ova, respectively. The results of this study indicated that qPCR can be used for the quantification of hookworm ova from wastewater and sludge samples; however, caution is advised in interpreting qPCR-generated data for health risk assessment. Copyright © 2017. Published by Elsevier B.V.
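The copy-to-ovum conversion described in this abstract is simple arithmetic. The sketch below applies the reported average of 3.7×10³ gene copies per ovum; the function name and the fixed calibration constant are illustrative, not part of the published assay protocol.

```python
# Convert a qPCR gene copy estimate into an approximate hookworm ova count,
# using the study's reported average of 3.7e3 gene copies per ovum.
# In practice the calibration would come from an assay-specific standard curve.

COPIES_PER_OVUM = 3.7e3  # average A. caninum gene copies per ovum (from the abstract)

def ova_from_gene_copies(gene_copies: float) -> float:
    """Estimate the number of ova in a sample from a total qPCR gene copy count."""
    return gene_copies / COPIES_PER_OVUM

# A sample yielding 3.7e4 gene copies corresponds to roughly 10 ova.
print(ova_from_gene_copies(3.7e4))  # → 10.0
```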
77 FR 40347 - Commission Information Collection Activities (FERC-552); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-09
... Transactions Number of Average burden Number of responses per Total number of hours per Estimated total... facilitate price transparency in markets for the sale or transportation of physical natural gas in interstate... market participants. Estimate of Annual Burden \\3\\: The Commission estimates the total Public Reporting...
77 FR 26487 - Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
... information is estimated to average 350 hours per response. Respondents: Business or other for-profit; not-for...-for-profit institutions; business or other for- profit entities. Estimated Number of Respondents: 1... response. Respondents: Businesses or other for profits. Estimated Number of Respondents: 38. Estimated...
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Hofvind, Solveig; Román, Marta; Sebuødegård, Sofie; Falk, Ragnhild S
2016-12-01
To compute a ratio between the estimated numbers of lives saved from breast cancer death and the number of women diagnosed with a breast cancer that never would have been diagnosed during the woman's lifetime had she not attended screening (epidemiologic over-diagnosis) in the Norwegian Breast Cancer Screening Program. The Norwegian Breast Cancer Screening Program invites women aged 50-69 to biennial mammographic screening. Results from published studies using individual level data from the programme for estimating breast cancer mortality and epidemiologic over-diagnosis comprised the basis for the ratio. The mortality reduction varied from 36.8% to 43% among screened women, while estimates on epidemiologic over-diagnosis ranged from 7% to 19.6%. We computed the average estimates for both values. The benefit-detriment ratio, number of lives saved, and number of women over-diagnosed were computed for different scenarios of reduction in breast cancer mortality and epidemiologic over-diagnosis. For every 10,000 biennially screened women, followed until age 79, we estimated that 53-61 (average 57) women were saved from breast cancer death, and 45-126 (average 82) were over-diagnosed. The benefit-detriment ratio using average estimates was 1:1.4, indicating that the programme saved about one life per 1-2 women with epidemiologic over-diagnosis. The benefit-detriment ratio estimates of the Norwegian Breast Cancer Screening Program, expressed as lives saved from breast cancer death and epidemiologic over-diagnosis, should be interpreted with care due to substantial uncertainties in the estimates, and the differences in the scale of values of the events compared. © The Author(s) 2016.
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
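The detect-and-correct logic of this patent abstract can be sketched as follows. Two details are assumed for illustration only: a rejected estimate is replaced by the running average, and the reset fires after a configurable number of consecutive corrections.

```python
class PitchTracker:
    """Sketch of the detect-and-correct scheme described above.

    Assumed details (not specified in the abstract): a rejected estimate is
    substituted by the running average, and the running average is rebuilt
    from the current estimate after a reset.
    """

    def __init__(self, max_consecutive=3):
        self.values = []                    # nonzero estimates accepted since last reset
        self.consecutive = 0                # consecutive corrections so far
        self.max_consecutive = max_consecutive

    def process(self, estimate):
        if estimate == 0:
            return estimate                 # unvoiced frame: pass through, skip averaging
        if not self.values:
            self.values.append(estimate)
            return estimate
        avg = sum(self.values) / len(self.values)
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self.values.append(estimate)    # plausible estimate: accept it
            self.consecutive = 0
            return estimate
        self.consecutive += 1
        if self.consecutive > self.max_consecutive:
            # Likely a speaker change: discard the old average and restart.
            self.values = [estimate]
            self.consecutive = 0
            return estimate
        return avg                          # correct the outlier toward the average

t = PitchTracker(max_consecutive=2)
print([t.process(v) for v in [100, 110, 200, 200, 200]])
# → [100, 110, 105.0, 105.0, 200]  (accept, accept, correct, correct, reset)
```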
[Preliminary clinical experience of single incision laparoscopic colorectal surgery].
Wu, S D; Han, J Y
2016-06-01
Objective: To discuss the preliminary experience of single incision laparoscopic colorectal surgery. Methods: The clinical data and surgical outcomes of 104 selected patients who underwent single incision laparoscopic colorectal surgery in the 2nd Department of General Surgery, Shengjing Hospital of China Medical University from January 2010 to September 2015 were retrospectively analyzed. There were 62 male and 42 female patients, aged 21 to 87 years with a mean of (61±12) years. Eighty-five patients were diagnosed with malignancy while the remaining 19 had benign disease. All the procedures were performed by the same surgeon using rigid laparoscopic instruments. Surgical and oncological outcomes were analyzed for the 4 types of procedure with more than 5 cases each: low anterior resection, abdominoperineal resection, radical right colon resection and radical sigmoidectomy. Results: Single incision laparoscopic colorectal surgery was attempted in 104 selected patients and completed successfully in 99 cases, a conversion rate of 4.8%. Radical procedures for malignancy of the types with more than 5 cases each were performed in 74 patients. Low anterior resection was performed in 35 cases, with an average surgical time of (191±57) minutes, average estimated blood loss of (117±72) ml and an average of 14.6±1.1 harvested lymph nodes. Abdominoperineal resection was performed in 9 cases, with an average surgical time of (226±54) minutes, average estimated blood loss of (194±95) ml and an average of 14.1±1.5 harvested lymph nodes. Radical right colon resection was performed in 16 cases, with an average surgical time of (222±62) minutes, average estimated blood loss of (142±68) ml and an average of 15.4±2.4 harvested lymph nodes. Radical sigmoidectomy was performed in 14 cases, with an average surgical time of (159±32) minutes, average estimated blood loss of (94±33) ml and an average of 13.9±1.5 harvested lymph nodes.
The overall intraoperative complication rate was 2.7% (2 cases) and the postoperative complication rate was 8.1% (6 cases) among these 74 patients. Conclusion: Single incision laparoscopic colorectal surgery is safe and feasible, with acceptable surgical outcomes and cosmetic benefits, in the hands of a skilled laparoscopic surgeon and in well-selected patients.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-11
... form relates to a budget or estimate of the legal fees, costs, and expenses that outside counsel would... estimates of the average number of respondents, burden, and total annual cost appear below. The estimated... cost by multiplying its estimate of the number of respondents (100) by the burden (2 hours) and...
Estimates of the Average Number of Times Students Say They Cheated
ERIC Educational Resources Information Center
Liebler, Robert
2017-01-01
Data from published studies is used to recover information about the sample mean self-reported number of times cheated by college students. The sample means were estimated by fitting distributions to the reported data. The few estimated sample means thus recovered were roughly 2 or less.
Isozyme variation and linkage in six conifer species
M. Thompson Conkle
1981-01-01
Isozymes of female gametophyte tissue were analyzed for allelic variation in knobcone, lodgepole, loblolly, Jeffrey, and sugar pines and in Douglas-fir. Linkage was studied in the five pines. The average number of alleles and average heterozygosity per enzyme locus were estimated. Knobcone pine ranked lowest among the six species in number of alleles and average...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... and local law. The changes to the form are to allow the applicant to pay the transfer tax by credit or...) An estimate of the total number of respondents and the amount of time estimated for an average respondent to respond: It is estimated that 65,085 respondents will take an average of 1.68 hours to complete...
Estimation of the rain signal in the presence of large surface clutter
NASA Technical Reports Server (NTRS)
Ahamad, Atiq; Moore, Richard K.
1994-01-01
The principal limitation for the use of a spaceborne imaging SAR as a rain radar is the surface-clutter problem. Signals may be estimated in the presence of noise by averaging large numbers of independent samples. This method was applied to obtain an estimate of the rain echo by averaging a set of N_c samples of the clutter in a separate measurement and subtracting the clutter estimate from the combined estimate. The number of samples required for successful estimation (within 10-20%) for off-vertical angles of incidence appears to be prohibitively large. However, by appropriately degrading the resolution in both range and azimuth, the required number of samples can be obtained. For vertical incidence, the number of samples required for successful estimation is reasonable. In estimating the clutter it was assumed that the surface echo is the same outside the rain volume as it is within the rain volume. This may be true for the forest echo, but for convective storms over the ocean the surface echo outside the rain volume is very different from that within. It is suggested that the experiment be performed with vertical incidence over forest to overcome this limitation.
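The clutter-subtraction estimator can be illustrated with a toy simulation. The exponential single-look power model and all numerical values below are invented for illustration; they are not the paper's parameters.

```python
import random

def estimate_signal(combined_samples, clutter_samples):
    """Estimate the rain echo power as (average of combined rain+clutter
    samples) minus (average of clutter-only samples), per the subtraction
    scheme described above."""
    combined_avg = sum(combined_samples) / len(combined_samples)
    clutter_avg = sum(clutter_samples) / len(clutter_samples)
    return combined_avg - clutter_avg

# Toy numbers: true rain power 1.0, clutter power 5.0, exponentially
# distributed single-look powers (a common speckle model).
random.seed(0)
N = 20000
combined = [random.expovariate(1 / 6.0) for _ in range(N)]  # rain + clutter, mean 6
clutter = [random.expovariate(1 / 5.0) for _ in range(N)]   # clutter only, mean 5
print(estimate_signal(combined, clutter))  # ≈ 1.0 for large N
```

The averaging step mirrors the paper's point: the estimator's variance shrinks only as 1/N, so when the clutter dwarfs the rain echo the number of independent samples needed becomes very large.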
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... form relates to a budget or estimate of the legal fees, costs, and expenses that outside counsel would... average number of respondents, burden, and total annual cost appear below. The estimated number of... and the representations and certifications form. The NCUA estimated the total annual cost by...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
...: Nonimmigrant Treaty Trader/Investor Application ACTION: Notice of request for public comment and submission to.../Investor Application OMB Control Number: OMB-1405-0101 Type of Request: Extension of a Currently Approved... Investor Estimated Number of Respondents: 41,752 Estimated Number of Responses: 41,752 Average Time per...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
...) has submitted to the Office of Management and Budget (OMB) a request for review and approval of the... investigators. The annual reporting burden is as follows: Total Estimated Number of Respondents: 94,326; Estimated Number of Responses per Respondent: 1; Average Burden Hours Per Response: 21.75; Estimated Total...
78 FR 46980 - Prescription Drug User Fee Rates for Fiscal Year 2014
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... 116.333 The FY 2014 application fee is estimated by dividing the average number of full applications... dividing the adjusted total fee revenue to be derived from establishments ($252,342,667) by the estimated... use this number for its FY 2014 estimate. The FY 2014 product fee rate is determined by dividing the...
ERIC Educational Resources Information Center
Matlock, Ki Lynn; Turner, Ronna
2016-01-01
When constructing multiple test forms, the number of items and the total test difficulty are often equivalent. Not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... Securities Dealer. Agency form number: FR MSD-4; FR MSD-5. OMB control number: 7100-0100; 7100-0101... are municipal securities dealers. Estimated annual reporting hours: FR MSD-4, 20 hours; FR MSD-5, 13 hours. Estimated average hours per response: FR MSD-4, 1 hour; FR MSD-5, 0.25 hours. Number of...
Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick
2015-08-01
Kalman filter approaches are widely applied to derive time varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF) designed for the estimation of single trial data, can be implemented by trial-averaging the data or by averaging single trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence was calculated for these estimated sources and compared with the ground-truth. The results showed an overall superior performance of GLKF except for low levels of SNR and number of trials.
NASA Astrophysics Data System (ADS)
Park, Ji Young; Raynor, Peter C.; Maynard, Andrew D.; Eberly, Lynn E.; Ramachandran, Gurumurthy
Recent research has suggested that the adverse health effects caused by nanoparticles are associated with their surface area (SA) concentrations. In this study, SA was estimated in two ways using number and mass concentrations and compared with SA measured using a diffusion charger (DC), denoted SA_meas. Aerosol measurements were made twice: once starting in October 2002 and again starting in December 2002, in Mysore, India, in residences that used kerosene or liquefied petroleum gas (LPG) for cooking. Mass, number, and SA concentrations and size distributions by number were measured in each residence. The first estimation method (SA_PSD) used the size distribution by number to estimate SA. The second method (SA_INV) used a simple inversion scheme that incorporated number and mass concentrations while assuming a lognormal size distribution with a known geometric standard deviation. SA_PSD was, on average, 2.4 times greater (range = 1.6-3.4) than SA_meas, while SA_INV was, on average, 6.0 times greater (range = 4.6-7.7) than SA_meas. The logarithms of SA_PSD and SA_INV were found to be statistically significant predictors of the logarithm of SA_meas. The study showed that particle number and mass concentration measurements can be used to estimate SA with a correction factor that ranges between 2 and 6.
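An inversion of this kind can be sketched with the standard Hatch-Choate moment relations for a lognormal aerosol. This is a plausible reconstruction of the SA_INV scheme under stated assumptions (spherical particles, known density and geometric standard deviation), not the authors' exact algorithm.

```python
import math

def estimate_surface_area(number_conc, mass_conc, density, gsd):
    """Invert a number concentration (particles/cm^3 of air) and mass
    concentration (g/cm^3 of air) into a surface area concentration
    (cm^2/cm^3 of air), assuming spherical particles and a lognormal size
    distribution with known geometric standard deviation gsd.

    Uses the lognormal moment relations <d^p> = CMD^p * exp(p^2 * ln(gsd)^2 / 2).
    """
    ln2 = math.log(gsd) ** 2
    # Mass: M = rho * (pi/6) * N * <d^3>  =>  solve for the count median diameter CMD.
    cmd = (6 * mass_conc / (math.pi * density * number_conc * math.exp(4.5 * ln2))) ** (1 / 3)
    # Surface area: SA = pi * N * <d^2>.
    return math.pi * number_conc * cmd ** 2 * math.exp(2 * ln2)
```

A mis-specified gsd or density propagates directly into the result, which is one plausible source of the 2x to 6x correction factors the study reports.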
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects
2017-02-22
AFRL-AFOSR-UK-TR-2017-0023 Linear and Nonlinear Time -Frequency Analysis for Parameter Estimation of Resident Space Objects Marco Martorella...estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the...Nonlinear Time -Frequency Analysis for Parameter Estimation of Resident Space Objects 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA9550-14-1-0183 5c. PROGRAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, M; Schmitt, J; Sarfaraz, M
2015-06-15
Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in an average PTV margin between 0.6-1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups tends to converge to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
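For reference, the Van Herk margin recipe this abstract invokes is commonly written M = 2.5Σ + 0.7σ, with Σ the standard deviation of systematic setup errors across the patient group and σ the rms random error. The sketch below assumes that common form; the group statistics in the example are invented, not the study's values.

```python
def van_herk_margin(systematic_sd_cm, random_sd_cm):
    """PTV margin (cm) from the widely used Van Herk recipe
    M = 2.5 * Sigma + 0.7 * sigma, where Sigma is the SD of systematic
    setup errors across the patient group and sigma is the rms random error."""
    return 2.5 * systematic_sd_cm + 0.7 * random_sd_cm

# Hypothetical group statistics (not from the abstract):
print(round(van_herk_margin(0.35, 0.40), 3))  # → 1.155 cm
```

Because Σ is itself estimated from a finite patient sample, its sampling error feeds directly into the margin, which is why the study's margin spread narrows as the group size grows.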
Radiation exposure from consumer products and miscellaneous sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-01-01
This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure to the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally-occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be grouped as those that involve many people where the dose equivalent is relatively large, those that involve many people but the dose equivalent is relatively small, and those where the dose equivalent is relatively large but the number of people involved is small.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression toward the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
.... families. Estimated annual reporting hours: 8,938 hours. Estimated average hours per response: Pretest, 75 minutes; and Main survey, 75 minutes. Number of respondents: Pretest, 150; and Main survey, 7,000. General...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... Municipal Securities Dealer. Agency form number: FR MSD-4 and FR MSD-5. OMB control number: 7100-0100 and... activities as municipal securities dealers. Estimated annual reporting hours: FR MSD-4, 48 hours; and FR MSD-5, 36 hours. Estimated average hours per response: FR MSD-4, 1 hour; and FR MSD- 5, 0.25 hours...
77 FR 40889 - Proposed Data Collections Submitted for Public Comment and Recommendations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-11
... evaluation study will be conducted using a group-randomized controlled trial multi-time series design. Four... their time. Estimated Annualized Burden Hours Number of Average burden Respondents Number of responses...
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method. 
Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.
78 FR 25455 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... Assessment Survey, previously approved under information collection number OMB CN: 0970-0379, expires on 7/31... American nonprofit organizations, and Tribal Colleges and Universities. Annual Burden Estimates Number of Average burden Instrument Number of responses per hours per Total burden respondents respondent response...
O'Hara, Mackie
2017-05-01
Recently, studies have interpreted regular spacing and the average number of perikymata between dental enamel defects in orangutans to reflect seasonal episodes of physiological stress. To estimate the amount of time between developmental defects (enamel hypoplasia), studies have relied on perikymata counts. Unfortunately, perikymata are frequently not continuously visible between defects, significantly reducing data sets. A method is presented here for estimating the number of perikymata between defects using standard perikymata profiles (SPPs) that allow the number of perikymata between all pairs of defects across a tooth to be analyzed. The SPP method should allow the entire complement of defects to be analyzed within the context of an individual's crown formation time. The average number of perikymata was established per decile and charted to create male and female Pongo pygmaeus SPPs. The position of the beginning of each defect was recorded for lower canines from males (n = 6) and females (n = 17). The number of perikymata between defects estimated by the SPP was compared to the actual count (where perikymata were continuously visible). The number of perikymata between defects estimated by the SPPs was accurate to within three perikymata and highly correlated with the actual counts, significantly increasing the number of analyzable defect pairs. SPPs allow all defect pairs to be included in studies of defect timing, not just those with continuously visible perikymata. Establishing an individual's entire complement of dental defects makes it possible to calculate the regularity (and potential seasonality) of defects. © 2017 Wiley Periodicals, Inc.
75 FR 5320 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-02
..., identified by FR 4004, FR MSD-4, FR MSD-5, FR G-FIN, or FR G-FINW, by any of the following methods: Agency... form number: FR MSD-4 and FR MSD-5. OMB control number: 7100-0100 and 7100-0101. Frequency: On occasion... dealers. Estimated annual reporting hours: FR MSD-4, 48 hours; and FR MSD-5, 36 hours. Estimated average...
Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I
2008-01-01
Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation at doses from 0 to 45 Gy. Internuclear bridges, nuclear protrusions, and dumbbell-shaped nuclei were counted as morphological anomalies. Empirical distributions of the number of anomalies per 100 nuclei were used. An adequate model, the reentrant binomial distribution, was found: a sum of binomial random variables whose number of summands is itself binomially distributed follows this distribution. The averages of these random variables were named, accordingly, the internal and external average reentrant components, and their maximum likelihood estimates were derived. The statistical properties of these estimates were investigated by means of statistical modeling. It was found that, although the radiation dose correlates equally significantly with the average number of nuclear anomalies, in cell populations two to three cell cycles after irradiation in vivo the dose correlates significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlates with the external one.
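The reentrant binomial model described above (a sum of binomial variables whose number of summands is itself binomial) is straightforward to simulate. A minimal sketch with illustrative parameters, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def reentrant_binomial(n_outer, p_outer, n_inner, p_inner, size=10_000):
    """Sample a sum of Binomial(n_inner, p_inner) variables whose number
    of summands is itself Binomial(n_outer, p_outer)."""
    k = rng.binomial(n_outer, p_outer, size=size)  # external component
    # A sum of k i.i.d. Binomial(n_inner, p_inner) variables is
    # distributed as Binomial(k * n_inner, p_inner).
    return rng.binomial(k * n_inner, p_inner)

samples = reentrant_binomial(n_outer=100, p_outer=0.1, n_inner=5, p_inner=0.3)
# Mean of the compound distribution: (n_outer*p_outer) * (n_inner*p_inner) = 15
print(samples.mean())
```

The external average (n_outer·p_outer) and internal average (n_inner·p_inner) correspond to the external and internal average reentrant components named in the abstract.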
Number of Radiation Oncologists in Korea, Adequate or Surplus?
2006-01-01
Purpose The purpose of this research is to discern and address the issues related to the radiation oncology manpower supply and its distribution. Materials and Methods The statistical data of the Annual Report of the Korea Central Cancer Registry (KCCR) from 1997 to 2002 and the Annual Report of the Korean Society of Radiation Oncology (KOSTRO) from 1997 to 2004 were used to predict the status of the human resources in 2015. The estimated demand and supply were calculated with the Microsoft Excel® program (Microsoft, Redmond, WA). Results The demand for radiation oncologists is estimated to be 161 in 2015, and about 4.9 radiation oncologists will be in demand annually. In contrast, an average of 15 new radiation oncologists will be supplied annually, so that the accumulated surplus of radiation oncologists until 2015 is estimated to be 74.1. The main reason for the surplus comes from the discrepancy between the increased number of radiation therapy patients and the need for radiation oncologists. When there is an increase of 1,000 radiation therapy patients, the demand for radiation oncologists increases only by 2.4. This phenomenon is especially evident in the top 10 hospitals, where the average number of radiation therapy patients per radiation oncologist is 341, which is 58% higher than the average number (215) at the other 46 hospitals. Conclusion To prevent a surplus and to maintain the quality of management, the number of radiation therapy patients per radiation oncologist should be limited. Furthermore, coordinated control of the number of residency positions should be seriously considered. PMID:19771261
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat; Abdullah, Siti Rohana Goh
2014-07-01
Many averaging methods are available to aggregate a set of numbers into a single number. However, these methods do not consider the interdependencies between the criteria underlying the numbers. This paper highlights the Choquet integral method as an alternative aggregation method in which interdependency estimates between the criteria are incorporated into the aggregation process. The interdependency values can be estimated using the lambda fuzzy measure method. By accounting for the interdependencies, or interactions, between the criteria, the resulting aggregated values are more meaningful than those obtained by ordinary averaging methods. The application of the Choquet integral is illustrated in a case study of determining the overall academic achievement of year-six pupils in a selected primary school in a northern state of Malaysia.
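A sketch of the lambda-fuzzy-measure construction and the Choquet integral described above; it assumes criterion scores and densities in (0, 1), and the example values are illustrative, not the case study's:

```python
import numpy as np
from scipy.optimize import brentq

def lambda_measure(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda (lam > -1)."""
    g = np.asarray(densities, float)
    if abs(g.sum() - 1.0) < 1e-9:
        return 0.0  # densities already additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:            # sub-additive case: root in (-1, 0)
        return brentq(f, -1.0 + 1e-9, -1e-9)
    return brentq(f, 1e-9, 1e9)  # super-additive case: root in (0, inf)

def subset_measure(g, lam, idx):
    """Measure of a coalition, via g(A u {i}) = g(A) + g_i + lam*g(A)*g_i."""
    m = 0.0
    for i in idx:
        m = m + g[i] + lam * m * g[i]
    return m

def choquet(scores, densities):
    """Choquet integral of non-negative criterion scores with respect to
    the lambda-fuzzy measure defined by the criterion densities."""
    g = np.asarray(densities, float)
    lam = lambda_measure(g)
    order = np.argsort(scores)               # ascending scores
    s = np.asarray(scores, float)[order]
    total, prev = 0.0, 0.0
    for k in range(len(s)):
        coalition = order[k:]                # criteria scoring >= s[k]
        total += (s[k] - prev) * subset_measure(g, lam, coalition)
        prev = s[k]
    return total

# With additive densities the Choquet integral reduces to a weighted mean:
print(choquet([0.2, 0.8], [0.5, 0.5]))
```

When the densities do not sum to 1, the solved lambda introduces the interaction effect: for example, `choquet([0.2, 0.8], [0.3, 0.3])` weights the coalition of both criteria more than the sum of their individual densities.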
NASA Astrophysics Data System (ADS)
Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue
2015-09-01
Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.
75 FR 16896 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... Submission: On occasion; recordkeeping. Average time CFR section Respondent universe Total annual per...: Businesses. Frequency of Submission: On occasion. Respondent Universe: 728 railroads. Total Estimated... standard. Form Number(s): N/A. Affected Public: Businesses. Respondent Universe: 728 railroads. Frequency...
Inheritance of height and maturity in crosses between pearl millet landraces and inbred Tift 85DB.
Wilson, J P; Burton, G W; Bondari, K
1990-11-01
Over 300 landraces of pearl millet were collected in Burkina Faso and grown at the Coastal Plain Experiment Station in Tifton, GA. At Tifton, these landraces are predominantly tall and late-maturing. The photoperiod requirements of these landraces hinder evaluation of their performance in the field and their use in breeding programs. A conversion program has been initiated to transfer genes for dwarf stature and early flowering into the tall, late-maturing landraces. The inbred Tift 85DB is being used as a donor of genes for the dwarf and early characteristics, and was crossed to nine randomly selected landraces from Burkina Faso. The parents, F1, F2, and backcrosses to each parent were grown in the field and evaluated for plant height at anthesis and time in days from planting to anthesis. In general, plant height of F1s was taller than the tallest parent, and in all crosses the maturity of F1s was intermediate between the parents. Estimated numbers of loci conferring height varied among crosses, ranging from 0 to 9.6, and averaged 1.6. Estimated numbers of loci conferring maturity ranged from 0 to 12.8 and averaged 3.4. Broad-sense heritability estimates for height and maturity averaged 60.2 and 65.7%, respectively. Corresponding narrow-sense estimates averaged 23.8 and 48.2%. Joint scaling tests revealed that additive-genetic effects were highly significant for both traits, but dominance and epistatic-genetic effects contributed to the inheritance of each trait in some crosses. The low gene numbers, high heritability estimates, and preponderance of additive-genetic effects suggest that selection for these traits should be effective.
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain the predictions of alternative conceptual models and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those from the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
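As one concrete instance of the estimators compared above, the arithmetic mean estimator (AME) averages the likelihood over draws from the prior. A sketch on a toy conjugate-normal model (not the paper's groundwater model), chosen so the true marginal likelihood is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model (not the paper's groundwater model):
#   y ~ N(theta, 1),  theta ~ N(0, 1)
# so the true marginal likelihood of a single observation is y ~ N(0, 2).
y = 1.3
true_ml = np.exp(-y**2 / 4.0) / np.sqrt(4.0 * np.pi)

def likelihood(theta):
    return np.exp(-0.5 * (y - theta) ** 2) / np.sqrt(2.0 * np.pi)

# Arithmetic mean estimator (AME): average the likelihood over prior draws.
theta_prior = rng.normal(0.0, 1.0, 200_000)
ame = likelihood(theta_prior).mean()

print(ame, true_ml)  # the two should agree closely
```

The harmonic mean estimator instead averages reciprocal likelihoods over posterior draws; it is known to be far less stable, which is consistent with the abstract's preference for TIE.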
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to...regime for the optimization algorithm . 1 Introduction Sample average approximation (SAA) is a frequently used approach to solving stochastic programs...appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
Estimating the waiting time of multi-priority emergency patients with downstream blocking.
Lin, Di; Patrick, Jonathan; Labeau, Fabrice
2014-03-01
To characterize the coupling effect between patient flow to access the emergency department (ED) and that to access the inpatient unit (IU), we develop a model with two connected queues: one upstream queue for the patient flow to access the ED and one downstream queue for the patient flow to access the IU. Building on this patient flow model, we employ queueing theory to estimate the average waiting time across patients. Using priority specific wait time targets, we further estimate the necessary number of ED and IU resources. Finally, we investigate how an alternative way of accessing ED (Fast Track) impacts the average waiting time of patients as well as the necessary number of ED/IU resources. This model as well as the analysis on patient flow can help the designer or manager of a hospital make decisions on the allocation of ED/IU resources in a hospital.
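The upstream ED queue in such a model is often approximated as an M/M/c system, for which the mean wait follows from the Erlang C formula. A sketch under that simplifying assumption (the paper's actual two-queue model with downstream blocking and priorities is more involved), with illustrative rates:

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arrival must wait in an M/M/c queue (Erlang C).
    c servers, arrival rate lam, per-server service rate mu; needs lam < c*mu."""
    a = lam / mu                         # offered load
    rho = a / c                          # server utilization
    s = sum(a**k / factorial(k) for k in range(c))
    top = a**c / (factorial(c) * (1 - rho))
    return top / (s + top)

def avg_wait(c, lam, mu):
    """Mean time in queue, Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# e.g. 5 ED treatment spaces, 4 arrivals/hour, 1 patient/hour service rate
print(avg_wait(5, 4.0, 1.0))  # mean wait in hours
```

Resource questions like those in the abstract reduce to finding the smallest c for which `avg_wait` meets a priority-specific wait-time target.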
Estimation of Population Number via Light Activities on Night-Time Satellite Images
NASA Astrophysics Data System (ADS)
Turan, M. K.; Yücer, E.; Sehirli, E.; Karaş, İ. R.
2017-11-01
Estimating and accurately assessing population is becoming harder day by day because of rapid world population growth. Estimating settlement tendencies in cities and countries, socio-cultural development, and population numbers is quite difficult, as is the selection and analysis of parameters such as time, work-force, and cost. In this study, population numbers are estimated by evaluating light activities in İstanbul using night-time satellite images of Turkey. By evaluating light activities between 2000 and 2010, the average population per pixel is obtained and then used to estimate population numbers in 2011, 2012, and 2013. Mean errors are 4.14% for 2011, 3.74% for 2012, and 3.04% for 2013. With the developed thresholding method, the mean error in estimating İstanbul's population for the next three years is 3.64%.
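The core calibration step, averaging population per lit pixel over the calibration years and extrapolating, can be sketched as follows. The pixel counts and populations below are invented for illustration, not the study's data:

```python
# Hypothetical lit-pixel counts and populations for the calibration years
# (illustrative numbers only, not the study's actual data).
years      = [2000, 2005, 2010]
population = [8_800_000, 11_100_000, 13_100_000]
lit_pixels = [4_400, 5_400, 6_300]   # pixels above the light threshold

# Average population represented by one lit pixel over the calibration period
pop_per_pixel = sum(p / n for p, n in zip(population, lit_pixels)) / len(years)

def estimate_population(n_lit_pixels):
    """Project a population number from a later year's lit-pixel count."""
    return pop_per_pixel * n_lit_pixels

print(round(estimate_population(6_600)))
```

The mean errors reported in the abstract would then come from comparing such projections for 2011-2013 against census figures.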
Boris Zeide
2004-01-01
Estimation of stand density is based on a relationship between number of trees and their average diameter in fully stocked stands. Popular measures of density (Reineke's stand density index and basal area) assume that number of trees decreases as a power function of diameter. Actually, number of trees drops faster than predicted by the power function because the number...
Sun, Q H; Wang, W T; Wang, Y W; Li, T T
2018-04-06
Objective: To estimate future excess mortality attributable to cold spells in Guangzhou, China. Methods: We collected mortality data and meteorological data for Guangzhou from 2009-2013 and calculated the association between cold spell days and non-accidental mortality with a GLM model. We then projected future daily average temperatures (2020-2039 (2020s), 2050-2069 (2050s), 2080-2099 (2080s)) with 5 GCMs and 2 RCPs (RCP4.5 and RCP8.5) to identify cold spell days. The baseline period was the 1980s (1980-1999). Finally, we calculated the yearly excess deaths related to cold spells for the 1980s, 2020s, 2050s, and 2080s from the average daily death count on non-cold spell days, the exposure-response relationship, and the yearly number of cold spell days. Results: The average daily non-accidental mortality in Guangzhou from 2009 to 2013 was 96, and the average daily mean temperature was 22.0 °C. Cold spell days were associated with a 3.3% (95% CI: 0.4%-6.2%) increase in non-accidental mortality. In the 1980s, yearly cold-spell-related deaths were 34 (95% CI: 4-64). Under the RCP4.5 scenario, the number will increase by 0-10 in the 2020s, by 1-9 in the 2050s, and by 1-9 in the 2080s. Under the RCP8.5 scenario, the number will increase by 0-9 in the 2020s, by 1-6 in the 2050s, and by 0-11 in the 2080s. Conclusion: Cold-spell-related non-accidental deaths in Guangzhou will increase in the future under climate change.
76 FR 20990 - Submission for OMB review; comment request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... and unearned income information reported to IRS by employers and financial institutions. The IRS 1099 information is used to locate noncustodial parents and to verify income and employment. Respondents: Annual Burden Estimates Number of Average Instrument Number of responses per burden hours Total burden...
A data-driven model for estimating industry average numbers of hospital security staff.
Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M
2015-01-01
In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue-sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images for estimating the reference SPN. To deal with these problems, a novel reference estimator is proposed in this work. Experimental results show that the proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
40 CFR 60.493 - Performance test and compliance provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... equivalent or alternative method. The owner or operator shall determine from company records the volume of... estimate the volume of coating used at each facility by using the average dry weight of coating, number of... acceptable to the Administrator. (i) Calculate the volume-weighted average of the total mass of VOC per...
Estimating watershed level nonagricultural pesticide use from golf courses using geospatial methods
Fox, G.A.; Thelin, G.P.; Sabbagh, G.J.; Fuchs, J.W.; Kelly, I.D.
2008-01-01
Limited information exists on pesticide use for nonagricultural purposes, making it difficult to estimate pesticide loadings from nonagricultural sources to surface water and to conduct environmental risk assessments. A method was developed to estimate the amount of pesticide use on recreational turf grasses, specifically golf course turf grasses, for watersheds located throughout the conterminous United States (U.S.). The approach estimates pesticide use: (1) based on the area of recreational turf grasses (used as a surrogate for turf associated with golf courses) within the watershed, which was derived from maps of land cover, and (2) from data on the location and average treatable area of golf courses. The area of golf course turf grasses determined from these two methods was used to calculate the percentage of each watershed planted in golf course turf grass (percent crop area, or PCA). Turf-grass PCAs derived from the two methods were used with recommended application rates provided on pesticide labels to estimate total pesticide use on recreational turf within 1,606 watersheds associated with surface-water sources of drinking water. These pesticide use estimates made from label rates and PCAs were compared to use estimates from industry sales data on the amount of each pesticide sold for use within the watershed. The PCAs derived from the land-cover data had an average value of 0.4% of a watershed with minimum of 0.01% and a maximum of 9.8%, whereas the PCA values that are based on the number of golf courses in a watershed had an average of 0.3% of a watershed with a minimum of <0.01% and a maximum of 14.2%. Both the land-cover method and the number of golf courses method produced similar PCA distributions, suggesting that either technique may be used to provide a PCA estimate for recreational turf. The average and maximum PCAs generally correlated to watershed size, with the highest PCAs estimated for small watersheds. 
Using watershed-specific PCAs combined with label rates resulted in greater than two orders of magnitude over-estimation of pesticide use compared to estimates from sales data. © 2008 American Water Resources Association.
König, S; Tsehay, F; Sitzenstock, F; von Borstel, U U; Schmutz, M; Preisinger, R; Simianer, H
2010-04-01
Due to consistent increases in inbreeding of, on average, 0.95% per generation in layer populations, selection tools should consider both genetic gain and genetic relationships in the long term. The optimum genetic contribution theory using official estimated breeding values for egg production was applied to 3 different lines of a layer breeding program to find the optimal allocations of hens and sires. Constraints in different scenarios encompassed restrictions related to additive genetic relationships, the increase of inbreeding, the number of selected sires and hens, and the number of selected offspring per mating. All these constraints enabled higher genetic gain, up to 10.9%, at the same level of additive genetic relationships, or lower relationships at the same gain, when compared with conventional selection schemes ignoring relationships. Increases of inbreeding and genetic gain were associated with the number of selected sires. For the lowest level of the allowed average relationship at 10%, the optimal number of sires was 70 and the estimated breeding value for egg production of the selected group was 127.9. At the highest relationship constraint (16%), the optimal number of sires decreased to 15, and the average genetic value increased to 139.7. Contributions from selected sires and hens were used to develop specific mating plans to minimize inbreeding in the following generation by applying a simulated annealing algorithm. The additional reduction of average additive genetic relationships for matings was up to 44.9%. An innovative deterministic approach to estimate kinship coefficients between and within defined selection groups based on gene flow theory was applied to compare increases of inbreeding from random matings with layer populations undergoing selection. Large differences in rates of inbreeding were found, and they underline the necessity to establish selection tools controlling long-term relationships. 
Furthermore, it is suggested that optimum genetic contribution theory also be used for conservation schemes or, for example, for the experimental line in our study.
Using state-issued identification cards for obesity tracking.
Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P
2015-01-01
Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
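The underlying calculation, converting the weight and height recorded on an ID card to body mass index, is simple; the sketch below assumes weights in pounds and heights in inches, as typically recorded on US licenses:

```python
def bmi_from_license(weight_lb, height_in):
    """Body mass index (kg/m^2) from the pounds and inches on an ID card."""
    kg = weight_lb * 0.453592   # pounds to kilograms
    m = height_in * 0.0254      # inches to meters
    return kg / m**2

# e.g. 180 lb, 5'10" (70 inches)
print(round(bmi_from_license(180, 70), 1))
```

Averaging this value over all geocoded records in a census tract or block group gives the sub-county estimates described in the abstract.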
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-02
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. 
The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method. Published by Elsevier B.V.
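The MPN technique itself reduces to a one-parameter maximum likelihood problem: a tube inoculated with volume v is positive with probability 1 - exp(-c·v) when the concentration is c. A sketch with a hypothetical 3-tube, 3-dilution outcome (not the paper's Salmonella or Campylobacter data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One hypothetical MPN experiment: 3 dilutions, 3 tubes each.
volumes   = np.array([0.1, 0.01, 0.001])   # sample volume per tube (mL)
tubes     = np.array([3, 3, 3])
positives = np.array([3, 1, 0])            # positive tubes per dilution

def neg_log_lik(log_c):
    """Negative binomial log likelihood; P(tube positive) = 1 - exp(-c*v)."""
    c = np.exp(log_c)
    p_pos = np.clip(1.0 - np.exp(-c * volumes), 1e-12, 1 - 1e-12)
    return -np.sum(positives * np.log(p_pos)
                   + (tubes - positives) * np.log(1 - p_pos))

res = minimize_scalar(neg_log_lik, bounds=(np.log(1e-2), np.log(1e6)),
                      method="bounded")
mpn = np.exp(res.x)   # most probable number of organisms per mL
print(mpn)
```

For this (3, 1, 0) pattern the MLE lands near the familiar table value of about 43 organisms per mL, illustrating why the resulting concentrations are real-valued estimates rather than direct measurements.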
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
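The beta-binomial probability of being active on all 7 assessed days can be computed directly from the pmf. The prior parameters and observed day counts below are illustrative, not the article's maximum likelihood estimates:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b)."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# Illustrative (hypothetical) Beta(a, b) prior for p, updated with the
# observed active/inactive days, then evaluated at 7 active days out of 7.
a, b = 2.0, 1.0
active, inactive = 5, 1
p_7_of_7 = betabinom_pmf(7, 7, a + active, b + inactive)
print(p_7_of_7)
```

Unlike the continuous Beta density, this pmf assigns probability to each whole number of active days, matching the discrete accelerometer data.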
76 FR 62072 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... Evaluation--Design Survey. OMB No.: New collection. Description: The Family and Youth Services Bureau. (HHS... Organizations. Annual Burden Estimates Design survey Number of Average burden Instrument Annual number responses... Total for Design 92 Survey Additional Information: Copies of the proposed collection may be obtained by...
Examining Play Counts and Measurements of Injury Incidence in Youth Football.
Kerr, Zachary Y; Yeargin, Susan W; Djoko, Aristarque; Dalton, Sara L; Baker, Melissa M; Dompier, Thomas P
2017-10-01
Whereas researchers have provided estimates for the number of head impacts sustained within a youth football season, less is known about the number of plays across which such impact exposure occurs. To estimate the number of plays in which youth football players participated during the 2013 season and to estimate injury incidence through play-based injury rates. Descriptive epidemiology study. Youth football. Youth football players (N = 2098; age range, 5-15 years) from 105 teams in 12 recreational leagues across 6 states. We calculated the average number of athlete-plays per season and per game using independent-samples t tests to compare age groups (5-10 years old versus 11-15 years old) and squad sizes (<20 versus ≥20 players); game injury rates per 1000 athlete-exposures (AEs) and per 10 000 athlete-plays; and injury rate ratios (IRRs) with 95% confidence intervals (CIs) to compare age groups. On average, youth football players participated in 333.9 ± 178.5 plays per season and 43.9 ± 24.0 plays per game. Age groups (5- to 10-year-olds versus 11- to 15-year-olds) did not differ in the average number of plays per season (335.8 versus 332.3, respectively; t(2086.4) = 0.45, P = .65) or per game (44.1 versus 43.7, respectively; t(2092.3) = 0.38, P = .71). However, players from smaller teams participated in more plays per season (373.7 versus 308.0; t(1611.4) = 8.15, P < .001) and per game (47.7 versus 41.4; t(1523.5) = 5.67, P < .001). Older players had a greater game injury rate than younger players when injury rates were calculated per 1000 AEs (23.03 versus 17.86/1000 AEs; IRR = 1.29; 95% CI = 1.04, 1.60) or per 10 000 athlete-plays (5.30 versus 4.18/10 000 athlete-plays; IRR = 1.27; 95% CI = 1.02, 1.57). A larger squad size was associated with a lower average number of plays per season and per game. Increasing youth football squad sizes may help reduce head-impact exposure for individual players. 
The AE-based injury rates yielded effect estimates similar to those of play-based injury rates.
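Rate-ratio comparisons like the one above are typically computed with a Wald interval on the log scale. A sketch with hypothetical event and exposure counts chosen to roughly match the reported rates (they are not the study's actual counts):

```python
from math import log, exp, sqrt

def irr_ci(events_a, exposure_a, events_b, exposure_b, z=1.96):
    """Injury rate ratio with a Wald 95% CI on the log scale:
    SE(log IRR) ~= sqrt(1/events_a + 1/events_b)."""
    irr = (events_a / exposure_a) / (events_b / exposure_b)
    se = sqrt(1 / events_a + 1 / events_b)
    lo, hi = exp(log(irr) - z * se), exp(log(irr) + z * se)
    return irr, lo, hi

# Hypothetical counts giving rates near 23.0 vs 17.9 per 1000 AEs
irr, lo, hi = irr_ci(events_a=230, exposure_a=9986,
                     events_b=179, exposure_b=10022)
print(irr, lo, hi)
```

An IRR whose interval excludes 1 (as in the abstract's 1.29; 95% CI 1.04, 1.60) indicates a statistically higher rate in the first group.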
Testing the Wisconsin Phosphorus Index with year-round, field-scale runoff monitoring.
Good, Laura W; Vadas, Peter; Panuska, John C; Bonilla, Carlos A; Jokela, William E
2012-01-01
The Wisconsin Phosphorus Index (WPI) is one of several P indices in the United States that use equations to describe actual P loss processes. Although for nutrient management planning the WPI is reported as a dimensionless whole number, it is calculated as average annual dissolved P (DP) and particulate P (PP) mass delivered per unit area. The WPI calculations use soil P concentration, applied manure and fertilizer P, and estimates of average annual erosion and average annual runoff. We compared WPI estimated P losses to annual P loads measured in surface runoff from 86 field-years on crop fields and pastures. As the erosion and runoff generated by the weather in the monitoring years varied substantially from the average annual estimates used in the WPI, the WPI and measured loads were not well correlated. However, when measured runoff and erosion were used in the WPI field loss calculations, the WPI accurately estimated annual total P loads with a Nash-Sutcliffe Model Efficiency (NSE) of 0.87. The DP loss estimates were not as close to measured values (NSE = 0.40) as the PP loss estimates (NSE = 0.89). Some errors in estimating DP losses may be unavoidable due to uncertainties in estimating on-farm manure P application rates. The WPI is sensitive to field management that affects its erosion and runoff estimates. Provided that the WPI methods for estimating average annual erosion and runoff are accurately reflecting the effects of management, the WPI is an accurate field-level assessment tool for managing runoff P losses. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
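The Nash-Sutcliffe model efficiency used above to evaluate the WPI is a short computation. A sketch with illustrative annual P-load values, not the study's monitoring data:

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the
    observed mean. NSE = 1 is a perfect fit; NSE <= 0 means the model
    predicts no better than the observed mean."""
    obs = np.asarray(observed, float)
    pred = np.asarray(predicted, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative annual P loads (kg/ha), not the study's field data
observed  = [0.8, 1.5, 0.3, 2.1, 0.9]
predicted = [0.7, 1.6, 0.4, 1.9, 1.1]
print(round(nash_sutcliffe(observed, predicted), 3))
```

Values such as the study's NSE of 0.87 for total P therefore mean the WPI explains most of the year-to-year variance in measured loads once measured runoff and erosion are supplied.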
Crampton, Lisa H.; Brinck, Kevin W.; Pias, Kyle E.; Heindl, Barbara A. P.; Savre, Thomas; Diegmann, Julia S.; Paxton, Eben H.
2017-01-01
Accurate estimates of the distribution and abundance of endangered species are crucial to determine their status and plan recovery options, but such estimates are often difficult to obtain for species with low detection probabilities or that occur in inaccessible habitats. The Puaiohi (Myadestes palmeri) is a cryptic species endemic to Kauaʻi, Hawai‘i, and restricted to high elevation ravines that are largely inaccessible. To improve current population estimates, we developed an approach to model distribution and abundance of Puaiohi across their range by linking occupancy surveys to habitat characteristics, territory density, and landscape attributes. Occupancy per station ranged from 0.17 to 0.82, and was best predicted by the number and vertical extent of cliffs, cliff slope, stream width, and elevation. To link occupancy estimates with abundance, we used territory mapping data to estimate the average number of territories per survey station (0.44 and 0.66 territories per station in low and high occupancy streams, respectively), and the average number of individuals per territory (1.9). We then modeled Puaiohi occupancy as a function of two remote-sensed measures of habitat (stream sinuosity and elevation) to predict occupancy across its entire range. We combined predicted occupancy with estimates of birds per station to produce a global population estimate of 494 (95% CI 414–580) individuals. Our approach is a model for using multiple independent sources of information to accurately track population trends, and we discuss future directions for modeling abundance of this, and other, rare species.
A variant of sparse partial least squares for variable selection and data exploration.
Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina
2014-01-01
When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits an SPLS model for all tuning parameter values across a set grid. The percentage of time a given predictor is chosen is noted, as well as the average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
The heterogeneity statistic I(2) can be biased in small meta-analyses.
von Hippel, Paul T
2015-04-14
Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I(2). We calculate the bias of I(2), focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I(2). I(2) has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I(2) will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I(2) can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I(2) estimate is 21 percent. The point estimate I(2) should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I(2).
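The I(2) statistic discussed above is conventionally computed from Cochran's Q and the number of studies; a minimal sketch of that calculation (the Q and k values in the test are illustrative, not from the paper):

```python
def i_squared(q, k):
    """I^2 heterogeneity statistic (in percent) from Cochran's Q and k studies.

    I^2 = max(0, (Q - df) / Q) * 100, with df = k - 1 degrees of freedom.
    """
    if q <= 0.0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q) * 100.0
```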
NASA Technical Reports Server (NTRS)
Smith, J. H.
1980-01-01
Average hourly and daily total insolation estimates for 235 United States locations are presented. Values are presented for a selected number of array tilt angles on a monthly basis. All units are in kilowatt hours per square meter.
Capture-recapture to estimate the number of street children in a city in Brazil
Gurgel, R; da Fonseca, J D C; Neyra-Castaneda, D; Gill, G; Cuevas, L
2004-01-01
Background: Street children are an increasing problem in Latin America. It is however difficult to estimate the number of children in the street as this is a highly mobile population. Aims: To estimate the number of street children in Aracaju, northeast Brazil, and describe the characteristics of this population. Methods: Three independent lists of street children were constructed from a non-governmental organisation and cross-sectional surveys. The number of street children was estimated using the capture-recapture method. The characteristics of the children were recorded during the surveys. Results: The estimated number of street children was 1456. The estimated number of street children before these surveys was 526, although non-official estimates suggested that there was a much larger population. Most street children are male, maintain contact with their families, and are attending school. Children contribute to the family budget a weekly average of R$21.2 (£4.25, €6.0, US$7.5) for boys and R$17.7 (£3.55, €5.0, US$6.3) for girls. Conclusion: Street children of Aracaju have similar characteristics to street children from other cities in Brazil. The capture-recapture method could be a useful method to estimate the size of this highly mobile population. The major advantage of the method is its reproducibility, which makes it more acceptable than estimates from interested parties. PMID:14977695
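The study above combined three independent lists; as a heavily simplified two-sample sketch of the capture-recapture idea, Chapman's bias-corrected estimator can be computed as follows (counts are illustrative):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-sample capture-recapture estimate of
    population size: n1 caught in sample 1, n2 in sample 2, m caught in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```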
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... respondent is $3,012. \\1\\ Number of hours an employee works in a year. \\2\\ Average annual salary per employee... bases the cost estimate for respondents upon salaries within the Commission for professional and clerical support. This cost estimate includes respondents' total salary and employment benefits. Comments...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-29
... under the Resource Conservation and Recovery Act (RCRA); employees working at routine hazardous waste... 10.69 hours per response. Burden means the total time, effort, or financial resources expended by...: Annually. Estimated Total Average Number of Responses for Each Respondent: 1. Estimated Total Annual Hour...
77 FR 49476 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-16
... approximately 10,000 respondents will submit fingerprint cards each year. It also estimates that each respondent will submit 55 fingerprint cards per year. The staff estimates that the average number of hours necessary to comply with Rule 17f-2(a) by completing a fingerprint card is one-half hour. Thus, the total...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
...: Class I Motor Carriers of Passengers. Estimated Number of Respondents: 2 (per year). Estimated Time per Response: 18 minutes per response. Expiration Date: September 30, 2012. Frequency of Response: Annually and..., passenger carriers are classified into two groups: (1) Class I carriers are those having average annual...
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Dynamics of a black-capped chickadee population, 1958-1983
Loery, G.; Nichols, J.D.
1985-01-01
The dynamics of a wintering population of Black-capped Chickadees (Parus atricapillus) were studied from 1958-1983 using capture-recapture methods. The Jolly-Seber model was used to obtain annual estimates of population size, survival rate, and recruitment. The average estimated population size over this period was ~160 birds. The average estimated number of new birds entering the population each year and alive at the time of sampling was ~57. The arithmetic mean annual survival rate estimate was ~0.59. We tested hypotheses about possible relationships between these population parameters and (1) the natural introduction of Tufted Titmice (Parus bicolor) to the area, (2) the clear-cutting of portions of nearby red pine (Pinus resinosa) plantations, and (3) natural variations in winter temperatures. The chickadee population exhibited a substantial short-term decline following titmouse establishment, produced by decreases in both survival rate and number of new recruits. Survival rate declined somewhat after the initiation of the pine clear-cutting, but population size was very similar before and after clear-cutting. Weighted least squares analyses provided no evidence of a relationship between survival rate and either of two winter temperature variables.
ESTIMATION OF THE NUMBER OF INFECTIOUS BACTERIAL OR VIRAL PARTICLES BY THE DILUTION METHOD
Seligman, Stephen J.; Mickey, M. Ray
1964-01-01
Seligman, Stephen J. (University of California, Los Angeles), and M. Ray Mickey. Estimation of the number of infectious bacterial or viral particles by the dilution method. J. Bacteriol. 88:31–36. 1964.—For viral or bacterial systems in which discrete foci of infection are not obtainable, it is possible to obtain an estimate of the number of infectious particles by use of the quantal response if the assay system is such that one infectious particle can elicit the response. Unfortunately, the maximum likelihood estimate is difficult to calculate, but, by the use of a modification of Haldane's approximation, it is possible to construct a table which facilitates calculation of both the average number of infectious particles and its relative error. Additional advantages of the method are that the number of test units per dilution can be varied, the dilutions need not bear any fixed relation to each other, and the one-particle hypothesis can be readily tested. PMID:14197902
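Under the one-particle hypothesis described above, a test unit stays negative with probability exp(-m), where m is the mean number of infectious particles per unit at that dilution, so m can be estimated from the observed fraction of negative units. A minimal sketch of that single-dilution calculation (the paper's tabular Haldane-approximation method across multiple dilutions is more involved; counts here are illustrative):

```python
import math

def mean_particles_per_unit(n_tested, n_negative):
    """Poisson single-particle estimate of mean infectious particles per test unit.

    Under the one-particle hypothesis, P(unit stays negative) = exp(-m),
    so m = -ln(observed fraction of negative units).
    """
    if not 0 < n_negative <= n_tested:
        raise ValueError("need at least one negative unit at this dilution")
    return -math.log(n_negative / n_tested)
```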
Democracy Assistance in the Gulf
2012-04-01
Hesitant Bedfellows: The German Stiftungen and Party Aid in Africa. Working Paper, Coventry, UK: Centre for the Study of Globalisation and...
Hidayat, Budi; Pokhrel, Subhash
2010-01-01
We apply several estimators to Indonesian household data to estimate the relationship between health insurance and the number of outpatient visits to public and private providers. Once endogeneity of insurance is taken into account, there is a 63 percent increase in the average number of public visits by the beneficiaries of mandatory insurance for civil servants. Individuals' decisions to make first contact with private providers are affected by private insurance membership. However, insurance status does not make any difference for the number of future outpatient visits. PMID:20195429
Tied Mixtures in the Lincoln Robust CSR
1989-01-01
Quality of life and use of health care resources among patients with chronic depression
Villoro, Renata; Merino, María; Hidalgo-Vega, Alvaro
2016-01-01
Purpose: This study estimates the health-related quality of life and the health care resource utilization of patients diagnosed with chronic depression (CD) in Spain. Patients and methods: We used the Spanish National Health Survey 2011–2012, a cross-sectional survey representative at the national level that selects people aged between 18 and 64 years (n=14,691). We estimated utility indices through the EuroQol five-dimensional descriptive system questionnaire included in the survey. We calculated percentage use of health care resources (medical visits, hospitalizations, emergency services, and drug consumption) and average number of resources used when available. A systematic comparison was made between people diagnosed with CD and other chronic conditions (OCCs). The chi-square test, Mann–Whitney U-test, and Kruskal–Wallis test were used to determine the statistical significance of differences between comparison groups. Multivariate analyses (Poisson regression, logistic regression, and linear regression) were also carried out to assess the relationship between quality of life and consumption of health care resources. Results: Approximately 6.1% of the subjects aged between 18 and 64 years were diagnosed with CD (average age 48.3±11 years, 71.7% females). After controlling for age, sex, and total number of comorbidities, a diagnosis of CD reduced utility scores by 0.09 (P<0.05) vs OCCs, and increased the average number of hospitalizations by 15%, the average number of days at hospital by 51%, and the average number of visits to emergency services by 15% (P<0.05). CD also increased the average number of visits to secondary care by 14% and visits to general practitioners by 4%. People with CD had a higher probability of consuming drugs than people with OCCs (odds ratio [OR]: 1.24, P<0.05), but only 38.6% took antidepressants. Conclusion: People with CD had significantly lower health-related quality of life than people with OCCs.
CD was associated with increased hospital length of stay and involved a higher consumption of emergency services and drugs than OCCs. PMID:27713651
ERIC Educational Resources Information Center
Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.
2003-01-01
Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)
Conditional estimates of the number of podiform chromite deposits
Singer, D.A.
1994-01-01
A desirable guide for estimating the number of undiscovered mineral deposits is the number of known deposits per unit area from another well-explored permissive terrain. An analysis of the distribution of 805 podiform chromite deposits among ultramafic rocks in 12 subareas of Oregon and 27 counties of California is used to examine and extend this guide. The average number of deposits in this sample of 39 areas is 0.225 deposits per km2 of ultramafic rock; the frequency distribution is significantly skewed to the right. Probabilistic estimates can be made by using the observation that the lognormal distribution fits the distribution of deposits per unit area. A further improvement in the estimates is available by using the relationship between the area of ultramafic rock and the number of deposits. The number (N) of exposed podiform chromite deposits can be estimated by the following relationship: log10(N)=-0.194+0.577 log10(area of ultramafic rock). The slope is significantly different from both 0.0 and 1.0. Because the slope is less than 1.0, the ratio of deposits to area of permissive rock is a biased estimator when the area of ultramafic rock is different from the median 93 km2. Unbiased estimates of the number of podiform chromite deposits can be made with the regression equation and 80 percent confidence limits presented herein. © 1994 Oxford University Press.
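The regression stated in the abstract can be applied directly to a permissive area. A minimal sketch (the intercept and slope come from the abstract; the areas used in the test are illustrative, and this gives only the point estimate, not the 80 percent confidence limits):

```python
import math

def estimated_deposit_count(ultramafic_km2):
    """Point estimate of exposed podiform chromite deposits from the abstract's
    regression: log10(N) = -0.194 + 0.577 * log10(area of ultramafic rock in km2)."""
    return 10.0 ** (-0.194 + 0.577 * math.log10(ultramafic_km2))
```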
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
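A heavily simplified sketch of the Gaussian-likelihood idea above, assuming equal pool sizes and using method of moments rather than the paper's full likelihood or empirical Bayes machinery: each pool average of p individuals is treated as Normal with mean mu and variance sigma^2 / p, so the individual-level variance is the between-pool variance scaled up by the pool size (function name and data are illustrative):

```python
import statistics

def pooled_gaussian_estimates(pool_means, pool_size):
    """Estimate individual-level mean and variance from equal-size pool averages,
    treating each pool mean as Normal(mu, sigma^2 / pool_size)."""
    mu_hat = statistics.fmean(pool_means)
    sigma2_hat = statistics.pvariance(pool_means) * pool_size
    return mu_hat, sigma2_hat
```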
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Reynolds Number Scaling and Parameterization of Stratified Turbulent Wakes
2017-04-17
Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying
2012-03-01
This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the -0.1% to 1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
Salgado, María V; Pérez, Adriana; Abad-Vivero, Erika N; Thrasher, James F; Sargent, James D; Mejía, Raúl
2016-04-01
Smoking scenes in movies promote adolescent smoking onset; thus, the analysis of the number of images of smoking in movies really reaching adolescents has become a subject of increasing interest. The aim of this study was to estimate the level of exposure to images of smoking in movies watched by adolescents in Argentina and Mexico. First-year secondary school students from Argentina and Mexico were surveyed. One hundred highest-grossing films from each year of the period 2009-2013 (Argentina) and 2010-2014 (Mexico) were analyzed. Each participant was assigned a random sample of 50 of these movies and was asked if he/she had watched them. The total number of adolescents who had watched each movie in each country was estimated and was multiplied by the number of smoking scenes (occurrences) in each movie to obtain the number of gross smoking impressions seen by secondary school adolescents from each country. Four hundred and twenty-two movies were analyzed in Argentina and 433 in Mexico. Exposure to more than 500 million smoking impressions was estimated for adolescents in each country, averaging 128 and 121 minutes of smoking scenes seen by each Argentine and Mexican adolescent, respectively. Although 15-, 16-, and 18-rated movies had more smoking scenes on average, movies rated for younger teenagers were responsible for the highest number of smoking scenes watched by the students (67.3% in Argentina and 54.4% in Mexico) due to their larger audience. At the population level, movies aimed at children are responsible for the highest tobacco burden seen by adolescents.
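The gross-impressions measure described above is a sum over movies of estimated viewers times smoking occurrences. A minimal sketch with illustrative viewer and occurrence counts:

```python
def gross_impressions(audience_by_movie):
    """Gross smoking impressions: for each movie, (estimated adolescent viewers)
    times (number of smoking occurrences in the movie), summed over all movies."""
    return sum(viewers * occurrences for viewers, occurrences in audience_by_movie)
```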
Borras, Josep M; Grau, Cai; Corral, Julieta; Wong, Karen; Barton, Michael B; Ferlay, Jacques; Bray, Freddie; Lievens, Yolande
2018-02-01
The optimal number of radiotherapy fractions is a relevant input for planning resource needs. An estimation of the total number of fractions by country and tumour site is assessed for 2012 and 2025. European cancer incidence data by tumour site and country for 2012 and 2025 were extracted from the GLOBOCAN database. Incidence and stage data were introduced in the Australian Collaboration for Cancer Outcomes Research and Evaluation (CCORE) model, producing an evidence-based proportion of incident cases with an indication for radiotherapy and fractions by indication. An indication was defined as a clinical situation in which radiotherapy was the treatment of choice. The total number of fractions if radiotherapy was given according to guidelines to all patients with an indication in Europe was estimated to be 30 million for 2012; with a forecasted increase of 16.1% by 2025, yet with differences by country and tumour. The average number of fractions per course was 17.6 with a small range of differences following stage at diagnosis. Among the treatments with radical intent the average was 24 fractions, while it decreased to 2.5 among palliative treatments. An increase in the total number of fractions is expected in many European countries in the coming years following the trends in cancer incidence. In planning radiotherapy resources, these increases should be balanced to the evolution towards hypofractionation, along with increased complexity and quality assurance. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.
2012-01-01
As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average solution MODIS dataset agrees more closely with MAN than the best solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.
77 FR 45639 - Prescription Drug User Fee Rates for Fiscal Year 2013
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... compensation and benefits (PC&B) paid per full-time equivalent position (FTE) at FDA for the first 3 of the 4... preceding FY 2013. The 3 year average is 2.17 percent. Table 1--FDA Personnel Compensation and Benefits (PC... 108.25 122.3 The FY 2013 application fee is estimated by dividing the average number of full...
76 FR 53910 - Fee for Using a Priority Review Voucher in Fiscal Year 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
... ordinarily takes 10 months into 6 months, OPP estimates that a multiplier of 1.67 (10 months divided by 6... FY 2010. Dividing $6,856,000 (rounded to the nearest thousand dollars) by 13 (the total number of... adjust the FY 2010 cost figure above by the average amount by which FDA's average salary and benefit...
Using Propensity Score Matching Methods to Improve Generalization from Randomized Experiments
ERIC Educational Resources Information Center
Tipton, Elizabeth
2011-01-01
The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-05
... 4079, Request for Determination of Possible Loss of United States Citizenship, (No. 1405-0178) ACTION... Information Collection: Request for Determination of Possible Loss of United States Citizenship. OMB Control... Number of Respondents: 1,132. Estimated Number of Responses: 1,132. Average Hours Per Response: 15...
The Changing Recreational Use of the Boundary Waters Canoe Area
Robert C. Lucas
1967-01-01
Although data on use for 1961 and 1966 are not always comparable, a bare-minimum estimate of the increase in number of visitors between those years is 19 percent. The greatest increase was in number of canoeists and boaters, which rose on the average 9 or 10 percent a year.
Lopez, Anna Lena; You, Young Ae; Kim, Young Eun; Sah, Binod; Maskery, Brian; Clemens, John
2012-01-01
Objective: To estimate the global burden of cholera using population-based incidence data and reports. Methods: Countries with a recent history of cholera were classified as endemic or non-endemic, depending on whether they had reported cholera cases in at least three of the five most recent years. The percentages of the population in each country that lacked access to improved sanitation were used to compute the populations at risk for cholera, and incidence rates from published studies were applied to groups of countries to estimate the annual number of cholera cases in endemic countries. The estimates of cholera cases in non-endemic countries were based on the average numbers of cases reported from 2000 to 2008. Literature-based estimates of cholera case-fatality rates (CFRs) were used to compute the variance-weighted average cholera CFRs for estimating the number of cholera deaths. Findings: About 1.4 billion people are at risk for cholera in endemic countries. An estimated 2.8 million cholera cases occur annually in such countries (uncertainty range: 1.4–4.3) and an estimated 87 000 cholera cases occur in non-endemic countries. The incidence is estimated to be greatest in children less than 5 years of age. Every year about 91 000 people (uncertainty range: 28 000 to 142 000) die of cholera in endemic countries and 2500 people die of the disease in non-endemic countries. Conclusion: The global burden of cholera, as determined through a systematic review with clearly stated assumptions, is high. The findings of this study provide a contemporary basis for planning public health interventions to control cholera. PMID:22461716
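The variance-weighted averaging of literature CFRs mentioned above is a standard inverse-variance weighted mean; a minimal sketch (the rates and variances in the test are illustrative, not the study's data):

```python
def variance_weighted_cfr(cfrs, variances):
    """Inverse-variance weighted average of case-fatality rate estimates:
    each estimate is weighted by the reciprocal of its variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, cfrs)) / sum(weights)
```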
Relationship between road traffic accidents and conflicts recorded by drive recorders.
Lu, Guangquan; Cheng, Bo; Kuzumaki, Seigo; Mei, Bingsong
2011-08-01
Road traffic conflicts can be used to estimate the probability of accident occurrence, assess road safety, or evaluate road safety programs if the relationship between road traffic accidents and conflicts is known. To this end, we propose a model for the relationship between road traffic accidents and conflicts recorded by drive recorders (DRs). DRs were installed in 50 cars in Beijing to collect records of traffic conflicts. Data containing 1366 conflicts were collected in 193 days. The hourly distributions of conflicts and accidents were used to model the relationship between accidents and conflicts. To eliminate time series and base number effects, we defined and used 2 parameters: average annual number of accidents per 10,000 vehicles per hour and average number of conflicts per 10,000 vehicles per hour. A model was developed to describe the relationship between the two parameters. If A(i) = average annual number of accidents per 10,000 vehicles per hour at hour i, and E(i) = average number of conflicts per 10,000 vehicles per hour at hour i, the relationship can be expressed as [Formula in text] (α>0, β>0). The average number of traffic accidents increases as the number of conflicts rises, but the rate of increase decelerates as the number of conflicts increases further. The proposed model can describe the relationship between road traffic accidents and conflicts in a simple manner. According to our analysis, the model fits the present data.
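The abstract gives only the constraints α > 0, β > 0 and states that accidents rise with conflicts at a decelerating rate; the exact functional form is elided ("[Formula in text]"). Purely as an illustration, a saturating form such as A(i) = α(1 − e^(−βE(i))), which has exactly these properties, can be fitted to hourly data with a standard nonlinear least-squares routine. The form and every number below are assumptions, not the paper's model or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def accident_rate(E, alpha, beta):
    """Hypothetical saturating form: accidents rise with conflicts
    but at a decelerating rate (alpha > 0, beta > 0)."""
    return alpha * (1.0 - np.exp(-beta * E))

# Synthetic hourly data: E = conflicts per 10,000 vehicles per hour,
# A = average annual accidents per 10,000 vehicles per hour
rng = np.random.default_rng(0)
E = np.linspace(1.0, 40.0, 24)                    # 24 hourly bins
A_true = accident_rate(E, alpha=5.0, beta=0.08)
A_obs = A_true + rng.normal(0.0, 0.1, E.size)     # measurement noise

(alpha_hat, beta_hat), _ = curve_fit(accident_rate, E, A_obs, p0=(1.0, 0.1))
print(f"alpha = {alpha_hat:.2f}, beta = {beta_hat:.3f}")
```

Any other monotone, concave form (e.g. a power law with exponent below one) would fit the qualitative description equally well; only the original paper fixes the choice.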
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-05
... INFORMATION CONTACT: Crystal Rennie, Enterprise Records Service (005R1B), Department of Veterans Affairs, 810 Vermont Avenue NW, Washington, DC 20420, (202) 461-7485 or email crystal[email protected]va.gov . Please refer to.... Estimated Average Burden per Respondent: 20 minutes. Frequency of Response: One time. Estimated Number of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-19
... INFORMATION CONTACT: Crystal Rennie, Enterprise Records Service (005R1B), Department of Veterans Affairs, 810 Vermont Avenue NW., Washington, DC 20420, (202) 632-7492, fax (202) 632-7583 or email crystal[email protected] Estimated Average Burden per Respondent: 15 minutes. Frequency of Response: One time. Estimated Number of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... landline and/or a personal cell phone. Estimated Number of Respondents--30 pretest respondents, 6,000... a total of 12,460 respondents. Estimated Time per Response--20 minutes per pretest and main survey... survey will be preceded by a pretest administered to 30 respondents. Interview length will average 20...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... information. \\5\\ Estimated number of hours an employee works each year = 2080, estimated average annual cost..., along, from, or in any of the streams or other bodies of water over which Congress has jurisdiction... water or water power from any Government dam. FERC-515: The information collected under the requirements...
System-theoretic Interpretation of the Mode Sensing Hypothesis
2014-08-01
Prof. Rafal Zbikowski; Prof. Graham Taylor. ...insect flight control has more recently been modelled. The approach that we adopt here differs from previous, related work in several fundamental...
2008-01-01
...eosinophilia is seen during the vesicular stage of incontinentia pigmenti (Carney 1976). The presence of cone-shaped teeth, nail dysplasia, patchy...
Transitions: Managing the Transfer of Security Responsibility
2010-02-05
A concept paper.
Associations between job burnout and self-efficacy: a meta-analysis.
Shoji, Kotaro; Cieslak, Roman; Smoktunowicz, Ewelina; Rogala, Anna; Benight, Charles C; Luszczynska, Aleksandra
2016-07-01
This study aimed to systematically review and meta-analyze the strength of associations between self-efficacy and job burnout (the global index and its components). We investigated whether these associations would be moderated by: (a) the type of measurement of burnout and self-efficacy, (b) the type of occupation, (c) the number of years of work experience and age, and (d) culture. We systematically reviewed and analyzed 57 original studies (N = 22,773) conducted among teachers (k = 29), health-care providers (k = 17), and other professionals (k = 11). The average effect size estimate for the association between self-efficacy and burnout was of medium size (-.33). Among the three burnout components, the largest average effect estimate (-.49) was found for lack of accomplishment. The average effect estimates were similar regardless of whether burnout and self-efficacy were measured with general or context-specific instruments. Significantly larger average effects were found among teachers (compared with health-care providers), older workers, and those with longer work experience. Significant self-efficacy-burnout relationships were observed across countries, although the strength of associations varied across burnout components, participants' profession, and their age.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. A close approximation to the likelihood function for Gaussian PARMA processes yields efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are included sequentially, and a selection criterion is given for determining the optimal number of harmonics. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
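The maximum-likelihood machinery described above is involved; as a minimal sketch of the core idea (parameters that vary with the season), a periodic AR(1), the simplest special case of a PARMA model, can be simulated and its seasonal coefficients recovered by per-season least squares. The four-season setup below is illustrative, not the paper's monthly streamflow application.

```python
import numpy as np

def simulate_par1(phi, n_years, sigma=1.0, seed=1):
    """Simulate a periodic AR(1): x_t = phi[t mod s] * x_{t-1} + e_t."""
    s = len(phi)
    rng = np.random.default_rng(seed)
    x = np.zeros(n_years * s)
    for t in range(1, x.size):
        x[t] = phi[t % s] * x[t - 1] + rng.normal(0.0, sigma)
    return x

def fit_par1(x, s):
    """Least-squares estimates of the s seasonal AR(1) coefficients."""
    phi_hat = np.empty(s)
    t = np.arange(1, x.size)
    for m in range(s):
        mask = (t % s) == m            # observations belonging to season m
        y, z = x[t[mask]], x[t[mask] - 1]
        phi_hat[m] = (z @ y) / (z @ z)
    return phi_hat

phi_true = [0.8, 0.3, -0.4, 0.6]       # 4 "seasons" for brevity (12 if monthly)
x = simulate_par1(phi_true, n_years=2000)
print(np.round(fit_par1(x, 4), 2))
```

A full PARMA fit would add moving-average terms and the Fourier parameterization described in the abstract; the per-season regression above only illustrates periodicity of the parameters.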
Short-term Lost Productivity per Victim: Intimate Partner Violence, Sexual Violence, or Stalking.
Peterson, Cora; Liu, Yang; Kresnow, Marcie-Jo; Florence, Curtis; Merrick, Melissa T; DeGue, Sarah; Lokey, Colby N
2018-07-01
The purpose of this study is to estimate victims' lifetime short-term lost productivity because of intimate partner violence, sexual violence, or stalking. U.S. nationally representative data from the 2012 National Intimate Partner and Sexual Violence Survey were used to estimate a regression-adjusted average per victim (female and male) and total population number of cumulative short-term lost work and school days (or lost productivity) because of victimizations over victims' lifetimes. Victims' lost productivity was valued using a U.S. daily production estimate. Analysis was conducted in 2017. Non-institutionalized adults with some lifetime exposure to intimate partner violence, sexual violence, or stalking (n=6,718 respondents; survey-weighted n=130,795,789) reported nearly 741 million lost productive days because of victimizations by an average of 2.5 perpetrators per victim. The adjusted per victim average was 4.9 (95% CI=3.9, 5.9) days, controlling for victim, perpetrator, and violence type factors. The estimated societal cost of this short-term lost productivity was $730 per victim, or $110 billion across the lifetimes of all victims (2016 USD). Factors associated with victims having a higher number of lost days included a higher number of perpetrators and being female, as well as sexual violence, physical violence, or stalking victimization by an intimate partner perpetrator, stalking victimization by an acquaintance perpetrator, and sexual violence or stalking victimization by a family member perpetrator. Short-term lost productivity represents a minimum economic valuation of the immediate negative effects of intimate partner violence, sexual violence, and stalking. Victims' lost productivity affects family members, colleagues, and employers. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Wierzbicka, A.; Bohgard, M.; Pagels, J. H.; Dahl, A.; Löndahl, J.; Hussein, T.; Swietlicki, E.; Gudmundsson, A.
2015-04-01
For the assessment of personal exposure, information about the concentration of pollutants when people are actually in a given indoor environment (occupancy time) is of prime importance. However, this kind of data is frequently not reported. The aim of this study was to assess differences in particle characteristics between occupancy time and the total monitoring period, the latter being the most frequently used averaging time in the published data. Seven indoor environments were selected in Sweden and Finland: an apartment, two houses, two schools, a supermarket, and a restaurant. They were assessed for particle number and mass concentrations and number size distributions. The measurements, using a Scanning Mobility Particle Sizer and two photometers, were conducted for seven consecutive days during winter in each location. Particle concentrations in residences and schools were, as expected, highest during occupancy time. In the apartment, average and median PM2.5 mass concentrations during occupancy were 29% and 17% higher, respectively, than during the total monitoring period. In both schools, the average and median PM2.5 mass concentrations were higher during teaching hours than during the total monitoring period, by 16% and 32%, respectively. As for particle number concentrations (PNC), in the apartment the average and median values during occupancy were 33% and 58% higher, respectively, than during the total monitoring period. In both houses and schools the average and median PNC were similar for the occupancy and total monitoring periods. General conclusions cannot be drawn from measurements in this limited number of indoor environments. However, the results confirm a strong dependence on the type and frequency of particle-generating indoor activities, and a strong site specificity.
The results also indicate that the exclusion of data series during non-occupancy periods can improve the estimates of particle concentrations and characteristics suitable for exposure assessment, which is crucial for estimating health effects in epidemiological and toxicological studies.
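The distinction between occupancy-time and total-period averages amounts to a masking operation over a concentration time series. The sketch below uses a one-week, minute-resolved synthetic series with hypothetical evening activity peaks and a hypothetical 17:00-08:00 occupancy window; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
minutes = 7 * 24 * 60                          # one week at 1-min resolution
conc = rng.lognormal(mean=9.5, sigma=0.4, size=minutes)   # PNC, particles/cc

minute_of_day = np.arange(minutes) % (24 * 60)
# Hypothetical evening activity (e.g. cooking) raises concentrations
evening = (minute_of_day >= 18 * 60) & (minute_of_day < 20 * 60)
conc[evening] += 3.0e4

# Hypothetical occupancy window: at home from 17:00 to 08:00
occupied = (minute_of_day >= 17 * 60) | (minute_of_day < 8 * 60)

avg_total, avg_occupied = conc.mean(), conc[occupied].mean()
med_total, med_occupied = np.median(conc), np.median(conc[occupied])
print(f"occupancy average / total average = {avg_occupied / avg_total:.2f}")
```

Because the particle-generating activity falls inside the occupancy window, the occupancy-time average exceeds the total-period average, mirroring the apartment result reported above.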
Red fox predation on breeding ducks in midcontinent North America
Sargeant, Alan B.; Allen, Stephen H.; Eberhardt, Robert T.
1984-01-01
Red fox (Vulpes vulpes) predation on nesting ducks was assessed by examining 1,857 adult duck remains found at 1,432 fox rearing dens from 1968 to 1973. Dabbling ducks were much more vulnerable to foxes than diving ducks. Dabbling ducks (1,798) found at dens consisted of 27% blue-winged teals (Anas discors), 23% mallards (A. platyrhynchos), 20% northern pintails (A. acuta), 9% northern shovelers (Spatula clypeata), 8% gadwalls (A. strepera), 3% green-winged teals (A. crecca), 2% American wigeons (A. americana), and 10% unidentified. Relative abundance of individual species and nesting chronology were the most important factors affecting composition of ducks taken by foxes. Seventy-six percent of 1,376 adult dabbling ducks and 40% of 30 adult diving ducks for which sex was determined were hens. In western North Dakota and western South Dakota, 65% of mallard and northern pintail remains found at dens were hens compared with 76% in eastern North Dakota and eastern South Dakota (P < 0.05). Percentage hens varied among the 5 most common dabbling ducks found at dens. In eastern North Dakota and eastern South Dakota, where predation on ducks was greatest, an average of 64% of gadwall, 73% of northern pintail, 81% of blue-winged teal, 81% of mallard, and 90% of northern shoveler remains found at dens were hens. Percentage hens among duck remains found at dens increased as the duck nesting season progressed. Numbers of adult ducks found at individual dens ranged from 0 to 67. The average number of ducks found in and around den entrances was used as an index of fox predation rates on ducks. Predation rate indices ranged from 0.01 duck/den in Iowa to 1.80 ducks/den in eastern North Dakota. 
Average annual predation rate indices for dabbling ducks in a 3-county intensive study area in eastern North Dakota were closely correlated with May pond numbers (r = 0.874, P < 0.10) and duck population size (r = 0.930, P < 0.05), but all species were not affected in the same manner or to the same degree. Drought had least effect on populations and predation rate indices of mallards and gadwalls and had greatest effect on those of northern pintails and northern shovelers. Hens of early nesting species were more vulnerable to foxes than hens of late nesting species. Predation rate indices were expanded to estimate total numbers of ducks taken by fox families during the denning season. Estimated numbers of dabbling ducks taken annually by individual fox families in 2 physiographic regions comprising the intensive study area ranged from 16.1 to 65.9. Predation was highest during wet years and lowest during dry years and averaged lower, but was more variable, in the region where tillage was greatest and wetland water levels were least stable. Predation in the intensive study area averaged 2.97 adult dabbling ducks/ km2/year and represented an estimated average annual loss of 13.5% of hen and 4.5% of drake populations in that area. Of 5,402 individual food items found at dens in the intensive study area, 24% were adult ducks. Ducks made up an estimated maximum average of 16% of the prey biomass required by fox families during the denning season. The average annual take of adult ducks by foxes in the midcontinent area was estimated to be about 900,000. This estimate included both scavenged and fox-killed ducks, as well as ducks taken after the denning season. Fox impact on midcontinent ducks was greatest in eastern North Dakota where both fox and duck densities were relatively high. Predation in that area was likely increased by environmental factors, especially intensive agriculture that concentrated nesting and reduced prey abundance. 
Predation by red foxes and other predators severely reduces duck production in the midcontinent area. Effective management to increase waterfowl production will necessitate coping with or reducing high levels of predation.
Sheffield, L.M.; Gall, Adrian E.; Roby, D.D.; Irons, D.B.; Dugger, K.M.
2006-01-01
Least Auklets (Aethia pusilla (Pallas, 1811)) are the most abundant species of seabird in the Bering Sea and offer a relatively efficient means of monitoring secondary productivity in the marine environment. Counting auklets on surface plots is the primary method used to track changes in numbers of these crevice-nesters, but counts can be highly variable and may not be representative of the number of nesting individuals. We compared average maximum counts of Least Auklets on surface plots with density estimates based on mark–resight data at a colony on St. Lawrence Island, Alaska, during 2001–2004. Estimates of breeding auklet abundance from mark–resight averaged 8 times greater than those from maximum surface counts. Our results also indicate that average maximum surface counts are poor indicators of breeding auklet abundance and do not vary consistently with auklet nesting density across the breeding colony. Estimates of Least Auklet abundance from mark–resight were sufficiently precise to meet management goals for tracking changes in seabird populations. We recommend establishing multiple permanent banding plots for mark–resight studies on colonies selected for intensive long-term monitoring. Mark–resight is more likely to detect biologically significant changes in size of auklet breeding colonies than traditional surface count techniques.
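The study's actual mark-resight abundance model is not specified in this abstract; as a simple stand-in for the general idea, Chapman's bias-corrected Lincoln-Petersen estimator converts counts of marked and unmarked birds seen during a resight survey into an abundance estimate. All numbers below are hypothetical.

```python
def chapman_estimate(marked, seen_total, seen_marked):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    marked:      banded birds known to be in the population
    seen_total:  birds counted during a resight survey
    seen_marked: how many of the counted birds carried bands
    (Illustrative only; the study's mark-resight model is richer.)"""
    return (marked + 1) * (seen_total + 1) / (seen_marked + 1) - 1

# Hypothetical numbers: 100 banded auklets, 400 birds resighted, 50 banded
print(round(chapman_estimate(100, 400, 50)))
```

Repeating the resight survey gives replicate estimates whose spread quantifies precision, which is how mark-resight can meet the monitoring targets that raw surface counts miss.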
Sizing the cannabis market: a demand-side and user-specific approach in seven European countries.
van Laar, Margriet; Frijns, Tom; Trautmann, Franz; Lombi, Linda
2013-06-01
Demand-based estimates of total cannabis consumption rarely consider differences among user types and variation across countries. To describe cannabis consumption patterns and estimate annual consumption for different user types across EU Member States, a web survey in Bulgaria, the Czech Republic, Italy, the Netherlands, Portugal, Sweden and the United Kingdom (England & Wales) collected data on cannabis use patterns from 3,922 persons who had consumed cannabis at least once in the past year. They were classified into four groups based on their number of use days in the past 12 months: infrequent users or chippers (<11 days), occasional users (11-50 days), regular users (51-250 days) and intensive users (>250 days). User-type-specific data on typical amounts consumed were matched with data on numbers of users per user type estimated from existing population surveys, taking differences in mode of consumption, age and gender into account. Estimates were supplemented with data from populations of problem users to compensate for undercoverage. Results showed remarkably consistent differences among user groups across countries. Both the average number of units consumed per typical use day and the average amount of cannabis consumed per unit increased with frequency of use. In all countries except Portugal, intensive users formed the smallest group of cannabis users but were responsible for the largest part of total annual cannabis consumption. Annual cannabis consumption varied across countries, but confidence intervals were wide. Results are compared with previous estimates and discussed in the context of improving estimation methods.
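The user-type-specific aggregation amounts to multiplying, for each group, the number of users by use days per year and by the amount consumed per use day, then summing. The figures below are invented solely to illustrate the pattern reported above (intensive users: smallest group, largest share); they are not the study's estimates.

```python
# Illustrative figures only: (number of users, use days/year, grams/use day)
user_types = {
    "infrequent": (500_000, 6, 0.3),
    "occasional": (250_000, 30, 0.5),
    "regular":    (120_000, 150, 0.9),
    "intensive":  (60_000, 300, 1.5),
}

total_g = sum(n * days * g for n, days, g in user_types.values())
total_kg = total_g / 1000
shares = {t: n * days * g / total_g for t, (n, days, g) in user_types.items()}

print(f"total: {total_kg:,.0f} kg/year")
print(f"intensive users' share: {shares['intensive']:.0%}")
```

Even with the smallest headcount, the intensive group dominates because both its use days and its per-day amount are largest; a per-country estimate repeats this computation with country-specific user counts.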
Is the Current Royal Australian Air Force an Air Force of Strategic Influence?
2015-02-17
...training in 1991. He is a command pilot with over 6500 flying hours, primarily in AP-3C Orion and Hawk 127 aircraft. He has twice been a Staff...
Point counts from clustered populations: Lessons from an experiment with Hawaiian crows
Hayward, G.D.; Kepler, C.B.; Scott, J.M.
1991-01-01
We designed an experiment to identify factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: average difference between number of Alala calling and number heard was 3.24 (±0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.
Variability of dental cone beam CT grey values for density estimations
Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K
2013-01-01
Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
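The correlation and recalibration step described above can be sketched as a linear fit between paired CBCT grey values and MSCT CT numbers measured on the phantom inserts, followed by the residual error over the medium-density inserts. The paired values below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical paired measurements for the six phantom inserts:
# air, PMMA, HA 50, HA 100, HA 200, aluminium (illustrative values only)
msct_hu   = np.array([-1000.0, 120.0, 200.0, 330.0, 560.0, 2200.0])
cbct_grey = np.array([-955.0, 210.0, 240.0, 430.0, 610.0, 2360.0])

# Recalibration: linear map from CBCT grey values to pseudo-Hounsfield units
slope, intercept = np.polyfit(cbct_grey, msct_hu, 1)
pseudo_hu = slope * cbct_grey + intercept

r = np.corrcoef(cbct_grey, msct_hu)[0, 1]   # Pearson correlation
mid = slice(1, 5)                           # PMMA through HA 200
avg_error = np.mean(np.abs(pseudo_hu[mid] - msct_hu[mid]))
print(f"r = {r:.4f}, average medium-density error = {avg_error:.0f} HU")
```

As the abstract notes, a high overall correlation does not guarantee small recalibration errors in the medium-density range, which is exactly what the residual computation exposes.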
Develop Efficient Leak Proof M1 Abrams Plenum Seal
2014-05-07
NATO’s Relevance in the Twenty-First Century
2012-03-22
Colonel John K. Jones. ...Christopher Coker, Globalisation and Insecurity in the Twenty-first Century: NATO and the Management of Risk (The International Institute for Strategic...
Development System for FPGA-Controlled, Portable Processing Systems
2015-12-08
Maciej Noras; Aidan Browne.
Company Reinvesting and Corporate ROI
2011-08-01
Rob Holder, HP-DOJ/DOS Account Executive.
Providência, Rui; Candeias, Rui; Morais, Carlos; Reis, Hipólito; Elvas, Luís; Sanfins, Vitor; Farinha, Sara; Eggington, Simon; Tsintzos, Stelios
2014-05-06
To estimate the short- and long-term financial impact of early referral for implantable loop recorder diagnostic (ILR) versus conventional diagnostic pathway (CDP) in the management of unexplained syncope (US) in the Portuguese National Health Service (PNHS). A Markov model was developed to estimate the expected number of hospital admissions due to US and its respective financial impact in patients implanted with ILR versus CDP. The average cost of a syncope episode admission was estimated based on Portuguese cost data and landmark papers. The financial impact of ILR adoption was estimated for a total of 197 patients with US, based on the number of syncope admissions per year in the PNHS. Sensitivity analysis was performed to take into account the effect of uncertainty in the input parameters (hazard ratio of death; number of syncope events per year; probabilities and unit costs of each diagnostic test; probability of trauma and yield of diagnosis) over three-year and lifetime horizons. The average cost of a syncope event was estimated to be between 1,760€ and 2,800€. Over a lifetime horizon, the total discounted costs of hospital admissions and syncope diagnosis for the entire cohort were 23% lower amongst patients in the ILR group compared with the CDP group (1,204,621€ for ILR, versus 1,571,332€ for CDP). The utilization of ILR leads to an earlier diagnosis and lower number of syncope hospital admissions and investigations, thus allowing significant cost offsets in the Portuguese setting. The result is robust to changes in the input parameter values, and cost savings become more pronounced over time.
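A Markov cohort model of this kind tracks, cycle by cycle, the fraction of patients still undiagnosed, accrues discounted admission costs, and compares strategies that differ in their yearly diagnostic yield. The sketch below is a deliberately minimal two-strategy version with hypothetical inputs (diagnostic yields, admission rate, and an admission cost inside the 1,760-2,800 EUR range quoted above); it omits device and test costs and is not the paper's calibrated model.

```python
def cohort_cost(p_diag_per_year, admissions_per_year, cost_per_admission,
                n_patients=197, years=40, discount=0.03):
    """Discounted syncope-admission cost for a cohort in which the
    undiagnosed fraction shrinks by p_diag_per_year each cycle."""
    undiagnosed, total = 1.0, 0.0
    for t in range(years):
        cost = undiagnosed * admissions_per_year * cost_per_admission
        total += cost / (1 + discount) ** t
        undiagnosed *= (1 - p_diag_per_year)
    return n_patients * total

# Hypothetical yields: ILR diagnoses faster than the conventional pathway
ilr = cohort_cost(p_diag_per_year=0.40, admissions_per_year=0.5,
                  cost_per_admission=2280)
cdp = cohort_cost(p_diag_per_year=0.15, admissions_per_year=0.5,
                  cost_per_admission=2280)
print(f"ILR {ilr:,.0f} EUR vs CDP {cdp:,.0f} EUR")
```

With a higher diagnostic yield, fewer patient-years are spent undiagnosed, so admission costs fall; this is the mechanism behind the 23% lifetime saving reported above, though the real model also weighs diagnostic-test costs, trauma, and mortality.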
2014-05-09
AGOR 28.
2016 Quantum Science Gordon Research Conference
2018-01-10
Jun Ye.
75 FR 35710 - Extended Carryback of Losses to or From a Consolidated Group
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... proposed regulations, Grid Glyer, (202) 622-7930, concerning submissions of comments, Regina Johnson (202... these proposed regulations is in Sec. Sec. 1.1502-21(b)(3)(ii)(C)(2) and 1.1502-21(b)(3)(ii)(C)(3). The...: 1,000 hours. Estimated average annual burden hours per respondent: 0.25 hours. Estimated number of...
Kubota, Hitoshi; Watanabe, Katsutoshi
2012-01-01
The maintenance of genetic diversity is one of the chief concerns in the captive breeding of endangered species. Using microsatellite and mtDNA markers, we examined the effects of two key variables (parental number and duration of breeding period) on effective population size (N(e)) and genetic diversity of offspring in an experimental breeding program for the endangered Tokyo bitterling, Tanakia tanago. Average heterozygosity and number of alleles of offspring estimated from microsatellite data increased with parental number in a breeding aquarium, and exhibited higher values for a long breeding period treatment (9 weeks) compared with a short breeding period (3 weeks). Haplotype diversity in mtDNA of offspring decreased with the reduction in parental number, and this tendency was greater for the short breeding period treatment. Genetic estimates of N(e) obtained with two single-sample estimation methods were consistently higher for the long breeding period treatment with the same number of parental fish. Average N(e)/N ratios ranged from 0.5 to 1.4, and were especially high in the long breeding period treatments with small and medium parental numbers. Our results suggest that the spawning intervals of females and alternative mating behaviors of males influence the effective size and genetic diversity of offspring in bitterling. To maintain the genetic diversity of captive T. tanago, we recommend that captive breeding programs be conducted for a sufficiently long period with an optimal level of parental density, as well as an adequate number of parents. © 2011 Wiley Periodicals, Inc.
Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network
NASA Astrophysics Data System (ADS)
Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji
2015-12-01
By using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI), we generated right ascension and declination angle data from optical observation of Low Earth Orbit (LEO) satellites. We performed an analysis to determine the optimum number of observations needed per arc for successful orbit estimation. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco). The Daejeon Observatory functions as a test bed. In this study, the observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO). These satellites were observed from the test bed and the Songino Observatory of the OWL network during 21 nights in 2014 and 2015. After we estimated the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared the differences between the orbit estimates for each case and the Two-Line Element set (TLE) from the Joint Space Operations Center (JSpOC). We then averaged these differences and selected the optimal number of observation points by comparing the averages.
Effective transient behaviour of inclusions in diffusion problems
NASA Astrophysics Data System (ADS)
Brassart, Laurence; Stainier, Laurent
2018-06-01
This paper is concerned with the effective transport properties of heterogeneous media in which there is a high contrast between the phase diffusivities. In this case the transient response of the slow phase induces a memory effect at the macroscopic scale, which needs to be included in a macroscopic continuum description. This paper focuses on the slow phase, which we take as a dispersion of inclusions of arbitrary shape. We revisit the linear diffusion problem in such inclusions in order to identify the structure of the effective (average) inclusion response to a chemical load applied on the inclusion boundary. We identify a chemical creep function (similar to the creep function of viscoelasticity), from which we construct estimates with a reduced number of relaxation modes. The proposed estimates admit an equivalent representation based on a finite number of internal variables. These estimates allow us to predict the average inclusion response under arbitrary time-varying boundary conditions at very low computational cost. A heuristic generalisation to concentration-dependent diffusion coefficient is also presented. The proposed estimates for the effective transient response of an inclusion can serve as a building block for the formulation of multi-inclusion homogenisation schemes.
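The idea of approximating the full transient inclusion response by a reduced number of relaxation modes can be illustrated on the classical case of a sphere with constant diffusivity subjected to a step change in surface concentration, whose average fractional uptake is an infinite sum of exponentially decaying modes (Crank's series). Truncating the series to a few modes corresponds to keeping a few internal variables:

```python
import numpy as np

def sphere_uptake(tau, n_modes):
    """Fractional uptake of a sphere after a step change in surface
    concentration, truncated to n_modes relaxation modes (Crank's series):
    M(t)/M_inf = 1 - sum_n 6/(n*pi)^2 * exp(-(n*pi)^2 * tau),
    where tau = D*t/R^2 is dimensionless time."""
    n = np.arange(1, n_modes + 1)[:, None]
    return 1.0 - np.sum(6.0 / (n * np.pi) ** 2
                        * np.exp(-(n * np.pi) ** 2 * tau), axis=0)

tau = np.linspace(0.01, 0.5, 50)
exact = sphere_uptake(tau, 200)     # 200 modes: effectively the full series
reduced = sphere_uptake(tau, 3)     # 3 modes = 3 internal variables
print(f"max error with 3 modes: {np.max(np.abs(exact - reduced)):.2e}")
```

The truncation error is confined to early times (small tau), where the discarded fast modes matter; this is the trade-off the proposed estimates manage when choosing how many relaxation modes to retain for arbitrary inclusion shapes.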
Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2016-04-01
The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate way to acquire land use change information; however, topographic effects are commonly present in remotely sensed imagery and degrade land use classification. This research selected summer and winter Landsat-5 TM scenes from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized into four groups (forest, grassland, agriculture, and river) using K-means classification. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Topographic correction is therefore recommended to normalize topographic effects in satellite remote sensing data before estimating the CN.
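For orientation, the CN feeds the standard SCS-CN runoff relation; a minimal sketch of the textbook equations (inch units; not code from this study):

```python
def scs_runoff_inches(precip_in, cn):
    """Direct runoff Q (inches) from the standard SCS-CN equations.

    S = 1000/CN - 10 (potential maximum retention),
    Ia = 0.2*S (initial abstraction),
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
    """
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if precip_in <= ia:
        return 0.0
    return (precip_in - ia) ** 2 / (precip_in - ia + s)
```

A higher CN (e.g. urbanized land) yields more runoff for the same storm, which is why the few-unit CN shifts reported above matter for design discharges.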
Kowalik, William S.; Marsh, Stuart E.; Lyon, Ronald J. P.
1982-01-01
A method for estimating the reflectance of ground sites from satellite radiance data is proposed and tested. The method uses the known ground reflectance of several sites and satellite data gathered over a wide range of solar zenith angles. The method was tested on each of 10 different Landsat images using 10 small sites in the Walker Lake, Nevada area. Plots of raw Landsat digital numbers (DNs) versus the cosine of the solar zenith angle (cos Z) for the test areas are linear, and the average correlation coefficients of the data for Landsat bands 4, 5, 6, and 7 are 0.94, 0.93, 0.94, and 0.94, respectively. Ground reflectance values for the 10 sites are proportional to the slope of the DN versus cos Z relation at each site. The slopes of the DN versus cos Z relations for seven additional sites in Nevada and California were used to estimate the ground reflectances of those sites. The estimates for nearby sites are in error by an average of 1.2%, and those for more distant sites by an average of 5.1%. The method can successfully estimate the reflectance of sites outside the original scene, but extrapolation of the reflectance estimation equations to other areas may violate assumptions of atmospheric homogeneity.
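The DN versus cos Z fit is ordinary least squares; a minimal sketch of that step (the constant k linking slope to reflectance is a hypothetical stand-in for the paper's calibration against sites of known reflectance):

```python
def fit_dn_vs_cosz(cos_z, dn):
    """Least-squares fit DN = a + b*cos(Z); the method takes site
    reflectance as proportional to the slope b."""
    n = len(dn)
    mx = sum(cos_z) / n
    my = sum(dn) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(cos_z, dn))
    sxx = sum((x - mx) ** 2 for x in cos_z)
    b = sxy / sxx          # slope: DN change per unit cos Z
    a = my - b * mx        # intercept
    return a, b

def estimate_reflectance(slope, k):
    """Hypothetical proportionality constant k, calibrated from the
    known-reflectance sites (illustrative, not the paper's value)."""
    return k * slope
```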
Results and evaluation of a survey to estimate Pacific walrus population size, 2006
Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley
2011-01-01
In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
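The radio-tag adjustment described above is, in essence, a division of the on-ice estimate by the proportion of tagged walruses that were hauled out during the survey; a minimal sketch with illustrative numbers:

```python
def adjust_for_haulout(on_ice_estimate, tagged_on_ice, tagged_total):
    """Scale an on-ice count up to the whole population by the tag-based
    proportion hauled out. If half the tagged animals were on ice during
    the survey, the on-ice estimate covers about half the population."""
    p_on_ice = tagged_on_ice / tagged_total
    return on_ice_estimate / p_on_ice
```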
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2005-01-01
Recent advancements in the molecular Rayleigh scattering based technique have allowed simultaneous measurement of velocity and density fluctuations at high sampling rates. The technique was used to investigate unheated, high subsonic and supersonic, fully expanded free jets in the Mach number range 0.8 to 1.8. The difference between the Favre-averaged and Reynolds-averaged axial velocity and axial component of the turbulent kinetic energy was found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
SALGADO, MARÍA V.; PÉREZ, ADRIANA; ABAD-VIVERO, ERIKA N.; THRASHER, JAMES F.; SARGENT, JAMES D.; MEJÍA, RAÚL
2016-01-01
Background: Smoking scenes in movies promote adolescent smoking onset; thus, there is increasing interest in analyzing the smoking imagery in movies that actually reaches adolescents. Objective: The aim of this study was to estimate the level of exposure to images of smoking in movies watched by adolescents in Argentina and Mexico. Methods: First-year secondary school students from Argentina and Mexico were surveyed. The 100 highest-grossing films from each year of the period 2009-2013 (Argentina) and 2010-2014 (Mexico) were analyzed. Each participant was assigned a random sample of 50 of these movies and was asked whether he/she had watched them. The total number of adolescents who had watched each movie in each country was estimated and multiplied by the number of smoking scenes (occurrences) in that movie to obtain the number of gross smoking impressions seen by secondary school adolescents in each country. Results: Four hundred twenty-two movies were analyzed in Argentina and 433 in Mexico. Adolescents in each country were estimated to have been exposed to more than 500 million smoking impressions, averaging 128 and 121 minutes of smoking scenes seen by each Argentine and Mexican adolescent, respectively. Although movies rated 15, 16, and 18 had more smoking scenes on average, movies rated for younger teenagers accounted for the highest number of smoking scenes watched by the students (67.3% in Argentina and 54.4% in Mexico) because of their larger audience. Conclusion: At the population level, movies aimed at children are responsible for the highest tobacco burden seen by adolescents. PMID:27354756
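The gross-impressions calculation reduces to a sum of products over movies; a minimal sketch with illustrative figures:

```python
def gross_impressions(movies):
    """Total smoking impressions: for each movie, multiply the estimated
    number of adolescent viewers by its count of smoking occurrences,
    then sum over all movies."""
    return sum(viewers * occurrences for viewers, occurrences in movies)
```

For example, a movie seen by 1,000 adolescents with 5 smoking occurrences contributes 5,000 impressions, which is why widely watched youth-rated films dominate the total even with fewer scenes per film.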
Use of the SONET score to evaluate Urgent Care Center overcrowding: a prospective pilot study
Wang, Hao; Robinson, Richard D; Cowden, Chad D; Gorman, Violet A; Cook, Christopher D; Gicheru, Eugene K; Schrader, Chet D; Jayswal, Rani D; Zenarosa, Nestor R
2015-01-01
Objectives To derive a tool to determine Urgent Care Center (UCC) crowding and investigate the association between different levels of UCC overcrowding and negative patient care outcomes. Design Prospective pilot study. Setting Single centre study in the USA. Participants 3565 patients who registered at UCC during the 21-day study period were included. Patients who had no overcrowding statuses estimated due to incomplete collection of operational variables at the time of registration were excluded in this study. 3139 patients were enrolled in the final data analysis. Primary and secondary outcome measures A crowding estimation tool (SONET: Severely overcrowded, Overcrowded and Not overcrowded Estimation Tool) was derived using the linear regression analysis. The average length of stay (LOS) in UCC patients and the number of left without being seen (LWBS) patients were calculated and compared under the three different levels of UCC crowding. Results Four independent operational variables could affect the UCC overcrowding score including the total number of patients, the number of results pending for patients, the number of patients in the waiting room and the longest time a patient was stationed in the waiting room. In addition, UCC overcrowding was associated with longer average LOS (not overcrowded: 133±76 min, overcrowded: 169±79 min, and severely overcrowded: 196±87 min, p<0.001) and an increased number of LWBS patients (not overcrowded: 0.28±0.69 patients, overcrowded: 0.64±0.98, and severely overcrowded: 1.00±0.97). Conclusions The overcrowding estimation tool (SONET) derived in this study might be used to determine different levels of crowding in a high volume UCC setting. It also showed that UCC overcrowding might be associated with negative patient care outcomes. PMID:25872940
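The four operational variables above enter SONET through a linear regression; a minimal sketch of such a score, with placeholder weights (the published coefficients come from the paper's fitted model and are not reproduced here):

```python
def sonet_score(total_patients, results_pending, waiting_room,
                longest_wait_min, weights=(1.0, 1.0, 1.0, 0.5),
                intercept=0.0):
    """Illustrative linear crowding score built from the four operational
    variables the study found significant. The default weights and
    intercept are placeholders, not the paper's regression coefficients;
    higher scores would map to 'overcrowded'/'severely overcrowded'
    bands via fitted cutoffs."""
    w1, w2, w3, w4 = weights
    return (intercept
            + w1 * total_patients
            + w2 * results_pending
            + w3 * waiting_room
            + w4 * longest_wait_min)
```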
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
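Under the Poisson assumption discussed above, the relative uncertainty of a count N is sqrt(N)/N = 1/sqrt(N); a one-line sketch showing why counts above 4000 keep it under 2%:

```python
def poisson_relative_uncertainty(counts):
    """Relative (normalized) uncertainty of an x-ray count under the
    Poisson assumption: sigma/N = sqrt(N)/N = 1/sqrt(N)."""
    return counts ** -0.5
```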
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). To better understand these non-point source loads, we estimated their respective source functions by developing a methodology for counting individuals. The method uses camera images of the beach taken at regular time intervals to determine the number of human and animal visitors, and translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094
Model selection bias and Freedman's paradox
Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.
2010-01-01
In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced by using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
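A minimal sketch of the AIC-based weighting that underlies model averaging (standard Akaike-weight formulas; the paper's estimator applies them across a set of fitted linear regressions):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged(estimates, aic_values):
    """AIC-weighted model-averaged estimate of a parameter: instead of
    trusting one 'best' model, average over models by their weights."""
    w = akaike_weights(aic_values)
    return sum(wi * e for wi, e in zip(w, estimates))
```

Averaging in this way dilutes the influence of any single spuriously selected model, which is the mechanism behind the improved confidence interval coverage reported above.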
Brysbaert, Marc; Stevens, Michaël; Mandera, Paweł; Keuleers, Emmanuel
2016-01-01
Based on an analysis of the literature and a large scale crowdsourcing experiment, we estimate that an average 20-year-old native speaker of American English knows 42,000 lemmas and 4,200 non-transparent multiword expressions, derived from 11,100 word families. The numbers range from 27,000 lemmas for the lowest 5% to 52,000 for the highest 5%. Between the ages of 20 and 60, the average person learns 6,000 extra lemmas or about one new lemma every 2 days. The knowledge of the words can be as shallow as knowing that the word exists. In addition, people learn tens of thousands of inflected forms and proper nouns (names), which account for the substantially high numbers of ‘words known’ mentioned in other publications. PMID:27524974
2010-01-01
Background Estimating the economic impact of influenza is complicated because the disease may have non-specific symptoms, and many patients with influenza are registered with other diagnoses. Furthermore, in some countries like Norway, employees can be on paid sick leave for a specified number of days without a doctor's certificate ("self-reported sick leave") and these sick leaves are not registered. Both problems result in gaps in the existing literature: costs associated with influenza-related illness and self-reported sick leave are rarely included. The aim of this study was to improve estimates of total influenza-related health-care costs and productivity losses by estimating these missing costs. Methods Using Norwegian data, the weekly numbers of influenza-attributable hospital admissions and certified sick leaves registered with other diagnoses were estimated from influenza-like illness surveillance data using quasi-Poisson regression. The number of self-reported sick leaves was estimated using a Monte-Carlo simulation model of illness recovery curves based on the number of certified sick leaves. A probabilistic sensitivity analysis was conducted on the economic outcomes. Results During the 1998/99 through 2005/06 influenza seasons, the models estimated an annual average of 2700 excess influenza-associated hospitalizations in Norway, of which 16% were registered as influenza, 51% as pneumonia and 33% were registered with other diagnoses. The direct cost of seasonal influenza totaled US$22 million annually, including costs of pharmaceuticals and outpatient services. The annual average number of working days lost was predicted at 793 000, resulting in an estimated productivity loss of US$231 million. Self-reported sick leave accounted for approximately one-third of the total indirect cost. During a pandemic, the total cost could rise to over US$800 million. 
Conclusions Influenza places a considerable burden on patients and society with indirect costs greatly exceeding direct costs. The cost of influenza-attributable complications and the cost of self-reported sick leave represent a considerable part of the economic burden of influenza. PMID:21106057
The effect of airline deregulation on automobile fatalities.
Bylow, L F; Savage, I
1991-10-01
This paper attempts to quantify the effects of airline deregulation in the United States on intercity automobile travel and consequently on the number of highway fatalities. A demand model is constructed for auto travel, which includes variables representing the price and availability of air service. A reduced form model of the airline market is then estimated. Finding that deregulation has decreased airfares and increased flights, it is estimated that auto travel has been reduced by 2.2% per year on average. Given assumptions on the characteristics of drivers switching modes and the types of roads they drove on, the number of automobile fatalities averted since 1978 is estimated to be in the range 200-300 per year.
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2009-01-01
Examination of the decadal variation of the number of El Nino onsets and El Nino-related months for the interval 1950-2008 clearly shows that the variation is better explained as one expressing normal fluctuation and not one related to global warming. Comparison of the recurrence periods for El Nino onsets against event durations for moderate/strong El Nino events results in a statistically important relationship that allows for the possible prediction of the onset for the next anticipated El Nino event. Because the last known El Nino was a moderate event of short duration (6 months), having onset in August 2006, unless it is a statistical outlier, one expects the next onset of El Nino probably in the latter half of 2009, with peak following in November 2009-January 2010. If true, then initial early extended forecasts of frequencies of tropical cyclones for the 2009 North Atlantic basin hurricane season probably should be revised slightly downward from near average-to-above average numbers to near average-to-below average numbers of tropical cyclones in 2009, especially as compared to averages since 1995, the beginning of the current high-activity interval for tropical cyclone activity.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges, so there is uncertainty as to the accuracy of the resulting estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey regional regression equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks, as though no field data were available, and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent.
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454.
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
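For reference, the Rational Method evaluated above is a single product; a minimal sketch (the standard Q = CiA relation, with i in inches per hour and A in acres yielding Q approximately in cubic feet per second):

```python
def rational_peak_discharge(c, intensity_in_per_hr, area_acres):
    """Rational Method peak discharge Q = C * i * A. With i in in/hr and
    A in acres, the unit conversion factor is close to 1, so Q is
    approximately in cubic feet per second. C is the dimensionless runoff
    coefficient (e.g. 0.20-0.60 as reported in the study)."""
    return c * intensity_in_per_hr * area_acres
```

This single-coefficient structure is why calibrating C per watershed, as described above, so strongly reduces the error in the peak discharge estimates.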
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
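For context, the single-reader nonparametric AUC underlying both estimators is the two-sample Mann-Whitney statistic; a minimal sketch of that building block (not the one-shot variance formula itself):

```python
def empirical_auc(signal_scores, noise_scores):
    """Nonparametric (Mann-Whitney) AUC estimate: the fraction of
    signal/noise score pairs ranked correctly, with ties counted 1/2."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))

def reader_averaged_auc(per_reader_scores):
    """Reader-averaged AUC: mean of the per-reader AUCs, whose MRMC
    variance (reader plus case components) is what the one-shot
    estimator targets."""
    aucs = [empirical_auc(sig, noi) for sig, noi in per_reader_scores]
    return sum(aucs) / len(aucs)
```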
Search for Protoplanetary and Debris Disks Around Millisecond Pulsars
1995-10-06
Associated publication: 1996, ApJ, 460, 902F.
Ethacrynic Acid Reduces Sulfur Mustard Toxicity
Gross, CL; Nipwoda, MT; Nealley
2004-01-01
International Symposium on 21st Century Challenges in Computational Engineering and Science
2010-02-26
Grant Number: FA9550-09-1-0648.
Geospace Plasma Dynamics
2013-05-22
Final Report, covering 1 Oct 2007 to 30 Sep 2012.
Reactivating Neural Circuits with Clinically Accessible Stimulation to Restore Hand Function in Persons with Tetraplegia
2017-09-01
Award Number: W81XWH-16-1-0395.
ERIC Educational Resources Information Center
Hilgenkamp, Thessa; Van Wijck, Ruud; Evenhuis, Heleen
2012-01-01
The minimum number of days of pedometer monitoring needed to estimate valid average weekly step counts and reactivity was investigated for older adults with intellectual disability. Participants (N = 268) with borderline to severe intellectual disability ages 50 years and older were instructed to wear a pedometer for 14 days. The outcome measure…
Educating Normal Breast Mucosa to Prevent Breast Cancer
2013-05-01
Estimating the Critical Point of Crowding in the Emergency Department for the Warning System
NASA Astrophysics Data System (ADS)
Chang, Y.; Pan, C.; Tseng, C.; Wen, J.
2011-12-01
The purpose of this study is to derive, from the admission/discharge rates of patient flow, a function that estimates a "critical point" serving as a reference for warning systems for crowding in the emergency department (ED) of a hospital or medical clinic. In this study, an "Input-Throughput-Output" model was used in our mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp × B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, the warning system can be started at an appropriate time so that the necessary emergency response can be planned and the patient process facilitated more smoothly. It was concluded that ED crowding can be quantified using the average value of Cp, and that this value can serve as a reference for medical staff in providing optimal emergency medical treatment. Therefore, additional practical work should be launched to collect more precise quantitative data.
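Solving the flow balance above for Cp gives Cp = (dPin/dt - dPwait/dt - dPout/dt) / B; a minimal sketch:

```python
def retention_rate_per_bed(reg_rate, wait_rate, discharge_rate,
                           licensed_beds):
    """Solve the abstract's flow balance
        dPin/dt = dPwait/dt + Cp*B + dPout/dt
    for Cp, the retention rate per treatment bed. Rates are patients per
    unit time; licensed_beds is B."""
    return (reg_rate - wait_rate - discharge_rate) / licensed_beds
```

For example, 20 registrations per hour against 4 joining the waiting room and 10 discharges per hour, across 12 beds, leaves a retention rate of 0.5 patients per bed per hour.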
Management of Noncompressible Hemorrhage Using Vena Cava Ultrasound
2017-10-01
Award Number: W81XWH-15-1-0709. Principal Investigator: Donald...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... information as follows: Table 1 -- Estimated Annual Reporting Burden tabulates, by activity under 21 CFR Part 822, the number of respondents, responses per respondent, total annual responses, average burden per response, and total hours (for example, periodic reports under Sec. 822.38: 131 respondents, 3 responses each, 393 total responses, 40 hours per response, 15,720 hours), for a total estimated burden of 33,360 hours. There are no.... Explanation of Reporting Burden Estimate: the burden captured in Table 1 of this document is based on the data...
Pathfinder. Volume 9, Number 2, March/April 2011
2011-03-01
...vides audio, video, desktop sharing and chat." The platform offers a real-time, Web-based presentation tool to create information and general...
Floristic summary of 22 National Parks in the Midwestern United States
Bennett, J.P.
1996-01-01
Biological diversity is studied at many geographical scales, but specimen collecting is invariably done at a local level. Collecting of animal and plant specimens leads to the compilation of checklists for multiple small areas, which are sometimes merged to produce larger, regional checklists. Such an approach was employed to study the regional vascular flora of 22 national parks of the midwestern United States. The total number of plant taxa (species level and below) ranged from 86 at Hopewell Culture National Historical Park to 1,399 at Indiana Dunes National Lakeshore and averaged 520 per park. Infraspecific taxa were 12% or less of all taxa at all parks and averaged 7%. Genera per park ranged from 70 to 562, and families ranged from 41 to 145. Non-native species averaged 95 per park, or about 27% of each park's taxa on average. The aggregated regional flora contained just over 2,900 taxa, 828 genera, and 160 families. Eleven percent of the taxa were below the species level. Almost 17% of the taxa were non-native, a relatively large percentage, but not outside the range of percentages reported in the literature. The observed and estimated numbers of taxa for this region were in good agreement with other estimates for these latitudes and for a standard regional size. However, the parks do not represent their respective state floras very well when aggregated at that scale. Indiana was the best represented state, with 65% of the state flora found in the parks; only 25% of each state's flora was represented by parks in Iowa, Kansas, and Nebraska, and the average representation was only 42%.
The application of a geometric optical canopy reflectance model to semiarid shrub vegetation
NASA Technical Reports Server (NTRS)
Franklin, Janet; Turner, Debra L.
1992-01-01
Estimates are obtained of the average plant size and density of shrub vegetation on the basis of SPOT High Resolution Visible Multispectral imagery from Chihuahuan desert areas, using the Li and Strahler (1985) model. The aggregated predictions for a number of stands within a class were accurate to within one or two standard errors of the observed average value. Accuracy was highest for those classes of vegetation where the nonrandom scrub pattern was characterized for the class on the basis of the average coefficient of determination of density.
Proportion of recovered waterfowl bands reported
Geis, A.D.; Atwood, E.L.
1961-01-01
Data from the annual mail survey of waterfowl hunters in the United States were used to estimate the total numbers of banded waterfowl that were shot. These estimates were compared with Banding Office records to estimate the proportion of recovered bands that was reported. On the average, about two banded birds were recovered for each one reported. The proportion reported was higher for some areas and for some species than for others. The proportion reported was higher when more of the reports came through employees of conservation agencies.
Keich, Uri; Noble, William Stafford
2017-01-01
Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications that calibration yields, and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values, as well as its variability. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration, we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and, importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
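The core TDC estimator that these methods refine can be sketched in a few lines. This is the standard conservative TDC formula (decoy wins estimate false target wins, with a +1 correction); the scores are made up for illustration, and a-TDC's averaging over extra decoy draws is not reproduced here.

```python
def tdc_fdr(target_scores, decoy_scores, threshold):
    """Standard target-decoy competition FDR estimate at a score threshold.

    Decoy wins above the threshold estimate the number of false target
    wins; the +1 makes the estimate conservative.
    """
    targets = sum(1 for s in target_scores if s >= threshold)
    decoys = sum(1 for s in decoy_scores if s >= threshold)
    if targets == 0:
        return 0.0
    return min(1.0, (decoys + 1) / targets)

# Toy example: scores of target wins and decoy wins after competition.
targets = [9.1, 8.7, 8.2, 7.9, 7.5, 6.8, 6.1, 5.9, 4.2, 3.3]
decoys = [5.5, 4.0, 3.1]
print(tdc_fdr(targets, decoys, 5.0))  # 8 targets, 1 decoy -> (1+1)/8 = 0.25
```

In practice one sweeps the threshold and reports the largest identification list whose estimated FDR stays below the chosen level (0.05 in the abstract).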
Full-chain health impact assessment of traffic-related air pollution and childhood asthma.
Khreis, Haneen; de Hoogh, Kees; Nieuwenhuijsen, Mark J
2018-05-01
Asthma is the most common chronic disease in children. Traffic-related air pollution (TRAP) may be an important exposure contributing to its development. In the UK, Bradford is a deprived city suffering from childhood asthma rates higher than national and regional averages, and TRAP is of particular concern to the local communities. We estimated the burden of childhood asthma attributable to air pollution and specifically TRAP in Bradford. Air pollution exposures were estimated using a newly developed full-chain exposure assessment model and an existing land-use regression model (LUR). We estimated childhood population exposure to NOx and, by conversion, NO2 at the smallest census area level using a newly developed full-chain model knitting together distinct traffic (SATURN), vehicle emission (COPERT) and atmospheric dispersion (ADMS-Urban) models. We compared these estimates with measurements and estimates from ESCAPE's LUR model. Using the UK incidence rate for childhood asthma, meta-analytical exposure-response functions, and estimates from the two exposure models, we estimated the annual number of asthma cases attributable to NO2 and NOx in Bradford, and the annual number of asthma cases specifically attributable to traffic. The annual average census tract levels of NO2 and NOx estimated using the full-chain model were 15.41 and 25.68 μg/m3, respectively. On average, 2.75 μg/m3 NO2 and 4.59 μg/m3 NOx were specifically contributed by traffic, without minor roads and cold starts. The annual average census tract levels of NO2 and NOx estimated using the LUR model were 21.93 and 35.60 μg/m3, respectively. The results indicated that up to 687 (or 38% of all) annual childhood asthma cases in Bradford may be attributable to air pollution. Up to 109 cases (6%) and 219 cases (12%) may be specifically attributable to TRAP, with and without minor roads and cold starts, respectively.
This is the first study undertaking full-chain health impact assessment of TRAP and childhood asthma in a disadvantaged population with public concern about TRAP. It further adds to scarce literature exploring the impact of different exposure assessments. In conservative estimates, air pollution and TRAP are estimated to cause a large, but largely preventable, childhood asthma burden. Future progress with childhood asthma requires a move beyond the prevalent disease control-based approach toward asthma prevention. Copyright © 2018 Elsevier Ltd. All rights reserved.
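The attributable-case arithmetic behind estimates like these combines an exposure-response relative risk with the population attributable fraction (PAF). The sketch below shows the generic calculation with a hypothetical child population, incidence rate, and relative risk per 10 μg/m3, not Bradford's actual inputs.

```python
import math

def attributable_cases(n_children, incidence_rate, rr_per_unit, unit, exposure):
    """Annual cases attributable to an exposure via the population
    attributable fraction: PAF = (RR - 1) / RR, with the relative risk
    scaled log-linearly to the exposure level. All inputs illustrative.
    """
    rr = math.exp(math.log(rr_per_unit) * exposure / unit)
    paf = (rr - 1) / rr
    return n_children * incidence_rate * paf

# Hypothetical: 80,000 children, incidence 9 per 1,000 per year,
# RR of 1.05 per 10 ug/m3 NO2, mean exposure 15.41 ug/m3.
cases = attributable_cases(80_000, 0.009, 1.05, 10.0, 15.41)
print(round(cases, 1))
```

The log-linear scaling mirrors the usual way meta-analytic exposure-response functions (reported per fixed increment) are applied to an estimated exposure level.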
Cheung, Pui Kwan; Fok, Lincoln
2017-10-01
Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density, population data, the facial scrub usage rate, the sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and the estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment.
Copyright © 2017 Elsevier Ltd. All rights reserved.
Cervantes, Claudio Alberto Dávila; Botero, Marcela Agudelo
2014-05-01
The objective of this study was to calculate average years of life lost due to breast and cervical cancer in Mexico in 2000 and 2010. Data on mortality in women aged between 20 and 84 years was obtained from the National Institute for Statistics and Geography. Age-specific mortality rates and average years of life lost, which is an estimate of the number of years that a person would have lived if he or she had not died prematurely, were estimated for both diseases. Data was disaggregated into five-year age groups and socioeconomic status based on the 2010 marginalization index obtained from the National Population Council. A decrease in average years of life lost due to cervical cancer (37.4%) and an increase in average years of life lost due to breast cancer (8.9%) was observed during the period studied. Average years of life lost due to cervical cancer was greater among women living in areas with a high marginalization index, while average years of life lost due to breast cancer was greater in women from areas with a low marginalization index.
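The average-years-of-life-lost measure described above is deaths weighted by how far short of a reference age each death falls. A minimal sketch, with made-up death counts per five-year band and the 84-year upper bound of the study's age range as the reference age (the study's own reference and data are not reproduced here):

```python
def average_yll(deaths_by_age, life_limit=84.0):
    """Average years of life lost: for each (start, end) five-year age
    band, deaths are assumed to occur at the band midpoint, and each
    death contributes (life_limit - midpoint) years lost."""
    total_deaths = 0
    total_yll = 0.0
    for (start, end), deaths in deaths_by_age.items():
        midpoint = (start + end + 1) / 2  # e.g. (40, 44) -> 42.5
        total_yll += deaths * max(life_limit - midpoint, 0.0)
        total_deaths += deaths
    return total_yll / total_deaths

# Hypothetical deaths by age band, not the Mexican registry data.
deaths = {(40, 44): 120, (50, 54): 200, (60, 64): 180}
print(round(average_yll(deaths), 1))  # (4980 + 6300 + 3870) / 500 = 30.3
```

Comparing this statistic across years and across marginalization strata is exactly the disaggregation the abstract describes.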
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
Ryde, Ulf
2017-11-14
Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to study enzymatic reactions. They are often based on a set of minimized structures obtained from snapshots of a molecular dynamics simulation to include some dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ serves as an excellent convergence criterion.
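The poor conditioning described above is easy to reproduce numerically. This sketch (assuming kT ≈ 2.494 kJ/mol at 300 K; the σ of 7 kJ/mol and 20 snapshots are illustrative choices) compares the analytic exponential average of a Gaussian, μ − σ²/(2kT), with estimates from small snapshot sets, which are systematically biased high because the rare low-energy tail dominates the true average:

```python
import math
import random

def exp_average(energies, kT=2.494):  # kT in kJ/mol at ~300 K
    """Exponential (Boltzmann) average: -kT * ln(mean(exp(-E/kT))),
    shifted by the minimum energy for numerical stability."""
    e_min = min(energies)
    mean_boltz = sum(math.exp(-(e - e_min) / kT) for e in energies) / len(energies)
    return e_min - kT * math.log(mean_boltz)

# Snapshot energies ~ Gaussian(mu, sigma), as in the paper's assumption.
random.seed(1)
mu, sigma, n_snapshots = 0.0, 7.0, 20
estimates = [exp_average([random.gauss(mu, sigma) for _ in range(n_snapshots)])
             for _ in range(1000)]
analytic = mu - sigma**2 / (2 * 2.494)  # exact value for a Gaussian
mean_est = sum(estimates) / len(estimates)
print(f"analytic: {analytic:.1f} kJ/mol, "
      f"mean of 20-snapshot estimates: {mean_est:.1f} kJ/mol")
```

With σ nearly 3 kT, the 20-snapshot estimates sit several kJ/mol above the analytic value, illustrating why far more snapshots are needed than is usually assumed.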
Angeletti, C; Pezzotti, P; Antinori, A; Mammone, A; Navarra, A; Orchi, N; Lorenzini, P; Mecozzi, A; Ammassari, A; Murachelli, S; Ippolito, G; Girardi, E
2014-03-01
Combination antiretroviral therapy (cART) has become the main driver of total costs of caring for persons living with HIV (PLHIV). The present study estimated the short/medium-term cost trends in response to the recent evolution of national guidelines and regional therapeutic protocols for cART in Italy. We developed a deterministic mathematical model that was calibrated using epidemic data for Lazio, a region located in central Italy with about six million inhabitants. In the Base Case Scenario, the estimated number of PLHIV in the Lazio region increased over the period 2012-2016 from 14 414 to 17 179. Over the same period, the average projected annual cost for treating the HIV-infected population was €147.0 million. An earlier cART initiation resulted in a rise of 2.3% in the average estimated annual cost, whereas an increase from 27% to 50% in the proportion of naïve subjects starting cART with a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimen resulted in a reduction of 0.3%. Simplification strategies based on NNRTIs co-formulated in a single tablet regimen and protease inhibitor/ritonavir-boosted monotherapy produced an overall reduction in average annual costs of 1.5%. A further average saving of 3.3% resulted from the introduction of generic antiretroviral drugs. In the medium term, cost saving interventions could finance the increase in costs resulting from the inertial growth in the number of patients requiring treatment and from the earlier treatment initiation recommended in recent guidelines. © 2013 British HIV Association.
Alcohol drinking behaviour and economic cost incurred by users in Khon Kaen.
Paileeklee, Suchada; Kanato, Manop; Kaenmanee, Sumeth; McGhee, Sarah M
2010-03-01
Alcohol consumption increases health risks and social consequences. It also lowers productivity, resulting in economic losses for drinkers and the rest of society. To investigate alcohol drinking behavior and to estimate the economic cost incurred by alcohol users in Khon Kaen province in 2007, a cross-sectional survey targeting the population aged 12-65 years old was conducted in 20 communities. Data were collected using fully structured questionnaires through interviews. Among 1,053 respondents, 53.0% drank alcohol sometime in their lives (95% CI: 46.1, 59.9). The percentage of individuals drinking in the past 12 months was 43.3% (95% CI: 37.1, 49.5). The average number of drinking days in the past 12 months was 36.8 days. Most respondents drank for social activities, mainly with friends and relatives. Individual costs of alcohol consumption varied greatly. The weighted average cost in 2007 was 975.5 Baht per drinker. The estimated overall cost of alcohol consumption in Khon Kaen, in 2007, was 691.2 million Baht (95% CI: 280.0, 1,102.3 million), or 502.9 Baht per capita. More than half of the Khon Kaen population drank alcohol sometime in their lives and 43.3% were current drinkers. The average number of drinking days in the past 12 months was 36.8 days. The estimated cost of alcohol consumption in Khon Kaen province was enormous.
Wu, Yifeng; Zhao, Fengmin; Qian, Xujun; Xu, Guozhang; He, Tianfeng; Shen, Yueping; Cai, Yibiao
2015-07-01
To describe the daily average concentration of sulfur dioxide (SO2) in Ningbo and to analyze its health impact on upper respiratory disease, outpatient logs were matched with air pollutant monitoring data for 2011-2013, and distributed lag non-linear models were used to analyze the relative risk of upper respiratory visits associated with SO2, the excess risk, and the inferred number of patients attributable to SO2 pollution. The daily average concentration of SO2 did not exceed the limit value for Class II areas. The correlation coefficient between the daily number of upper respiratory outpatients and the daily average SO2 concentration was 0.44, the excess risk was 10% to 18%, and most SO2 effects lagged by 4 to 6 days. An estimated 30% of all upper respiratory outpatient visits were attributable to SO2 pollution. Although the daily average concentration of SO2 did not exceed the standard in any of the 3 years, lagged health impacts were still observed.
30 CFR 207.1 - Required recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... part have been approved by OMB under 44 U.S.C. 3501 et seq. and assigned OMB Clearance Number 1010-0061... Management Act of 1982, 30 U.S.C. 1701 et seq. (b) Public reporting burden is estimated to average 30 minutes...
Ziadi, C; Mocé, M L; Laborda, P; Blasco, A; Santacreu, M A
2013-07-01
The aim of this work was to estimate direct and correlated responses in survival rates in an experiment of selection for ovulation rate (OR) and litter size (LS) in a line of rabbits (OR_LS). From generation 0 to 6 (first selection period), females were selected only for second gestation OR estimated by laparoscopy. From generation 7 to 13 (second selection period), a 2-stage selection for OR and LS was performed. In stage 1, females having the greatest OR at second gestation were selected. In stage 2, selection was for the greatest average LS of the first 2 parities of the females selected in stage 1. Total selection pressure in females was about 30%. The line had approximately 17 males and 75 females per generation. Traits recorded were OR estimated as the number of corpora lutea in both ovaries, number of implanted embryos (IE) estimated as the number of implantation sites, LS estimated as total number of rabbits born recorded at each parity, embryo survival (ES) estimated as IE/OR, fetal survival (FS) estimated as LS/IE, and prenatal survival (PS) estimated as LS/OR. Data were analyzed using Bayesian methodology. The estimated heritabilities of LS, OR, IE, ES, FS, and PS were 0.07, 0.21, 0.10, 0.07, 0.12, and 0.16, respectively. Direct and correlated responses from this study were estimated in each period of selection as the difference between the average genetic values of the last and first generation. In the first selection period, OR increased 1.36 ova, but no correlated response was observed in LS due to a decrease in FS. Correlated responses for IE, ES, FS, and PS in the first selection period were 1.11, 0.00, -0.04, and -0.01, respectively. After 7 generations of 2-stage selection for OR and LS, OR increased 1.0 ova and response in LS was 0.9 kits. Correlated responses for IE, ES, FS, and PS in the second selection period were 1.14, 0.02, 0.02, and 0.07, respectively. Two-stage selection for OR and LS can be a promising procedure to improve LS in rabbits.
The burden of gunshot injuries on orthopaedic healthcare resources in South Africa.
Martin, Case; Thiart, Gerhard; McCollum, Graham; Roche, Stephen; Maqungo, Sithombo
2017-06-30
Injuries inflicted by gunshot wounds (GSWs) are an immense burden on the South African (SA) healthcare system. In 2005, Allard and Burch estimated SA state hospitals treated approximately 127 000 firearm victims annually and concluded that the cost of treating an abdominal GSW was approximately USD1 467 per patient. While the annual number of GSW injuries has decreased over the past decade, an estimated 54 870 firearm-related injuries occurred in SA in 2012. No study has estimated the burden of these GSWs from an orthopaedic perspective. To estimate the burden and average cost of treating GSW victims requiring orthopaedic interventions in an SA tertiary level hospital. This retrospective study surveyed more than 1 500 orthopaedic admissions over a 12-month period (2012) at Groote Schuur Hospital, Cape Town, SA. Chart review subsequently yielded data that allowed analysis of cost, theatre time, number and type of implants, duration of admission, diagnostic imaging studies performed, blood products used, laboratory studies ordered and medications administered. A total of 111 patients with an average age of 28 years (range 13 - 74) were identified. Each patient was hit by an average of 1.69 bullets (range 1 - 7). These patients sustained a total of 147 fractures, the majority in the lower extremities. Ninety-five patients received surgical treatment for a total of 135 procedures, with a cumulative surgical theatre time of >306 hours. Theatre costs, excluding implants, were in excess of USD94 490. Eighty of the patients received a total of 99 implants during surgery, which raised theatre costs an additional USD53 381 cumulatively, or USD667 per patient. Patients remained hospitalised for an average of 9.75 days, and total ward costs exceeded USD130 400. Individual patient costs averaged about USD2 940 (ZAR24 945) per patient. This study assessed the burden of orthopaedic firearm injuries in SA. 
It was estimated that on average, treating an orthopaedic GSW patient cost USD2 940, used just over 3 hours of theatre time per operation, and necessitated a hospital bed for an average period of 9.75 days. Improved understanding of the high incidence of orthopaedic GSWs treated in an SA tertiary care trauma centre and the costs incurred will help the state healthcare system better prioritise orthopaedic trauma funding and training opportunities, while also supporting cost-saving measures, including redirection of financial resources to primary prevention initiatives.
Abraham, Sara A; Kearfott, Kimberlee J; Jawad, Ali H; Boria, Andrew J; Buth, Tobias J; Dawson, Alexander S; Eng, Sheldon C; Frank, Samuel J; Green, Crystal A; Jacobs, Mitchell L; Liu, Kevin; Miklos, Joseph A; Nguyen, Hien; Rafique, Muhammad; Rucinski, Blake D; Smith, Travis; Tan, Yanliang
2017-03-01
Optically-stimulated luminescent dosimeters are capable of being interrogated multiple times post-irradiation. Each interrogation removes a fraction of the signal stored within the optically-stimulated luminescent dosimeter. This signal loss must be corrected to avoid systematic errors in estimating the average signal of a series of optically-stimulated luminescent dosimeter interrogations and requires a minimum number of consecutive readings to determine an average signal that is within a desired accuracy of the true signal with a desired statistical confidence. This paper establishes a technical basis for determining the required number of readings for a particular application of these dosimeters when using certain OSL dosimetry systems.
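The correction described above can be illustrated with a simple geometric depletion model: if each stimulation removes a fixed fraction f of the stored signal, the i-th reading is scaled back by (1 − f)^i before averaging. The 0.25% depletion per read below is a hypothetical figure for illustration; real readers must be characterized individually.

```python
def depletion_corrected_average(readings, depletion_per_read):
    """Average OSL signal with per-read depletion corrected out.

    Reading i (0-indexed) has been attenuated by (1 - f)**i relative to
    the originally stored signal, so it is divided by that factor.
    """
    corrected = [r / (1 - depletion_per_read) ** i for i, r in enumerate(readings)]
    return sum(corrected) / len(corrected)

# A stored signal of 1000 units depleting 0.25% per read, no other noise:
f = 0.0025
raw = [1000 * (1 - f) ** i for i in range(10)]
print(round(depletion_corrected_average(raw, f), 1))  # recovers 1000.0
```

With real read-to-read noise added, the number of corrected readings to average follows from the usual confidence-interval arithmetic on their standard deviation, which is the technical basis the paper establishes.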
Obtaining Acoustic Cue Rate Estimates for Some Mysticete Species Using Existing Data
2015-09-30
several species of endangered mysticete whales on and near U.S. Navy ranges, using existing recordings from both the SCORE and PMRF hydrophone...Mysticete Species using Existing Data Tyler A. Helble Marine Mammals and Autonomous Underwater Vehicles, Code 56440 Space and Naval Warfare Systems...acoustics, one must know the species -specific average cue rate, which is the average number of calls produced per animal per time. The cue rate can
Respiratory Highlights, 2015 - 2016 Influenza Season (4 October 2015 - 1 October 2016)
2016-12-09
REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour ...Flu Report 3. DATES COVERED (From – To) October 2015 – October 2016 4 . TITLE AND SUBTITLE Respiratory Highlights, 2015-2016 Influenza Season... 4 October 2015 - 1 October 2016) 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Jeffrey Thervil 5d
Seabasing Innovation Cell Transfer of Goods at Sea’
2004-03-01
estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining...does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS. 1 . REPORT DATE (DD-MM-YYYY) 1 -Mar...2004 2. REPORT TYPE Final 3. DATES COVERED (From - To) 1 -Apr-2003 – 1 -Jul-2003 5a. CONTRACT NUMBER N0002401 WX 20594 5b. GRANT NUMBER 4. TITLE
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement.
© 2006 Springer Science+Business Media, Inc.
Best, Kate E; Glinianaia, Svetlana V; Lingam, Raghu; Morris, Joan K; Rankin, Judith
2018-05-19
Children with major congenital anomalies often require lifelong access to health and social care services. Estimating future numbers of affected individuals can aid health and social care planning. This study aimed to estimate the number of children aged 0-15 years living with spina bifida or Down syndrome in England and Wales by 2020. Cases of spina bifida and Down syndrome born during 1998-2013 were identified from the Northern Congenital Abnormality Survey and the National Down Syndrome Cytogenetic Register, respectively. The number of infants born with spina bifida during 1998-2019 was estimated by applying the average prevalence rate in the North of England to actual and projected births in England and Wales. Poisson regression was performed to estimate the number of infants born with Down syndrome in England and Wales during 1998-2013 and 2004-2019. The numbers of children aged 0-15 living with spina bifida or Down syndrome in 2014 and in 2020 were then estimated by multiplying year- and age-specific survival estimates by the number of affected births. An estimated 956 children with isolated spina bifida, 623 children with spina bifida and hydrocephalus and 11,592 children with Down syndrome aged 0-15 years will be living in England and Wales by 2020, increases of 7.2%, 12.0% and 12.7% since 2014, respectively. Due to improvements in survival, an increase in population size and changes in maternal age distribution at delivery, we anticipate further increases in the number of children living with spina bifida or Down syndrome by 2020. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
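The projection logic (affected births per cohort, carried forward by age-specific survival) can be sketched as follows. All birth counts, the prevalence, and the survival figures below are hypothetical placeholders, not the registers' data.

```python
def children_living_in_year(year, births_by_year, prevalence_per_10k, survival):
    """Children aged 0-15 living with a congenital anomaly in `year`:
    each birth cohort still in the 0-15 range contributes its affected
    births scaled by survival to the attained age."""
    total = 0.0
    for birth_year, births in births_by_year.items():
        age = year - birth_year
        if 0 <= age <= 15:
            affected = births * prevalence_per_10k / 10_000
            total += affected * survival(age)
    return total

# Hypothetical: flat 700,000 births/yr, prevalence 10 per 10,000 births,
# 97% survival in the first year, then 99.8% per year thereafter.
births = {y: 700_000 for y in range(2004, 2021)}
surv = lambda age: 0.97 if age < 1 else 0.97 * 0.95 / 0.97 * (0.998 ** (age - 1)) * (0.97 / 0.95) if False else (0.97 if age < 1 else 0.95 * 0.998 ** (age - 1))
print(round(children_living_in_year(2020, births, 10, surv)))
```

Improved survival or rising prevalence (for example, from older maternal age in the Down syndrome case) enters this calculation directly, which is why the study anticipates increases by 2020.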
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
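Under the equal-mixing assumption that this experiment tests, population estimation reduces to classic mark-recapture arithmetic. The sketch below shows the generic Lincoln-Petersen form with made-up counts; it illustrates the equal-mixing idea only, not the paper's equilibrium-capture-probability derivation.

```python
def lincoln_petersen(marked_released, captured, recaptured_marked):
    """Classic mark-recapture estimate under equal mixing: the fraction
    of marked animals in a capture sample estimates M/N, so
    N ~ M * C / R. Counts here are illustrative."""
    return marked_released * captured / recaptured_marked

# Hypothetical: 500 marked workers released; a later sample captures
# 400 workers, of which 40 carry marks -> N ~ 500 * 400 / 40 = 5000.
print(lincoln_petersen(500, 400, 40))  # -> 5000.0
```

The paper's contribution is showing when the equal-mixing premise actually holds over distance, i.e., only after P(x) flattens into its equilibrium phase.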
Population size, survival, and movements of white-cheeked pintails in Eastern Puerto Rico
Collazo, J.A.; Bonilla-Martinez, G.
2001-01-01
We estimated numbers and survival of White-cheeked Pintails (Anas bahamensis) in eastern Puerto Rico during 1996-1999. We also quantified their movements between Culebra Island and the Humacao Wildlife Refuge, Puerto Rico. Mark-resight population size estimates averaged 1020 pintails during nine, 3-month sampling periods from January 1997 to June 1999. On average, minimum regional counts were 38% lower than mark-resight estimates (mean = 631). Adult survival was 0.51 ± 0.09 (SE). This estimate is similar to those for other anatids of similar size but broader geographic distribution. The probability of pintails surviving and staying in Humacao was higher (67%) than for counterparts on Culebra (31%). The probability of surviving and moving from Culebra to Humacao (41%) was higher than from Humacao to Culebra (20%). These findings, and available information on reproduction, indicate that the Humacao Wildlife Refuge has an important role in the regional demography of pintails. Our findings on population numbers and regional survival are encouraging, given concerns about the species' status due to habitat loss and hunting. However, our outlook for the species is tempered by the remaining gaps in the population dynamics of pintails; for example, survival estimates of broods and fledglings (age 0-1) are needed for a comprehensive status assessment. Until additional data are obtained, White-cheeked Pintails should continue to be protected from hunting in Puerto Rico.
Fliege, Herbert; Grimm, Anne; Eckhardt-Henn, Annegret; Gieler, Uwe; Martin, Katharina; Klapp, Burghard F
2007-01-01
The authors surveyed physicians for frequency estimates of factitious disorder among their patients. Twenty-six physicians in independent practice and 83 senior hospital consultants in internal medicine, surgery, neurology, and dermatology participated. They completed a questionnaire including the estimated 1-year prevalence of factitious disorder among their patients. Frequency estimates averaged 1.3% (range 0.0001%-15%). The number of patients treated correlated negatively with frequency estimates. Dermatologists and neurologists gave the highest estimates. One-third of the physicians rated themselves as insufficiently informed. Frequency estimates did not differ by information level. The estimated frequency is substantial and comparable to earlier findings. The authors discuss clinical implications.
Measuring Blast-Related Intracranial Pressure Within the Human Head
2011-02-01
AWARD NUMBER: W81XWH-09- 1 -0498 TITLE: Measuring Blast-Related Intracranial Pressure Within...REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour...valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS. 1 . REPORT DATE 1 February 2011 2. REPORT TYPE Final 3. DATES
A Biophysico-computational Perspective of Breast Cancer Pathogenesis and Treatment Response
2006-03-01
of Breast Cancer Pathogenesis and Treatment Response PRINCIPAL INVESTIGATOR: Valerie M. Weaver Ph.D. CONTRACTING...burden for this collection of information is estimated to average 1 hour per response , including the time for reviewing instructions, searching existing...Biophysico-computational Perspective of Breast Cancer Pathogenesis and 5a. CONTRACT NUMBER Treatment Response 5b. GRANT NUMBER W81XWH-05-1-0330 5c
Predicting Cigarette Initiation and Re-Initiation Among Active Duty Air Force Recruits
2017-10-17
...interventions, focusing on benefits associated with being tobacco-free and avoiding the costs of tobacco use, an approach incorporating principles of behavioral... [Remainder of the record is report documentation page boilerplate.]
Preston Probe Calibrations at High Reynolds Number
NASA Technical Reports Server (NTRS)
Smits, Alexander J.
1998-01-01
The overall goal of the research effort is to study the performance of two Preston probes designed by NASA Langley Research Center across an unprecedented range of Reynolds number (based on friction velocity and probe diameter), and to perform an accurate calibration over the same Reynolds number range. Using the Superpipe facility in Princeton, two rounds of experiments were performed. In each round, for each Reynolds number, the pressure gradient, the static pressure from the Preston probes, and the total pressure from the Preston probes were measured. In the first round, three Preston probes having outer diameters of 0.058 inches, 0.083 inches, and 0.203 inches were tested over a large range of pipe Reynolds numbers. Two data reduction methods were employed: first, the static pressure measured on the Preston probe was used to calculate P (modified Preston probe configuration); second, the static pressure measured at the reference pressure tap was used to calculate P (un-modified Preston probe configuration). For both methods, the static pressure was adjusted to correspond with the static pressure at the Preston probe tip using the pressure gradient. The measurements for the Preston probes with diameters of 0.058 and 0.083 inches were performed in the test pipe before it was polished a second time; therefore, the measurements at high pipe Reynolds numbers may have been affected by roughness. In the second round of experiments, the 0.058-inch and 0.083-inch diameter un-modified probes were tested after the pipe was polished and prepared to ensure that the surface was smooth. The average velocity was estimated by assuming that the relationship between the centerline velocity and the average velocity was known, and by using a Pitot tube to measure the centerline velocity. A preliminary error estimate suggests that this approach can introduce a 1% to 2% error in the estimated average velocity.
Although the evidence on the errors attending the second data set is somewhat circumstantial and the measurements have not been repeated using a better approach, it seems probable that the correlation given applies to un-modified Preston probes over the range 6.4 < x* < 11.3.
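The quoted 1% to 2% error follows directly from error propagation through U_avg = k · U_centerline: a relative uncertainty in the assumed ratio k maps one-to-one onto the estimated average velocity. A small sketch with hypothetical numbers (neither the measured centerline velocity nor the actual ratio appears in the abstract):

```python
def avg_velocity_error(u_centerline, k, rel_err_k):
    """Absolute error in U_avg = k * U_centerline for a relative error in k."""
    return u_centerline * k * rel_err_k

u_cl = 30.0   # m/s, hypothetical Pitot centerline measurement
k = 0.85      # hypothetical average-to-centerline velocity ratio
err_1pct = avg_velocity_error(u_cl, k, 0.01)  # error if k known to 1%
err_2pct = avg_velocity_error(u_cl, k, 0.02)  # error if k known to 2%
```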
Launch and Recovery System Literature Review
2010-12-01
Enhancing Quality of Orthotic Services with Process and Outcome Information
2017-10-01
AWARD NUMBER: W81XWH-16-1-0788. TITLE: Enhancing Quality of Orthotic Services with Process and Outcome Information. [Remainder of the record is report documentation page boilerplate.]
2007-03-01
QPSK: Quadrature Phase-Shift Keying; RV: Random Variable; SHAC: Single-Hop-Observation Auto-Correlation; SINR: Signal-to-Interference... The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency... diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation...
The influence of incubation time on adenovirus quantitation in A549 cells by most probable number.
Cashdollar, Jennifer L; Huff, Emma; Ryu, Hodon; Grimm, Ann C
2016-11-01
Cell culture-based assays used to detect waterborne viruses typically call for incubating the sample for at least two weeks in order to ensure that all of the culturable virus present is detected. Historically, this estimate was based, at least in part, on the length of time used for detecting poliovirus. In this study, we examined A549 cells infected with human adenovirus type 2 and found that a three-week incubation of virus-infected cells results in a higher number of detected viruses by quantal assay than is seen after two weeks of incubation, with an average 955% increase in Most Probable Number (MPN) from 2 weeks to 3 weeks. This increase suggests that the extended incubation time is essential for accurately estimating viral titer, particularly for slow-growing viruses, UV-treated samples, or samples with low titers of virus. In addition, we found that some UV-treated samples had no detectable MPN at 2 weeks but yielded MPN values after 3 weeks. For UV-treated samples, the average increase in MPN from 2 weeks to 3 weeks was 1401%, while untreated samples averaged a change in MPN of 674%, leading us to believe that UV-damaged viral DNA may be repaired such that viral replication then occurs. Published by Elsevier B.V.
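The MPN statistic comes from a Poisson model of quantal (all-or-nothing) assays. For a single dilution it has a closed form; a simplified sketch with hypothetical well counts (the study would have used standard multi-dilution MPN tables or software, not this one-dilution shortcut):

```python
import math

def mpn_single_dilution(n_wells, n_positive, volume_ml):
    """Single-dilution most probable number per mL, from the Poisson model
    P(well positive) = 1 - exp(-lambda * v)."""
    if n_positive >= n_wells:
        raise ValueError("all wells positive: MPN undefined at this dilution")
    p = n_positive / n_wells
    return -math.log(1.0 - p) / volume_ml

# Hypothetical plate read at 2 vs 3 weeks: 4/10 vs 8/10 wells positive (1 mL each)
mpn_2wk = mpn_single_dilution(10, 4, 1.0)
mpn_3wk = mpn_single_dilution(10, 8, 1.0)
pct_increase = 100 * (mpn_3wk - mpn_2wk) / mpn_2wk
```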
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640×480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
Risk of symptomatic dengue for foreign visitors to the 2014 FIFA World Cup in Brazil
Massad, Eduardo; Wilder-Smith, Annelies; Ximenes, Raphael; Amaku, Marcos; Lopez, Luis Fernandez; Coutinho, Francisco Antonio Bezerra; Coelho, Giovanini Evelim; da Silva, Jarbas Barbosa; Struchiner, Claudio José; Burattini, Marcelo Nascimento
2014-01-01
Brazil will host the FIFA World Cup™, the biggest single-event competition in the world, from June 12 to July 13, 2014, in 12 cities. This event will draw an estimated 600,000 international visitors. Brazil is endemic for dengue; hence, attendees of the 2014 event are theoretically at risk for dengue. We calculated the risk of dengue acquisition for non-immune international travellers to Brazil, depending on the football match schedules, considering the locations and dates of matches in June and July 2014. We estimated the average per-capita risk and the expected number of dengue cases for each host city and each game schedule, based on dengue cases reported to the Brazilian Ministry of Health between 2010 and 2013. On average, the expected number of cases among the 600,000 foreign tourists during the World Cup is 33, varying from 3 to 59. Such risk estimates will not only help individual travellers make adequate pre-travel preparations, but also provide valuable information for public health professionals and policy makers worldwide. Furthermore, estimates of dengue cases in international travellers during the World Cup can help to anticipate the theoretical risk of exportation of dengue into currently non-infected areas. PMID:24863976
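The headline figure is a simple expectation: expected cases = visitors × per-capita risk, summed over cities and schedules. A back-of-envelope check of the aggregate number (the per-city, per-schedule risks themselves are not reproduced here):

```python
def expected_cases(visitors, per_capita_risk):
    """Expected case count under a simple per-capita risk model."""
    return visitors * per_capita_risk

# An expected 33 cases among 600,000 visitors implies this average risk.
implied_risk = 33 / 600_000
cases = expected_cases(600_000, implied_risk)
```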
[Termites (Isoptera) in forest ecosystems of Cat Tien National Park (Southern Vietnam)].
Beliaeva, N V; Tiunov, A V
2010-01-01
The species composition and termite community populations were studied and the total land termites biomass was estimated in five forest habitats of Cat Tien National Park, Southern Vietnam. Twenty-four species of two families, Rhinotermitidae (1 species) and Termitidae (23 species), the predominant representatives of the subfamily Macrotermitinae, were found in mounds and in soil samples. On the test plots the density of termite mounds averaged 68 per hectare, primarily the mounds of three Macrotermes species. Destructive sampling allowed estimation of the caste composition and total community biomass based on six termite mounds of the prevailing species (Globitermes sulphureus, Microcerotermes burmanicus, Macrotermes carbonarius, M. gilvus, M. malaccensis, and Hypotermes obscuriceps). The total number of termites in the nests ranged from 65 000 to 3 150 000 individuals with the total biomass ranging from 185 to 2440 g live weight. The total abundance of nesting Macrotermes species alone could conservatively be estimated as 2.5 million individuals and 20.5 kg live weight per hectare. The number of soil- and litter-feeding termites averaged for the test plots was estimated at about 60 ind./m2. Four species dominating on the test plots (M. carbonarius, M. gilvus, M. malaccensis, and H. obscuriceps) belong to active tree litter feeders.
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates of the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap ≥ 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes of cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap ≥ 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
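Precursor methods of this kind reduce to regressing the following cycle's amplitude on a geomagnetic quantity observed near cycle minimum. A sketch of the idea with an ordinary least-squares fit; all (aa, Rmax) pairs below are hypothetical, for illustration only, and are not the paper's data:

```python
def linfit(xs, ys):
    """Ordinary least-squares line: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

aa_at_min = [12.0, 14.5, 16.0, 19.0, 21.5]   # hypothetical aa index at minimum
r_max     = [105., 120., 135., 155., 180.]   # hypothetical following-cycle Rmax
a, b = linfit(aa_at_min, r_max)
predicted_rmax = a + b * 18.0                # prediction for aa = 18 at minimum
```

A 90-percent prediction interval, as used in the paper, would then be formed from the regression's residual standard error.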
Patel, Shweta V; Driggers, Rita W; Zahn, Christopher M
2011-01-01
To estimate the effect of work hour restrictions on resident outpatient clinical experience. Schedule templates from academic years 1998-1999 (before work hour restrictions), 2002-2003 (when night float rotation was added in anticipation of work hour restrictions), and 2008-2009 (during work hour restrictions) were compared for outpatient clinic experience before and after work hour restrictions were implemented. Actual clinics on specific rotations and estimated patient encounters per scheduled clinic were considered. Between academic year (AY) 1998-1999 and AY 2008-2009 there was a generalized downward trend in average outpatient encounters for postgraduate year (PGY)-2, PGY-3 and PGY-4 residents (45%, 34% and 36%, respectively). For obstetrics, gynecology and ambulatory rotations, there was a downward trend in average outpatient encounters for each rotation type (61%, 14% and 63%, respectively). The average number of scheduled clinics per week was slightly decreased when comparing AY 1998-1999 to either AY 2002-2003 or AY 2008-2009. Rotation schedules before and after work hour restrictions demonstrated a downward trend in the number of scheduled outpatient encounters. These findings indicate a potential negative impact on preparation for clinical practice.
75 FR 45632 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... into agreements with financial institutions doing business in States for the purpose of securing...: Financial institutions doing business in two or more States. Annual Burden Estimates Number of Average... Information Collection Activity; Comment Request Proposed Projects: Title: Financial Institution Data Match...
Hammond, Davyda; Jones, Steven; Lalor, Melinda
2007-02-01
Many metropolitan transit authorities are considering upgrading transit bus fleets to decrease ambient criteria pollutant levels. Advancements in engine and fuel technology have led to a generation of lower-emission buses in a variety of fuel types. Dynamometer tests show substantial reductions in particulate mass emissions for younger buses (<10 years) over older models, but particle number reduction has not been verified in the research. Recent studies suggest that particle number is a more important factor than particle mass in determining health effects. In-vehicle particle number concentration measurements on conventional diesel, oxidation-catalyst diesel and compressed natural gas transit buses are compared to estimate relative in-vehicle particulate exposures. Two primary consistencies are observed from the data: the CNG buses have average particle count concentrations near the average concentrations for the oxidation-catalyst diesel buses, and the conventional diesel buses have average particle count concentrations approximately three to four times greater than the CNG buses. Particle number concentrations are also noticeably affected by bus idling behavior and ventilation options such as window position and air conditioning.
Trpis, Milan
1973-01-01
The breeding of larvae of Aedes aegypti, Aedes simpsoni, and Eretmapodites quinquevittatus in empty shells of Achatina fulica was studied in the coastal zone of Dar es Salaam, Tanzania. The average density of shells was estimated to be 228 per ha. From 11 to 35% were positive for mosquito larvae. A. aegypti were found in 82-84% of positive shells; A. simpsoni in 8-13%. On Msasani peninsula, during the 3-month rainy season April—June 1970, the larval density of A. aegypti in shells was estimated at 1 100 per ha, that of A. simpsoni and E. quinquevittatus being estimated at 60 and 280 larvae per ha, respectively. Empty shells of A. fulica may contain up to 250 ml of water (average: 56.5 ml). The number of larvae per shell varies from 1 to 35 (average: 8.4) and it was estimated that, depending on the availability of food, and other factors, approximately 10 ml of water are required per larva. Viable eggs of A. aegypti were still to be found in 4% of the shells at the end of the dry season. PMID:4148745
Nelson, Jon P
2014-01-01
Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer price elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using a funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
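The "weighted means" in such a meta-analysis are typically inverse-variance weighted: each study's elasticity is weighted by the reciprocal of its squared standard error. A minimal fixed-effect sketch with hypothetical elasticities and standard errors (not the paper's 191 estimates):

```python
def weighted_mean(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical beer price elasticities with their standard errors
elasticities = [-0.15, -0.25, -0.35, -0.20]
std_errors   = [0.05, 0.10, 0.20, 0.08]
pooled = weighted_mean(elasticities, std_errors)
```

Random-effects pooling, as also reported in the paper, adds a between-study variance component to each weight.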
Validation of self-reported cannabis dose and potency: an ecological study.
van der Pol, Peggy; Liebregts, Nienke; de Graaf, Ron; Korf, Dirk J; van den Brink, Wim; van Laar, Margriet
2013-10-01
To assess the reliability and validity of self-reported cannabis dose and potency measures. Cross-sectional study comparing self-reports with objective measures of amount of cannabis and delta-9-tetrahydrocannabinol (THC) concentration. Ecological study with assessments at participants' homes or in a coffee shop. Young adult frequent cannabis users (n = 106) from the Dutch Cannabis Dependence (CanDep) study. The objectively measured amount of cannabis per joint (dose in grams) was compared with self-reported estimates using a prompt card and average number of joints made from 1 g of cannabis. In addition, objectively assessed THC concentration in the participant's cannabis was compared with self-reported level of intoxication, subjective estimate of cannabis potency and price per gram of cannabis. Objective estimates of doses per joint (0.07-0.88 g/joint) and cannabis potency (1.1-24.7%) varied widely. Self-reported measures of dose were imprecise, but at group level, average dose per joint was estimated accurately with the number of joints made from 1 g [limit of agreement (LOA) = -0.02 g, 95% confidence interval (CI) = -0.29; 0.26], whereas the prompt card resulted in serious underestimation (LOA = 0.14 g, 95% CI = -0.10; 0.37). THC concentration in cannabis was associated with subjective potency ['average' 3.77% (P = 0.002) and '(very) strong' 5.13% more THC (P < 0.001) than '(very) mild' cannabis] and with cannabis price (about 1% increase in THC concentration per euro spent on 1 g of cannabis, P < 0.001), but not with level of intoxication. Self-report measures relating to cannabis use appear at best to be associated weakly with objective measures. Of the self-report measures, number of joints per gram, cannabis price and subjective potency have at least some validity. © 2013 Society for the Study of Addiction.
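The limits of agreement (LOA) reported above come from Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 standard deviations. A self-contained sketch with hypothetical per-joint doses (the study's raw data are not reproduced here):

```python
import math

def bland_altman(measure_a, measure_b):
    """Mean bias and 95% limits of agreement between two paired measures."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical per-joint doses (g): self-reported vs objectively weighed
self_report = [0.30, 0.25, 0.40, 0.20, 0.35]
objective   = [0.28, 0.30, 0.36, 0.22, 0.33]
bias, loa_low, loa_high = bland_altman(self_report, objective)
```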
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. 
These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
Wallace, Dorothy; Prosper, Olivia; Savos, Jacob; Dunham, Ann M; Chipman, Jonathan W; Shi, Xun; Ndenga, Bryson; Githeko, Andrew
2017-03-01
A dynamical model of Anopheles gambiae larval and adult populations is constructed that matches temperature-dependent maturation times and mortality measured experimentally as well as larval instar and adult mosquito emergence data from field studies in the Kenya Highlands. Spectral classification of high-resolution satellite imagery is used to estimate household density. Indoor resting densities collected over a period of one year combined with predictions of the dynamical model give estimates of both aquatic habitat and total adult mosquito densities. Temperature and precipitation patterns are derived from monthly records. Precipitation patterns are compared with average and extreme habitat estimates to estimate available aquatic habitat in an annual cycle. These estimates are coupled with the original model to produce estimates of adult and larval populations dependent on changing aquatic carrying capacity for larvae and changing maturation and mortality dependent on temperature. This paper offers a general method for estimating the total area of aquatic habitat in a given region, based on larval counts, emergence rates, indoor resting density data, and number of households. Altering the average daily temperature and the average daily rainfall simulates the effect of climate change on annual cycles of prevalence of An. gambiae adults. We show that small increases in average annual temperature have a large impact on adult mosquito density, whether measured at model equilibrium values for a single square meter of habitat or tracked over the course of a year of varying habitat availability and temperature. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Design, Monitoring, and Validation of a High Performance Sustainable Building
2013-08-01
normalized by building average population and square footage. Longstreet had a larger number of emergency calls than CESS, but the ESTCP team did not have...system was measured separately from the domestic water use. The rainwater was used to displace potable water for toilet flushing and vehicle washing... [Remainder of the record is report documentation page boilerplate.]
Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake
Thywissen, K.; Boatwright, J.
1998-01-01
We map the shaking intensity suffered in Los Angeles County during the 17 January 1994 Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km2 but are as small as 2 and 1 km2 in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.
Accident rates for novice glider pilots vs. pilots with experience.
Jarvis, Steve; Harris, Don
2007-12-01
It is a popular notion in gliding that newly soloed pilots have a low accident rate. The intention of this study was to review the support for such a hypothesis in the literature and to explore it using UK accident totals and measures of flying exposure. Log sheets from UK gliding clubs were used to estimate flying exposure for inexperienced glider pilots. This was used along with accident data and annual flight statistics for the period 2004-2006 in order to estimate accident rates that could be compared between the pilot groups. The UK accident rate for glider pilots from 2004-2006 was 1 accident in every 3534 launches and 1590 flying hours. The lowest estimated rate, for pilots with up to 1 h of experience, was 1 accident every 976 launches and 149 h flown. For pilots with up to 10 h of experience the figures were 1 accident in 1274 launches and 503 h. From 2004-2006, UK glider pilots with 10 h or less experience in command had twice the number of accidents per launch and three times as many accidents per hour flown as the average for UK glider pilots. Pilots with 1 h of experience or less were involved in at least 10 times the number of accidents per hour flown as the UK average and had more than 3.5 times the number of accidents per launch.
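The rate ratios follow directly from the per-launch figures quoted in the abstract: dividing the novice rate (1 accident per 976 launches) by the overall UK rate (1 per 3534 launches) reproduces the "more than 3.5 times" claim:

```python
def rate_per_launch(n_accidents, n_launches):
    """Accidents per launch; the reciprocal is '1 accident every X launches'."""
    return n_accidents / n_launches

overall = rate_per_launch(1, 3534)   # overall UK rate, 2004-2006
novice = rate_per_launch(1, 976)     # pilots with up to 1 h of experience
rate_ratio = novice / overall        # how many times the overall rate
```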
Ultrafast Spectroscopy of Chromophores
2012-01-31
Bats adjust their pulse emission rates with swarm size in the field.
Lin, Yuan; Abaid, Nicole; Müller, Rolf
2016-12-01
Flying in swarms, e.g., when exiting a cave, could pose a problem to bats that use an active biosonar system because the animals could risk jamming each other's biosonar signals. Studies from current literature have found different results with regard to whether bats reduce or increase emission rate in the presence of jamming ultrasound. In the present work, the number of Eastern bent-wing bats (Miniopterus fuliginosus) that were flying inside a cave during emergence was estimated along with the number of signal pulses recorded. Over the range of average bat numbers present in the recording (0 to 14 bats), the average number of detected pulses per bat increased with the average number of bats. The result was interpreted as an indication that the Eastern bent-wing bats increased their emission rate and/or pulse amplitude with swarm size on average. This finding could be explained by the hypothesis that the bats might not suffer from substantial jamming probabilities under the observed density regimes, so jamming might not have been a limiting factor for their emissions. When jamming did occur, the bats could avoid it through changing the pulse amplitude and other pulse properties such as duration or frequency, which has been suggested by other studies. More importantly, the increased biosonar activities may have addressed a collision-avoidance challenge that was posed by the increased swarm size.
Heba, Elhamy R.; Desai, Ajinkya; Zand, Kevin A.; Hamilton, Gavin; Wolfson, Tanya; Schlein, Alexandra N.; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B.; Middleton, Michael S.
2016-01-01
Purpose To determine the accuracy and the effect of possible subject-based confounders of magnitude-based magnetic resonance imaging (MRI) for estimating hepatic proton density fat fraction (PDFF) for different numbers of echoes in adults with known or suspected nonalcoholic fatty liver disease, using MR spectroscopy (MRS) as a reference. Materials and Methods In this retrospective analysis of 506 adults, hepatic PDFF was estimated by unenhanced 3.0T MRI, using right-lobe MRS as reference. Regions of interest placed on source images and on six-echo parametric PDFF maps were colocalized to MRS voxel location. Accuracy using different numbers of echoes was assessed by regression and Bland–Altman analysis; slope, intercept, average bias, and R2 were calculated. The effect of age, sex, and body mass index (BMI) on hepatic PDFF accuracy was investigated using multivariate linear regression analyses. Results MRI closely agreed with MRS for all tested methods. For three- to six-echo methods, slope, regression intercept, average bias, and R2 were 1.01–0.99, 0.11–0.62%, 0.24–0.56%, and 0.981–0.982, respectively. Slope was closest to unity for the five-echo method. The two-echo method was least accurate, underestimating PDFF by an average of 2.93%, compared to an average of 0.23–0.69% for the other methods. Statistically significant but clinically nonmeaningful effects on PDFF error were found for subject BMI (P range: 0.0016 to 0.0783), male sex (P range: 0.015 to 0.037), and no statistically significant effect was found for subject age (P range: 0.18–0.24). Conclusion Hepatic magnitude-based MRI PDFF estimates using three, four, five, and six echoes, and six-echo parametric maps are accurate compared to reference MRS values, and that accuracy is not meaningfully confounded by age, sex, or BMI. PMID:26201284
Estimating neuronal connectivity from axonal and dendritic density fields
van Pelt, Jaap; van Ooyen, Arjen
2013-01-01
Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come into close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic “mass.” A population mean “mass” density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion requiring that axonal and dendritic line pieces cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we develop new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields.
PMID:24324430
High-Fidelity Simulations of Moving and Flexible Airfoils at Low Reynolds Numbers (Postprint)
2010-02-01
The phase-averaged structures for both values of the Reynolds number are found to be in good agreement with the experimental data.
Workshop on Robotics Materials
2017-12-28
RPPR Final Report (report date: 12-Dec-2017). Principal Investigator: Nicolaus Correll, Ph.D. STEM Degrees: 4; STEM Participants: 1. Technology Transfer: Nothing to Report.
Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data
2015-12-01
Prepared for: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. Distribution Statement: Approved for public release.
Fluorescent Lamp Replacement Study
2017-07-01
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series, we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are outside the applicability range of the normal distribution. The method is demonstrated on numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.
Effectiveness of Acupressure Treatment for Pain Management and Fatigue Relief in Gulf War Veterans
2017-12-01
Award Number: W81XWH-12-1-0567.
Development of a Tetrathioether (S4) Bifunctional Chelate System for Rh-105
2013-07-01
Development of a Tetrathioether (S4) Bifunctional Chelate System for Rh-105
2013-06-01
The Shock and Vibration Digest. Volume 13, Number 12
1981-12-01
Includes book reviews covering statistical energy analysis, among them Lyon, R.H., Statistical Energy Analysis of Dynamic Systems, MIT Press, Cambridge, MA.
Use of the SONET score to evaluate Urgent Care Center overcrowding: a prospective pilot study.
Wang, Hao; Robinson, Richard D; Cowden, Chad D; Gorman, Violet A; Cook, Christopher D; Gicheru, Eugene K; Schrader, Chet D; Jayswal, Rani D; Zenarosa, Nestor R
2015-04-14
To derive a tool to determine Urgent Care Center (UCC) crowding and investigate the association between different levels of UCC overcrowding and negative patient care outcomes. Prospective pilot study. Single centre study in the USA. 3565 patients who registered at the UCC during the 21-day study period were included. Patients whose overcrowding status could not be estimated, owing to incomplete collection of operational variables at the time of registration, were excluded from this study; 3139 patients were enrolled in the final data analysis. A crowding estimation tool (SONET: Severely overcrowded, Overcrowded and Not overcrowded Estimation Tool) was derived using linear regression analysis. The average length of stay (LOS) of UCC patients and the number of patients who left without being seen (LWBS) were calculated and compared across the three levels of UCC crowding. Four independent operational variables affected the UCC overcrowding score: the total number of patients, the number of patients with results pending, the number of patients in the waiting room and the longest time a patient remained in the waiting room. In addition, UCC overcrowding was associated with longer average LOS (not overcrowded: 133±76 min, overcrowded: 169±79 min, and severely overcrowded: 196±87 min, p<0.001) and an increased number of LWBS patients (not overcrowded: 0.28±0.69 patients, overcrowded: 0.64±0.98, and severely overcrowded: 1.00±0.97). The overcrowding estimation tool (SONET) derived in this study might be used to determine different levels of crowding in a high volume UCC setting. It also showed that UCC overcrowding might be associated with negative patient care outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
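The SONET tool described above is a linear score over four operational variables. A minimal sketch of how such a score and its cut-offs could work; the weights and thresholds below are hypothetical placeholders, not the published coefficients:

```python
def sonet_score(total_patients, results_pending, waiting_patients,
                longest_wait_min, weights=(0.5, 0.3, 0.4, 0.02), intercept=0.0):
    # Linear crowding score in the spirit of SONET; coefficients are
    # illustrative assumptions, not the study's fitted values.
    x = (total_patients, results_pending, waiting_patients, longest_wait_min)
    return intercept + sum(w * v for w, v in zip(weights, x))

def crowding_level(score, overcrowded=20.0, severe=35.0):
    # Map the continuous score to the three crowding categories.
    if score >= severe:
        return "severely overcrowded"
    if score >= overcrowded:
        return "overcrowded"
    return "not overcrowded"

s = sonet_score(40, 12, 15, 90)
print(round(s, 1), crowding_level(s))  # 31.4 overcrowded
```

With real data, the weights would be fitted by linear regression and the cut-offs chosen from the score distribution.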
Average probability that a "cold hit" in a DNA database search results in an erroneous attribution.
Song, Yun S; Patil, Anand; Murphy, Erin E; Slatkin, Montgomery
2009-01-01
We consider a hypothetical series of cases in which the DNA profile of a crime-scene sample is found to match a known profile in a DNA database (i.e., a "cold hit"), resulting in the identification of a suspect based only on genetic evidence. We show that the average probability that there is another person in the population whose profile matches the crime-scene sample but who is not in the database is approximately 2(N - d)p(A), where N is the number of individuals in the population, d is the number of profiles in the database, and p(A) is the average match probability (AMP) for the population. The AMP is estimated by computing the average of the probabilities that two individuals in the population have the same profile. We show further that if a priori each individual in the population is equally likely to have left the crime-scene sample, then the average probability that the database search attributes the crime-scene sample to a wrong person is (N - d)p(A).
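The two approximations in this abstract are simple closed forms. A sketch in Python, with purely illustrative values for N, d, and the average match probability p(A):

```python
def match_elsewhere_prob(N, d, p_A):
    # Average probability that someone in the population of N but outside
    # the database of d profiles also matches the sample: approx. 2(N - d)p_A.
    return 2 * (N - d) * p_A

def wrong_attribution_prob(N, d, p_A):
    # Average probability that the cold hit attributes the sample to the
    # wrong person, if a priori any of the N individuals could be the
    # source: (N - d)p_A.
    return (N - d) * p_A

# Illustrative (hypothetical) numbers: population of 10 million, database of
# 1 million profiles, average match probability 1e-9.
N, d, p_A = 10_000_000, 1_000_000, 1e-9
print(round(match_elsewhere_prob(N, d, p_A), 6))   # 0.018
print(round(wrong_attribution_prob(N, d, p_A), 6)) # 0.009
```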
Contributing factors to vehicle to vehicle crash frequency and severity under rainfall.
Jung, Soyoung; Jang, Kitae; Yoon, Yoonjin; Kang, Sanghyeok
2014-09-01
This study combined vehicle to vehicle crash frequency and severity estimations to examine factor impacts on Wisconsin highway safety in rainy weather. Because of data deficiency, the real-time water film depth, the car-following distance, and the vertical curve grade were estimated with available data sources and a GIS analysis to capture rainy weather conditions at the crash location and time. Using a negative binomial regression for crash frequency estimation, the average annual daily traffic per lane, the interaction between the posted speed limit change and the existence of an off-ramp, and the interaction between the travel lane number change and the pavement surface material change were found to increase the likelihood of vehicle to vehicle crashes under rainfall. However, more average daily rainfall per month and a wider left shoulder were identified as factors that decrease the likelihood of vehicle to vehicle crashes. In the crash severity estimation using the multinomial logit model that outperformed the ordered logit model, the travel lane number, the interaction between the travel lane number and the slow grade, the deep water film, and the rear-end collision type were more likely to increase the likelihood of injury crashes under rainfall compared with crashes involving only property damage. As an exploratory data analysis, this study provides insight into potential strategies for rainy weather highway safety improvement, specifically, the following weather-sensitive strategies: road design and ITS implementation for drivers' safety awareness under rainfall. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
Ching, P; Birmingham, M; Goodman, T; Sutter, R; Loevinsohn, B
2000-01-01
Country-specific activity and coverage data were used to estimate the childhood mortality impact (deaths averted) and costs of integrating vitamin A supplements into immunization campaigns conducted in 1998 and 1999. More than 94 million doses of vitamin A were administered in 41 countries in 1998, helping to avert nearly 169,000 deaths. During 1999, delivery of more than 97 million doses in 50 countries helped avert an estimated 242,000 deaths. The estimated incremental cost per death averted was US$72 (range: 36-142) in 1998 and US$64 (range: 32-126) in 1999. The estimated average total cost of providing supplementation per death averted was US$310 (range: 157-609) in 1998 and US$276 (range: 139-540) in 1999. Costs per death averted varied by campaign, depending on the number and proportion of the child population reached, number of doses received per child, and child mortality rates. PMID:11029982
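The cost-effectiveness figures above are simple ratios of programme cost to deaths averted. A sketch, where the US$15.5 million cost is back-computed from the abstract's 1999 incremental figure and is therefore an assumption, not a reported number:

```python
def cost_per_death_averted(total_cost_usd, deaths_averted):
    # Simple ratio used in the abstract: programme cost divided by the
    # estimated number of deaths averted.
    return total_cost_usd / deaths_averted

# Back-computed (assumed) 1999 incremental cost consistent with the
# abstract's US$64 per death averted and 242,000 deaths averted.
print(round(cost_per_death_averted(15_500_000, 242_000)))  # 64
```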
76 FR 50186 - Submission for OMB Emergency Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... (ICR) to the Office of Management and Budget (OMB) for review and clearance in accordance with the.... Agency Number: None. Affected Public: Nonprofit organizations and congregations. Total Respondents: 600. Frequency: One time. Average Time per Response: 40 hours. Estimated Total Burden Hours: 24,000 hours. Total...
Letting Your Students "Fly" in the Classroom.
ERIC Educational Resources Information Center
Adams, Thomas
1997-01-01
Students investigate the concept of motion by making simple paper airplanes and flying them in the classroom. Students are introduced to conversion factors to calculate various speeds. Additional activities include rounding decimal numbers, estimating, finding averages, making bar graphs, and solving problems. Offers ideas for extension such as…
77 FR 59225 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
...-17) requires approximately 477 registered transfer agents to conduct searches using third party... agencies with monitoring transfer agents and ensuring compliance with the rule. We estimate that the average number of hours necessary for each transfer agent to comply with Rule 17Ad-17 is five hours...
78 FR 54722 - Reports, Forms and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-05
... submission requesting confidential treatment. This estimate will vary based on the size of the submission, with smaller and voluntary submissions taking considerably less time to prepare. The agency based this... approximately 460 requests for confidential treatment annually. This figure is based on the average number of...
Two-mode bosonic quantum metrology with number fluctuations
NASA Astrophysics Data System (ADS)
De Pasquale, Antonella; Facchi, Paolo; Florio, Giuseppe; Giovannetti, Vittorio; Matsuoka, Koji; Yuasa, Kazuya
2015-10-01
We search for the optimal quantum pure states of identical bosonic particles for applications in quantum metrology, in particular, in the estimation of a single parameter for the generic two-mode interferometric setup. We consider the general case in which the total number of particles fluctuates around an average N with variance ΔN². By recasting the problem in the framework of classical probability, we clarify the maximal accuracy attainable and show that it is always larger than the one reachable with a fixed number of particles (i.e., ΔN = 0). In particular, for larger fluctuations, the error in the estimation diminishes proportionally to 1/ΔN, below the Heisenberg-like scaling 1/N. We also clarify the best input state, which is a quasi-NOON state for a generic setup and, for some special cases, a two-mode Schrödinger-cat state with a vacuum component. In addition, we search for the best state within the class of pure Gaussian states with a given average N, which is revealed to be a product state (with no entanglement) with a squeezed vacuum in one mode and the vacuum in the other.
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field to produce an object wave, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired for averaging the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves of the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
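The weighted-average model described above combines cues with weights proportional to their reliability (inverse variance), which yields the minimum-variance unbiased linear estimate. A sketch with hypothetical visual and haptic slant estimates:

```python
def optimal_combination(estimates, sigmas):
    # Minimum-variance unbiased linear combination: each cue is weighted by
    # its reliability (inverse variance), normalized to sum to one.
    reliabilities = [1.0 / s ** 2 for s in sigmas]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_var = 1.0 / total  # variance of the combined estimate
    return fused, fused_var, weights

# Hypothetical slants (degrees): visual cue 30 ± 2, haptic cue 36 ± 4.
slant, var, w = optimal_combination([30.0, 36.0], [2.0, 4.0])
print(round(slant, 2), round(var, 2), [round(x, 2) for x in w])  # 31.2 3.2 [0.8, 0.2]
```

Note that the combined variance (3.2) is smaller than either single-cue variance (4 and 16), which is the benefit statistically optimal combination promises.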
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loranger, S.; Houde, L.; Schetagne, R.
1995-12-31
Hydro-Quebec is planning to build two hydroelectric reservoirs in the upper Saint-Maurice River, which would flood about 80% of the surrounding area. The methylmercury (MeHg) content in freshwater fish will therefore tend to increase during the first few years. This development will have a direct impact on the amount of MeHg that the actual users of this river section are exposed to. The objective of this study is to assess the consumption of local fish of these target groups using a Monte-Carlo approach. This study is part of a larger research project aimed at assessing human exposure and the health risks related to MeHg contamination in local fish. The fish consumption rate for recreational freshwater anglers was calculated using the duration of the average annual fishing trip, the average number of catches per species, the average fish weight per species exceeding a specific length of fish usually caught, and the edible portion of fish consumed. This rate was calculated for the native communities based on the total number of meals per year per species, the average fish weight per species, and the edible portion. Based on these calculations, average intake for sport fishermen is estimated at 6.9 g/day (sd = 6.4). This value is 5 to 25 times lower on average than for other North American native communities. However, it must be pointed out that the food habits of the native population were very similar to those of non-native populations; less than 30% of the food comes from traditional sources.
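The anglers' intake rate above is a product of catch, weight, and edible-portion factors. A minimal Monte-Carlo sketch in that spirit; the distributions and parameters below are hypothetical, not those of the study:

```python
import random

def simulate_intake(n=20_000, seed=1):
    # Monte-Carlo sketch of an angler's average fish intake (g/day).
    # All input distributions are illustrative assumptions; the study's
    # actual survey-based distributions are not reproduced here.
    random.seed(seed)
    intakes = []
    for _ in range(n):
        catches = max(0.0, random.gauss(25, 10))              # fish per year
        weight = max(0.0, random.gauss(400, 150))             # grams per fish
        edible = min(1.0, max(0.0, random.gauss(0.45, 0.1)))  # edible fraction
        intakes.append(catches * weight * edible / 365.0)     # g/day
    return sum(intakes) / n

print(round(simulate_intake(), 1))
```

With the assumed means (25 catches/year, 400 g/fish, 45% edible), the expected intake is roughly 25 × 400 × 0.45 / 365 ≈ 12 g/day, on the same order as the study's 6.9 g/day estimate.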
Truong, Q T; Nguyen, Q V; Truong, V T; Park, H C; Byun, D Y; Goo, N S
2011-09-01
We present an unsteady blade element theory (BET) model to estimate the aerodynamic forces produced by a freely flying beetle and a beetle-mimicking flapping wing system. Added mass and rotational forces are included to accommodate the unsteady force. In addition to the aerodynamic forces needed to accurately estimate the time history of the forces, the inertial forces of the wings are also calculated. All of the force components are considered based on the full three-dimensional (3D) motion of the wing. The result obtained by the present BET model is validated with the data which were presented in a reference paper. The difference between the averages of the estimated forces (lift and drag) and the measured forces in the reference is about 5.7%. The BET model is also used to estimate the force produced by a freely flying beetle and a beetle-mimicking flapping wing system. The wing kinematics used in the BET calculation of a real beetle and the flapping wing system are captured using high-speed cameras. The results show that the average estimated vertical force of the beetle is reasonably close to the weight of the beetle, and the average estimated thrust of the beetle-mimicking flapping wing system is in good agreement with the measured value. Our results show that the unsteady lift and drag coefficients measured by Dickinson et al are still useful for relatively higher Reynolds number cases, and the proposed BET can be a good way to estimate the force produced by a flapping wing system.
An Analysis of Image Segmentation Time in Beam’s-Eye-View Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chun; Spelbring, D.R.; Chen, George T.Y.
In this work we tabulate and histogram the image segmentation time for beam’s eye view (BEV) treatment planning in our center. The average time needed to generate contours on CT images delineating normal structures and treatment target volumes is calculated using a database containing over 500 patients’ BEV plans. The average number of contours and the total image segmentation time needed for BEV plans in three common treatment sites, namely head/neck, lung/chest, and prostate, were estimated.
ERIC Educational Resources Information Center
Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew
2002-01-01
Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.
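The bias of the (Tmax + Tmin)/2 estimator is easy to reproduce on a synthetic diurnal cycle: any asymmetry in the daily temperature curve shifts the min/max midpoint away from the true mean. A sketch with an artificial (not SURFRAD) temperature profile:

```python
import math

def daily_means(samples_per_day=1440):
    # Synthetic diurnal cycle (hypothetical, not SURFRAD data): a sinusoid
    # plus a second-harmonic term that makes the curve asymmetric, sampled
    # once per minute like the SURFRAD records.
    temps = [10 + 8 * math.sin(2 * math.pi * i / samples_per_day)
             + 2 * math.cos(4 * math.pi * i / samples_per_day)
             for i in range(samples_per_day)]
    true_mean = sum(temps) / len(temps)          # average of all samples
    minmax_mean = (max(temps) + min(temps)) / 2  # (Tmax + Tmin)/2 estimate
    return true_mean, minmax_mean

true_mean, minmax_mean = daily_means()
print(round(minmax_mean - true_mean, 2))  # -2.0 (bias of the min/max estimate)
```

For this profile the harmonics average to zero (true mean 10 °C), but the asymmetric peak and trough put the min/max midpoint at 8 °C, a -2 °C bias; a pure sinusoid would show no bias at all.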
Data challenges in estimating the capacity value of solar photovoltaics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul
We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years’ historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.
Shum, Mona; Kelsh, Michael A; Sheppard, Asher R; Zhao, Ke
2011-01-01
Most epidemiologic studies of potential health impacts of mobile phones rely on self-reported information, which can lead to exposure misclassification. We compared self-reported questionnaire data and phone billing records for 60 participants over a 3-year period (2002-2004). Phone usage information was compared by calculating the mean and median number of calls and duration of use, as well as correlation coefficients and associated P-values. Average call duration from self-reports was slightly lower than billing records (2.1 min vs. 2.8 min, P = 0.01). Participants reported a higher number of average daily calls than billing records (7.9 vs. 4.1, P = 0.002). Correlation coefficients for average minutes per day of mobile phone use and average number of calls per day were relatively high (R = 0.71 and 0.69, respectively, P < 0.001). Information reported at the monthly level tended to be more accurate than estimates of weekly or daily use. Our findings of modest correlations between self-reported mobile phone usage and billing records and substantial variability in recall are consistent with previous studies. However, the direction of over- and under-reporting was not consistent with previous research. We did not observe increased variability over longer periods of recall or a pattern of lower accuracy among older age groups compared with younger groups. Study limitations included a relatively small sample size, low participation rates, and potentially limited generalizability. The variability within studies and non-uniformity across studies indicates that estimation of the frequency and duration of phone use by questionnaires should be supplemented with subscriber records whenever practical. © 2010 Wiley-Liss, Inc.
Divisibility patterns of natural numbers on a complex network.
Shekatkar, Snehal M; Bhagwat, Chandrasheel; Ambika, G
2015-09-16
Investigation of divisibility properties of natural numbers is one of the most important themes in the theory of numbers. Various tools have been developed over the centuries to discover and study the various patterns in the sequence of natural numbers in the context of divisibility. In the present paper, we study the divisibility of natural numbers using the framework of a growing complex network. In particular, using tools from the field of statistical inference, we show that the network is scale-free but has a non-stationary degree distribution. Along with this, we report a new kind of similarity pattern for the local clustering, which we call "stretching similarity", in this network. We also show that the various characteristics like average degree, global clustering coefficient and assortativity coefficient of the network vary smoothly with the size of the network. Using analytical arguments we estimate the asymptotic behavior of global clustering and average degree which is validated using numerical analysis.
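A divisibility network of the kind studied above can be reconstructed directly (this sketch links a and b whenever one divides the other; the paper's exact construction may differ):

```python
def divisibility_network(n):
    # Undirected network on {1..n}: connect a and b iff a divides b.
    adj = {i: set() for i in range(1, n + 1)}
    for a in range(1, n + 1):
        for b in range(2 * a, n + 1, a):  # proper multiples of a
            adj[a].add(b)
            adj[b].add(a)
    return adj

def average_degree(adj):
    # Mean number of neighbours per node.
    return sum(len(v) for v in adj.values()) / len(adj)

adj = divisibility_network(100)
print(round(average_degree(adj), 2))  # 7.64
```

The edge count equals the number of divisor pairs, so the average degree grows roughly like ln n, consistent with the smooth size dependence the paper reports.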
NASA Astrophysics Data System (ADS)
Loomis, John; McTernan, James
2014-03-01
Whitewater kayaking and river rafting require adequate instream flows, which are often adversely affected by upstream water diversions. However, there are very few studies in the USA of the economic value of instream flow to inform environmental managers. This study estimates the economic value of instream flow to non-commercial kayakers using a Travel Cost Method recreation demand model and the Contingent Valuation Method (CVM), a type of Contingent Behavior Method (CBM). Data were obtained from a visitor survey administered along the Poudre River in Colorado. In the dichotomous choice CVM willingness to pay (WTP) question, visitors were asked if they would still visit the river if the cost of their trip was $Y higher, and the level of $Y was varied across the sample. The CVM yielded an estimate of WTP that was sensitive to flow, ranging from $55 per person per day at 300 cubic feet per second (CFS) to a maximum of $97 per person per day at flows of 1900 CFS. The recreation demand model estimated a boater's number of trips per season. We found the number of trips taken was also sensitive to flow, ranging from as little as 1.63 trips at 300 CFS to a maximum of 14 trips over the season at 1900 CFS. Thus, benefits per trip and number of trips peak at the same flow. With an average of about 100 non-commercial boaters per day, the maximum marginal value per acre-foot averages about $220. This value exceeds irrigation water values in this area of Colorado.
Garrett, John D.; Fear, Elise C.
2015-01-01
Prior information about the average dielectric properties of breast tissue can be incorporated into microwave breast imaging techniques to improve the results. Rapidly providing this information relies on acquiring a limited number of measurements and processing these measurements with efficient algorithms. Previously, systems were developed to measure the transmission of microwave signals through breast tissue, and simplifications were applied to estimate the average properties. These methods provided reasonable estimates, but they were sensitive to multipath. In this paper, a new technique to estimate the average properties of breast tissues while addressing multipath is presented. Three steps are used to process transmission measurements. First, the effects of multipath are removed: where multipath is present, multiple peaks are observed in the time domain, and a Tukey window is used to time-gate a single peak and, therefore, select a single path through the breast. Second, the antenna response, determined through simulations, is deconvolved from the transmission coefficient to isolate the response of the tissue in the breast interior. Finally, the complex permittivity is estimated using an iterative approach. This technique was validated using simulated and physical homogeneous breast models and tested with results from a recent patient study. PMID:25585106
Deshmukh, Ashish A; Zhao, Hui; Franzini, Luisa; Lairson, David R; Chiao, Elizabeth Y; Das, Prajnan; Swartz, Michael D; Giordano, Sharon H; Cantor, Scott B
2018-02-01
To determine the lifetime and phase-specific cost of anal cancer management and the economic burden of anal cancer care in elderly (66 y and older) patients in the United States. For this study, we used Surveillance Epidemiology and End Results-Medicare linked database (1992 to 2009). We matched newly diagnosed anal cancer patients (by age and sex) to noncancer controls. We estimated survival time from the date of diagnosis until death. Lifetime and average annual cost by stage and age at diagnosis were estimated by combining survival data with Medicare claims. The average lifetime cost, proportion of patients who were elderly, and the number of incident cases were used to estimate the economic burden. The average lifetime cost for patients with anal cancer was US$50,150 (N=2227) (2014 US dollars). The average annual cost in men and women was US$8025 and US$5124, respectively. The overall survival after the diagnosis of cancer was 8.42 years. As the age and stage at diagnosis increased, so did the cost of cancer-related care. The anal cancer-related lifetime economic burden in Medicare patients in the United States was US$112 million. Although the prevalence of anal cancer among the elderly in the United States is small, its economic burden is considerable.
Population dynamics of the Concho water snake in rivers and reservoirs
Whiting, M.J.; Dixon, J.R.; Greene, B.D.; Mueller, J.M.; Thornton, O.W.; Hatfield, J.S.; Nichols, J.D.; Hines, J.E.
2008-01-01
The Concho Water Snake (Nerodia harteri paucimaculata) is confined to the Concho–Colorado River valley of central Texas, thereby occupying one of the smallest geographic ranges of any North American snake. In 1986, N. h. paucimaculata was designated as a federally threatened species, in large part because of reservoir projects that were perceived to adversely affect the amount of habitat available to the snake. During a ten-year period (1987–1996), we conducted capture–recapture field studies to assess the dynamics of five subpopulations of snakes in both natural (river) and man-made (reservoir) habitats. Because of differential sampling of subpopulations, we present separate results for all five subpopulations combined (including large reservoirs) and for three of the five subpopulations (excluding large reservoirs). We used multistate capture–recapture models to deal with stochastic transitions between pre-reproductive and reproductive size classes and to allow for the possibility of different survival and capture probabilities for the two classes. We also estimated both the finite rate of increase (λ) for a deterministic, stage-based, female-only matrix model using the average litter size, and the average rate of adult population change, λ̂, which describes changes in numbers of adult snakes, using a direct capture–recapture approach to estimation. Average annual adult survival was about 0.23 and similar for males and females. Average annual survival for subadults was about 0.14. The parameter estimates from the stage-based projection matrix analysis all yielded asymptotic values of λ < 1, suggesting populations that are not viable. However, the direct estimates of average adult λ for the three subpopulations excluding major reservoirs were λ̂ = 1.26, SE(λ̂) = 0.18 and λ̂ = 0.99, SE(λ̂) = 0.79, based on two different models.
Thus, the direct estimation approach did not provide strong evidence of population declines of the riverine subpopulations, but the estimates are characterized by substantial uncertainty.
NASA Technical Reports Server (NTRS)
Golub, L.; Krieger, A. S.; Vaiana, G. S.
1976-01-01
Observations of X-ray bright points (XBP) over a six-month interval in 1973 show significant variations in both the number density of XBP as a function of heliographic longitude and in the full-sun average number of XBP from one rotation to the next. The observed increases in XBP emergence are estimated to be equivalent to several large active regions emerging per day for several months. The number of XBP emerging at high latitudes varies in phase with the low-latitude variation and reaches a maximum approximately simultaneous with a major outbreak of active regions. The quantity of magnetic flux emerging in the form of XBP at high latitudes alone is estimated to be as large as the contribution from all active regions.
A Broadband Microwave Radiometer Technique at X-band for Rain and Drop Size Distribution Estimation
NASA Technical Reports Server (NTRS)
Meneghini, R.
2005-01-01
Radiometric brightness temperatures below about 12 GHz provide accurate estimates of path attenuation through precipitation and cloud water. Multiple brightness temperature measurements at X-band frequencies can be used to estimate rainfall rate and parameters of the drop size distribution once a correction for cloud water attenuation is made. Employing a stratiform storm model, calculations of the brightness temperatures at 9.5, 10 and 12 GHz are used to simulate estimates of path-averaged median mass diameter, number concentration and rainfall rate. The results indicate that reasonably accurate estimates of rainfall rate and information on the drop size distribution can be derived over ocean under low to moderate wind speed conditions.
Competition in the Dutch hospital sector: an analysis of health care volume and cost.
Krabbe-Alkemade, Y J F M; Groot, T L C M; Lindeboom, M
2017-03-01
This paper evaluates the impact of market competition on health care volume and cost. At the start of 2005, the financing system of Dutch hospitals began a gradual change from a closed-end budgeting system to a non-regulated, price-competitive prospective reimbursement system. The gradual implementation of price competition is a 'natural experiment' that provides a unique opportunity to analyze the effects of market competition on hospital behavior. We have access to a unique database containing hospital discharge data for diagnosis treatment combinations (DBCs) of individual patients, including detailed care activities. Difference-in-difference estimates show that the implementation of market-based competition led to relatively lower total costs, production volume and number of activities overall. Difference-in-difference estimates at the treatment level show that average costs for outpatient DBCs decreased due to a decrease in the number of activities per DBC. The introduction of market competition led to an increase in the average costs of inpatient DBCs. Since neither volume nor number of activities changed significantly, we conclude that the cost increase is likely the result of more expensive activities. A possible explanation for our finding is that hospitals look for efficiency improvements predominantly in outpatient care products that are relatively straightforward, using easily analyzable technologies. The effects of competition on average cost and on the relative shares of inpatient and outpatient treatments at the specialty level are significant but run in opposite directions for cardiology and orthopedics, suggesting that specialties react differently to competitive incentives.
An Analysis of the Navy’s Fiscal Year 2012 Shipbuilding Plan
2011-06-01
Raymond Hall of CBO's Budget Analysis Division produced the cost estimates under the general supervision of Sarah Jennings.
Defense Acquisition Research Journal. Volume 18, Number 2, Issue 58, April 2011
2011-04-01
Please submit your manuscript with references in APA format (author-date-page number form of citation) as outlined in the Publication Manual of the American Psychological Association (6th Edition). For all other style questions, please refer to the Chicago Manual of Style (15th Edition).
Defense AR Journal. Volume 18, Number 1, Issue 57
2011-01-01
Please submit your manuscript with references in APA format (author-date-page number form of citation) as outlined in the Publication Manual of the American Psychological Association (6th Edition). For all other style questions, please refer to the Chicago Manual of Style (15th Edition).
Leveraging Technology to Support the Army’s Net Zero Installation Initiative
2012-05-23
[Chart: projected contributions, 2011-2020, from biomass, wind, geothermal, PV, solar hot water, propane, and the electrical grid; net zero water.]
Assessment of Multiple Scattering Errors of Laser Diffraction Instruments
2003-03-17
Regulation of Prostate Cancer Bone Metastasis by DKK1
2012-09-01
... cells into bone irrevocably alters the bone microenvironment and initiates a skeletal response that is dependent on the type of tumor (1).
Hydraulic Tomography and High-Resolution Slug Testing to ... (University of Kansas Center for Research)
2011-02-01
... Wei; ZHANG Xuefeng (Key Laboratory of State Oceanic Administration for Marine Environmental Information Technology, National Marine Data and ...)
2013-01-01
Electronic Structure Methods Based on Density Functional Theory
2010-01-01
Published as a chapter in the ASM Handbook, Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul ... are represented using a linear combination, or basis, of plane waves. Over time, several methods were developed to avoid the large number of plane waves ...
NASA Astrophysics Data System (ADS)
Mendes, Isabel; Proença, Isabel
2011-11-01
In this article, we apply count-data travel-cost methods to a truncated sample of visitors to estimate the Peneda-Gerês National Park (PGNP) average consumer surplus (CS) for each day of visit. The measurement of recreation demand is highly specific because it is calculated by number of days of stay per visit. We therefore propose the application of altered truncated count-data models or truncated count-data models on grouped data to estimate a single, on-site individual recreation demand function, with the price (cost) of each recreation day per trip equal to out-of-pocket and time travel costs plus out-of-pocket and on-site time costs. We further check the sensitivity of coefficient estimates to alternative models and analyse the precision of the welfare measure using the delta and simulation methods of Creel and Loomis. With simulated limits, CS is estimated to be €194 (range €116 to €448). This information is of use in the quest to improve government policy and PGNP management and conservation as well as promote nature-based tourism. To our knowledge, this is the first attempt to measure the average recreation net benefits of each day of stay generated by a national park by using truncated altered and truncated grouped count-data travel-cost models based on observing the individual number of days of stay.
Continuously Available Battlefield Surveillance
2007-04-01
Fiber Optic Biosensors for Contaminant Monitoring
2005-12-01
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-16
.... 2900- 0111'' in any correspondence. FOR FURTHER INFORMATION CONTACT: Crystal Rennie, Enterprise Records...-7492 or email crystal[email protected] . Please refer to ``OMB Control No. 2900-0111.'' SUPPLEMENTARY... Average Burden Per Respondent: 15 minutes. Frequency of Response: One-time. Estimated Number of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-20
... Control No. 2900- 0216'' in any correspondence. FOR FURTHER INFORMATION CONTACT: Crystal Rennie... 20420, (202) 632-7492 or email crystal[email protected] . Please refer to ``OMB Control No. 2900-0216... Average Burden per Respondent: 30 minutes. Frequency of Response: One time. Estimated Number of...
Estimating Cross-Site Impact Variation in the Presence of Heteroscedasticity
ERIC Educational Resources Information Center
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen
2013-01-01
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
.... The information collected will serve two major purposes. First, as formative research it will provide... communication campaigns. Such knowledge will provide the needed target audience understanding to design... information as follows: Table 1--Estimated Annual Reporting Burden\\1\\ Number of Average burden Survey type...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... materials in the Federal Register. Type of Respondents: Intrastate natural gas and Hinshaw pipelines...: Title: Quarterly Transportation and Storage Report for Intrastate Natural Gas and Hinshaw Pipelines. OMB... Report for Intrastate Natural Gas and Hinshaw Pipelines Number of Average burden Estimated Format of...
Doughty, M J; Müller, A; Zaman, M L
2000-03-01
We sought to determine the variance in endothelial cell density (ECD) estimates for human corneal endothelia. Noncontact specular micrographs were obtained from white subjects without any history of contact lens wear, major eye disease, or surgery; subjects were in four age groups (children, young adults, older adults, senior citizens). The endothelial image was scanned, and the areas of ≥75 cells were measured from an overlay by planimetry. The cell-area values were used to calculate the ECD repeatedly so that the intra- and intersubject variation in an average ECD estimate could be assessed using different numbers of cells (5, 10, 15, etc.). An average ECD of 3,519 cells/mm² (range, 2,598-5,312 cells/mm²) was obtained from counts of 75 cells/endothelium from individuals aged 6-83 years. Average ECD estimates in the four age groups were 4,124, 3,457, 3,360, and 3,113 cells/mm², respectively. Analysis of intersubject variance revealed that ECD estimates would be expected to be no better than ±10% if only 25 cells were measured per endothelium, but approach ±2% if 75 cells are measured. In assessing the corneal endothelium by noncontact specular microscopy, the cell count should be given, and it should be ≥75/endothelium for the expected variance to be at a level close to that recommended for monitoring age-, stress-, or surgery-related changes.
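The shrinking of estimate variance with the number of cells measured can be illustrated with a small simulation. The lognormal cell-area distribution, its 30% spread, and the ~3500 cells/mm² density below are assumptions for illustration, not the study's data:

```python
import math
import random

random.seed(1)

# Assumed (illustrative) cell-area model: lognormal with ~30% spread,
# mean area chosen so the true density is about 3500 cells/mm^2.
SIGMA = 0.3
MU = math.log(1 / 3500) - SIGMA ** 2 / 2

def ecd_estimate(n_cells):
    # ECD estimate = number of cells counted / total measured area (cells/mm^2)
    areas = [random.lognormvariate(MU, SIGMA) for _ in range(n_cells)]
    return n_cells / sum(areas)

def relative_spread(n_cells, trials=2000):
    # Half-width of the central 95% of repeated estimates, relative to the median.
    ests = sorted(ecd_estimate(n_cells) for _ in range(trials))
    lo, hi = ests[int(0.025 * trials)], ests[int(0.975 * trials)]
    return (hi - lo) / 2 / ests[trials // 2]

for n in (5, 25, 75):
    print(n, round(relative_spread(n), 3))  # spread narrows as more cells are measured
```

Under these assumptions the spread falls roughly as 1/√n, qualitatively matching the abstract's observation that 75 cells give far tighter estimates than 25.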
The problem of seeing hospital staff without G.P. referral--the otolaryngological experience.
Walshe, P; McGrain, S; Colgan, G; McShane, D
2001-01-01
Over a three-month period, a record was kept of the number of hospital staff who approached the E.N.T. team requesting help for a medical problem. Staff members included doctors, nurses, clerical staff, paramedical staff and porters. The total number of employees in the hospital was recorded. The average General Practitioner public patient list (General Medical Services cardholders) for South Dublin was recorded (our hospital is in south west Dublin). The total number of hospital staff seen by E.N.T. in 3 months was seventy-seven. The total number of hospital staff seen by other surgical specialties was approximately one hundred and sixty-seven. Extrapolating the numbers seen by the E.N.T. service in three months to a one-year period gives 308 patients. The number seen by the E.N.T. service in three months corresponds to 11.7% of the average South Dublin General Practitioner medical card list. It has been estimated that approximately 20% of all problems the average General Practitioner sees in a week are E.N.T. related. Practices with a smaller paediatric population would have approximately 15% of the total practice concerned with E.N.T. problems. Therefore, as 15% of 2,400 (total hospital staff) = 360, there is potentially a small General Practice 'hidden' within the hospital.
The Spots and Activity of Stars in the Beehive Cluster Observed by the Kepler Space Telescope (K2)
NASA Astrophysics Data System (ADS)
Savanov, I. S.; Kalinicheva, E. S.; Dmitrienko, E. S.
2018-05-01
The spottedness parameters S (the fraction of the visible surface of the star occupied by spots) characterizing the activity of 674 stars in the Beehive Cluster (age 650 Myr) are estimated, together with variations of this parameter as a function of the rotation period, Rossby number Ro, and other characteristics of the stars. The activity of the stars in this cluster is lower than the activity of stars in the younger Pleiades (125 Myr). The average S value for the Beehive Cluster stars is 0.014, while Pleiades stars have the much higher average value 0.052. The activity parameters of 61 solar-type stars in the Beehive Cluster, similar Hyades stars (of about the same age), and stars in the younger Pleiades are compared. The average S value of such objects in the Beehive Cluster is 0.014 ± 0.008, nearly coincident with the estimate obtained for solar-type Hyades stars. The rotation periods of these objects are 9.1 ± 3.4 d, on average, in agreement with the average rotation period of the Hyades stars (8.6 d). Stars with periods exceeding 3-4 d are more numerous in the Beehive Cluster than in the Pleiades, and their periods have a larger range, 3-30 d. The characteristic dependence with a kink at Ro(saturation) = 0.13 is not observed in the S-Rossby number diagram for the Beehive and Hyades stars, only a clump of objects with Rossby numbers Ro > 0.7. The spottedness data for the Beehive Cluster and Hyades stars are in good agreement with the S values for dwarfs with ages of 600-700 Myr. This provides evidence for the reliability of the results of gyrochronological calibrations. The data for the Beehive and Pleiades stars are used to analyze variations in the spot-forming activity for a large number of stars of the same age that are members of a single cluster. A joint consideration of the data for two clusters can be used to draw conclusions about the time evolution of the activity of stars of different masses (over a time interval of the order of 500 Myr).
The Problem With Estimating Public Health Spending.
Leider, Jonathon P
2016-01-01
Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated that all state governmental agencies, including state health agencies, spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting LHDs spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria--most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates.
Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to meaningfully systematize reporting of public health spending and revenue.
The number of privately treated tuberculosis cases in India: an estimation from drug sales data.
Arinaminpathy, Nimalan; Batra, Deepak; Khaparde, Sunil; Vualnam, Thongsuanmung; Maheshwari, Nilesh; Sharma, Lokesh; Nair, Sreenivas A; Dewan, Puneet
2016-11-01
Understanding the amount of tuberculosis managed by the private sector in India is crucial to understanding the true burden of the disease in the country, and thus globally. In the absence of quality surveillance data on privately treated patients, commercial drug sales data offer an empirical foundation for disease burden estimation. We used a large, nationally representative commercial dataset on sales of 189 anti-tuberculosis products available in India to calculate the amount of anti-tuberculosis treatment in the private sector in 2013-14. We corrected estimates using validation studies that audited prescriptions against tuberculosis diagnosis, and estimated uncertainty using Monte Carlo simulation. To address implications for numbers of patients with tuberculosis, we explored varying assumptions for average duration of tuberculosis treatment and accuracy of private diagnosis. There were 17.793 million patient-months (95% credible interval 16.709 million to 19.841 million) of anti-tuberculosis treatment in the private sector in 2014, twice as many as in the public sector. If 40-60% of private-sector tuberculosis diagnoses are correct, and if private-sector tuberculosis treatment lasts on average 2-6 months, this implies that 1.19-5.34 million tuberculosis cases were treated in the private sector in 2014 alone. The midpoint of these ranges yields an estimate of 2.2 million cases, two to three times higher than currently assumed. India's private sector is treating an enormous number of patients for tuberculosis, appreciably higher than has been previously recognised. Accordingly, there is a redoubled need to address this burden and to strengthen surveillance. Tuberculosis burden estimates in India and worldwide require revision. Funding: Bill & Melinda Gates Foundation. Copyright © 2016 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY license.
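The case-count range quoted in this abstract follows from simple arithmetic on the reported treatment volume; a sketch of that calculation, using only the figures the abstract itself gives:

```python
PATIENT_MONTHS = 17.793e6  # private-sector anti-TB treatment in 2014, from the abstract

def private_tb_cases(diagnosis_accuracy, months_per_course):
    # True TB cases implied by the treatment volume:
    # courses dispensed = patient-months / course length, scaled by diagnostic accuracy.
    return PATIENT_MONTHS * diagnosis_accuracy / months_per_course

low = private_tb_cases(0.40, 6)   # low accuracy, long courses -> fewest true cases
high = private_tb_cases(0.60, 2)  # high accuracy, short courses -> most true cases
mid = private_tb_cases(0.50, 4)   # midpoint assumptions
print(round(low / 1e6, 2), round(high / 1e6, 2), round(mid / 1e6, 1))  # → 1.19 5.34 2.2
```

The three assumption pairs reproduce the abstract's 1.19-5.34 million range and its 2.2 million midpoint estimate.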
Image analysis-based modelling for flower number estimation in grapevine.
Millan, Borja; Aquino, Arturo; Diago, Maria P; Tardaguila, Javier
2017-02-01
Grapevine flower number per inflorescence provides valuable information that can be used for assessing yield. Considerable research has been conducted toward developing a technological tool based on image analysis and predictive modelling. However, the behaviour of variety-independent predictive models and their yield prediction capabilities on a wide set of varieties has never been evaluated. Inflorescence images from 11 grapevine Vitis vinifera L. varieties were acquired under field conditions. The flower number per inflorescence and the flower number visible in the images were calculated manually, and automatically using an image analysis algorithm. These datasets were used to calibrate and evaluate the behaviour of two linear (single-variable and multivariable) models and a nonlinear variety-independent model. As a result, the integrated tool composed of the image analysis algorithm and the nonlinear approach showed the highest performance and robustness (RPD = 8.32, RMSE = 37.1). The yield estimation capabilities of the flower number in conjunction with fruit set rate (R² = 0.79) and average berry weight (R² = 0.91) were also tested. This study proves the accuracy of flower number per inflorescence estimation using an image analysis algorithm and a nonlinear model that is generally applicable to different grapevine varieties. This provides a fast, non-invasive and reliable tool for estimation of yield at harvest. © 2016 Society of Chemical Industry.
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to adjust for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
Robust estimation of event-related potentials via particle filter.
Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito
2016-03-01
In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
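A minimal bootstrap particle filter for a random-walk trend observed in noise gives the flavour of the approach. It is a simplified stand-in for the paper's three-component model (trend ERP, autoregressive background, noise); all parameters and the constant test signal are illustrative.

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=400,
                              process_sd=0.1, obs_sd=0.5):
    """Minimal bootstrap particle filter tracking a slowly varying trend
    (a stand-in for the ERP trend component) in noisy single-trial data.
    Returns the filtered posterior mean at each time step."""
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # Propagate each particle through a random-walk trend model.
        particles = [p + random.gauss(0.0, process_sd) for p in particles]
        # Weight particles by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        # Multinomial resampling keeps particles in high-likelihood regions.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

random.seed(1)
# Track a constant "ERP amplitude" of 1.0 over 80 unit observations.
filtered = bootstrap_particle_filter([1.0] * 80)
```

The 400-particle default mirrors the particle count the abstract reports as producing the best MSE.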
Relationships between Perron-Frobenius eigenvalue and measurements of loops in networks
NASA Astrophysics Data System (ADS)
Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Chang, Yizhe
2018-07-01
The Perron-Frobenius eigenvalue (PFE) is widely used as a measurement of the number of loops in networks, but its exact relationship to the number of loops has not yet been investigated: is it strictly monotonically increasing? And how is the PFE related to other measurements of loops in networks, such as the average loop degree of nodes and the distribution of loop ranks? We investigate these questions using samples of ER random networks, NW small-world networks and BA scale-free networks. The results confirm that both the number of loops in a network and the average loop degree of nodes increase with the PFE in general trend, but neither is strictly monotonically increasing, so the PFE can serve as a rough estimative measurement of the number of loops in a network and of the average loop degree of nodes. Furthermore, we find that the loop ranks of a majority of the samples obey a Weibull distribution, whose scale parameter A and shape parameter B have approximate power-law relationships with the PFE of the samples.
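The PFE itself is the spectral radius of the network's adjacency matrix, which can be estimated by power iteration for well-behaved (primitive) non-negative matrices. A minimal sketch; the triangle network used for the check is illustrative:

```python
def pfe_power_iteration(adj, iters=200):
    """Estimate the Perron-Frobenius eigenvalue (spectral radius) of a
    non-negative adjacency matrix by power iteration."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0  # no loops reachable: spectral radius is zero
        v = [x / lam for x in w]
    return lam

# Triangle network (3-node complete graph): contains loops, and its PFE is 2.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

A loop-free network (e.g. a single isolated node) has PFE zero, consistent with the eigenvalue's use as a loop indicator.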
Lichtenberg, Frank R
2015-04-01
In Slovenia during the period 2000-2010, the number of years of potential life lost before the age of 70 years per 100,000 population under 70 years of age declined 25%. The aim of this study was to test the hypothesis that pharmaceutical innovation played a key role in reducing premature mortality from all diseases in Slovenia, and to examine the effects of pharmaceutical innovation on the age-standardized number of cancer deaths and on hospitalization from all diseases. Estimates and other data were used to calculate the incremental cost-effectiveness of pharmaceutical innovation in Slovenia. Longitudinal disease-level data were analyzed to determine whether diseases for which there was greater pharmaceutical innovation (a larger increase in the number of new chemical entities, or NCEs, previously launched) had larger declines in premature mortality, the age-standardized number of cancer deaths, and the number of hospital discharges. My methodology controls for the effects of macroeconomic trends and overall changes in the healthcare system. Premature mortality from a disease is inversely related to the number of NCEs launched more than 5 years earlier. On average, the introduction of an additional NCE for a disease reduced premature mortality from the disease by 2.4% 7 years later. The age-standardized number of cancer deaths is inversely related to the number of NCEs launched 1-6 years earlier, conditional on the age-standardized number of new cancer cases diagnosed 0-2 years earlier. On average, the launch of an NCE reduced the number of hospital discharges 1 year later by approximately 1.5%. The estimates imply that approximately two-thirds of the 2000-2010 decline in premature mortality was due to pharmaceutical innovation. If no NCEs had been launched in Slovenia during 1992-2003, the age-standardized number of cancer deaths in 2008 would have been 12.2% higher.
The NCEs launched in Slovenia during 2003-2009 are estimated to have reduced the number of hospital discharges in 2010 by 7 %. If we assume that pharmaceutical expenditure was the only type of expenditure affected by pharmaceutical innovation, the cost per life-year saved was
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction whose population is concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population within a 45 min response time threshold or to minimize the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimized subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimized for coverage, or reach the entire population of the state within an average of 21 min if optimized for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached the entire population within an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimized based either on the percentage of the population covered or on the average response time to the entire population.
We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Adding specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
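The maximal covering location problem named above can be approximated with a simple greedy heuristic, sketched below. The study solves the problem with exact optimization models, so this is only an illustration of the problem structure; the candidate bases and populations are hypothetical.

```python
def greedy_max_coverage(cover_sets, populations, n_bases):
    """Greedy heuristic for the maximal covering location problem: repeatedly
    open the candidate base that adds the most not-yet-covered population
    within the response-time threshold."""
    covered, chosen = set(), []
    for _ in range(n_bases):
        best = max(cover_sets, key=lambda b: sum(populations[a]
                                                 for a in cover_sets[b]
                                                 if a not in covered))
        chosen.append(best)
        covered |= set(cover_sets[best])
    return chosen, sum(populations[a] for a in covered)

# Hypothetical data: three candidate bases, four population areas.
bases = {"coastal": [0, 1], "inland": [1, 2], "remote": [3]}
people = [10, 5, 20, 1]
chosen, reached = greedy_max_coverage(bases, people, n_bases=2)
```

The greedy rule mirrors the trade-off in the abstract: each added base buys the largest marginal gain in covered population, so gains diminish as more bases open.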
Coatings of black carbon in Tijuana, Mexico, during the CalMex Campaign
NASA Astrophysics Data System (ADS)
Takahama, S.; Russell, L. M.; Duran, R.; Subramanian, R.; Kok, G.
2010-12-01
Black carbon number and mass concentrations were measured by a single-particle soot photometer (SP2; Droplet Measurement Technologies) in Tijuana, Mexico between May 15, 2010, and June 30, 2010, for the CalMex campaign. The measurement site, Parque Morelos, is a recreational area located in the southeast region of Tijuana. The SP2 was equipped with eight channels of signal detection that span a wider range of sensitivity for incandescing and scattering measurements than traditional configurations. The campaign-average number concentration of incandescing particles was 280 #/cc, peaking during traffic activity in the mornings. Incandescing particles made up 50% of all particles (incandescing and purely scattering) detected by the SP2. The mode of the number size distribution estimated for black carbon, in terms of estimated mass-equivalent diameters, was approximately 100 nm or smaller. Temporal variations in estimated coating thicknesses for these black carbon particles are discussed together with co-located measurements of organic aerosol and inorganic salts.
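The mass-equivalent diameters mentioned above follow from the SP2's per-particle mass measurements by assuming a void-free sphere. A sketch, where the density of 1.8 g/cm³ is a commonly assumed value for black carbon, not one stated in the abstract:

```python
import math

def mass_equivalent_diameter_nm(mass_fg, density_g_cc=1.8):
    """Mass-equivalent diameter (nm) of a black-carbon particle from its
    SP2-measured mass (femtograms), assuming a void-free sphere; the
    density of 1.8 g/cm^3 is an assumed value, not from the abstract."""
    volume_cm3 = mass_fg * 1e-15 / density_g_cc
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7  # convert cm to nm
```

Under these assumptions, a particle of roughly 0.94 fg maps to a 100 nm mass-equivalent diameter, the mode reported in the abstract.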
Estimation of total bacteria by real-time PCR in patients with periodontal disease.
Brajović, Gavrilo; Popović, Branka; Puletić, Miljan; Kostić, Marija; Milasin, Jelena
2016-01-01
Periodontal diseases are associated with elevated levels of bacteria within the gingival crevice. The aim of this study was to evaluate the total amount of bacteria in subgingival plaque samples from patients with periodontal disease. Quantitative evaluation of the total bacterial amount using quantitative real-time polymerase chain reaction (qRT-PCR) was performed on 20 samples from patients with ulceronecrotic periodontitis and on 10 samples from healthy subjects. The estimation of the total bacterial amount was based on the gene copy number for 16S rRNA, determined by comparison with the Ct values/gene copy numbers of the standard curve. A statistically significant difference between the average gene copy number of total bacteria in periodontal patients (2.55 x 10⁷) and healthy controls (2.37 x 10⁶) was found (p = 0.01). A trend toward higher gene copy numbers in deeper periodontal lesions (> 7 mm) was also supported by a positive coefficient of correlation (r = 0.073). The quantitative estimation of total bacteria based on gene copy number could be an important additional tool in diagnosing periodontitis.
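The standard-curve step can be sketched as the usual qPCR conversion from Ct to copy number. The slope and intercept below are hypothetical values for a near-perfect-efficiency assay, not the study's calibration:

```python
def copies_from_ct(ct, slope, intercept):
    """Convert a qPCR Ct value to a 16S rRNA gene copy number via the
    standard curve Ct = slope * log10(copies) + intercept, with slope and
    intercept fitted from serial dilutions of a known standard."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical near-perfect-efficiency curve: slope -3.32, intercept 38.
single_copy_ct = copies_from_ct(38.0, -3.32, 38.0)
```

With a slope of -3.32, each 3.32-cycle decrease in Ct corresponds to a tenfold increase in starting copies, which is how the order-of-magnitude difference between patients and controls is resolved.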
The cost of resident scholarly activity and its effect on resident clinical experience.
Schott, Nicholas J; Emerick, Trent D; Metro, David G; Sakai, Tetsuro
2013-11-01
Scholarly activity is an important aspect of the academic training of future anesthesiologists. However, residents' scholarly activity may reduce training caseloads and increase departmental costs. We conducted this study within a large academic anesthesiology residency program with data from the 4 graduating classes of 2009 through 2012. Scholarly activity included peer-reviewed manuscripts, case reports, poster presentations at conferences, book chapters, or any other publications. No distinction was made between residents serving as principal investigator or as a coinvestigator on a project. The following data were collected on each resident: months spent on a resident research rotation, number of scholarly projects completed, number of research conferences attended, and Accreditation Council for Graduate Medical Education case entries. Comparison was made between residents who elected a resident research rotation and those who did not for (1) scholarly projects, (2) research conference attendance, and (3) Accreditation Council for Graduate Medical Education case numbers. Cost to the department for extra clinical coverage during residents' time spent on research activities was calculated using an estimated average cost of $675 ± $176 (mean ± SD) per day, based on local certified registered nurse anesthetist pay scales. Sixty-eight residents were included in the analyses. Twenty-four residents (35.3%) completed resident research rotations with an average duration of 3.7 months. Residents who elected resident research rotations completed more scholarly projects (5 projects [4-6]: median [25%-75% interquartile range] vs 2 [0-3]; P < 0.0001), attended more research conferences (2 conferences [2-4] vs 1 [0-2]; P < 0.0001), but experienced fewer cases (980 cases [886-1333] vs 1182 [930-1420]; P ≤ 0.002) compared with those who did not elect resident research rotations.
The estimated average cost to the department per resident who elected a resident research rotation was $13,500 ± $9724 per month. The average resident time length away from duty for conference attendance was 3.2 ± 0.2 days, with an average cost to the department of $2160 ± $565. The average annual departmental expense for resident conference travel was an additional $1424 ± $133 per resident, as calculated from reimbursement data. Together, the estimated departmental cost for resident scholarly activity during the residency training period was $27,467 ± $20,153 per resident. Residents' scholarly activities require significant departmental financial support. Residents who elected to spend months conducting research completed significantly more scholarly projects but experienced fewer clinical cases.
Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation
NASA Astrophysics Data System (ADS)
Medina, H.; Mutu, R.
2017-07-01
An algorithm to estimate internal loads in single-row angular contact ball bearings under externally applied thrust loads and high operating speeds is presented. A new runtime-adaptive relaxation procedure and blending function is proposed that ensures algorithm stability while also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time in excess of 80%. The model is validated against a 218 angular contact bearing and shows excellent agreement with published results.
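A generic sketch of runtime-adaptive relaxation for a fixed-point iteration, not the authors' bearing-specific algorithm: the relaxation factor is halved when the residual grows and cautiously expanded when it shrinks, which is the stability-versus-speed trade the abstract describes.

```python
import math

def relaxed_fixed_point(f, x0, omega=1.0, tol=1e-10, max_iter=500):
    """Fixed-point iteration x <- (1 - omega) * x + omega * f(x) with a
    simple runtime-adaptive relaxation: halve omega when the residual
    grows, cautiously expand it (capped at 1) when the residual shrinks."""
    x, prev_res = x0, float("inf")
    for i in range(max_iter):
        res = abs(f(x) - x)
        if res < tol:
            return x, i
        omega = 0.5 * omega if res > prev_res else min(1.0, 1.1 * omega)
        x = (1.0 - omega) * x + omega * f(x)
        prev_res = res
    return x, max_iter

# Converges to the fixed point of cos(x), near 0.739.
root, n_iter = relaxed_fixed_point(math.cos, 1.0)
```

Under-relaxation (omega below 1) damps oscillatory or diverging updates; letting omega recover toward 1 avoids paying that damping cost once the iteration is stable.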
Translational Regulation of PTEN/MMAC1 Expression in Prostate Cancer
2005-05-01
... responsible for constitutive production of the PTEN mRNAs with shorter 5'-UTRs in prostate cancer cells, which would be compatible with cap-dependent...
Air Force Civil Engineer, Volume 15, Number 4, 2007, 2007 Almanac
2007-01-01
Automated Techniques for Rapid Analysis of Momentum Exchange Devices
2013-12-01
It allows a vector expressed in one frame to be differentiated with respect to another frame [1, pg. 12]: the derivative of a vector v taken in frame N equals its derivative taken in frame a, plus the angular velocity of frame a relative to N crossed with v.
Planning with Imperfect Information: Interceptor Assignment
2006-06-01
NASA Astrophysics Data System (ADS)
Patil, Vishal; Liburdy, James
2012-11-01
Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore scale time resolved PIV measurements in a refractive index matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore average velocity, were varied from 400-4000. Jet-like flows and recirculation regions associated with large scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to global transport properties of the bed. The heterogeneity present within a pore and also from pore to pore can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, along with the method of volume averaging were used to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number and molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.
Fiscal Year 2010 Budget Request. Summary Justification
2009-05-01
Risk assessment of aircraft noise on sleep in Montreal.
Tétreault, Louis-Francois; Plante, Céline; Perron, Stéphane; Goudreau, Sophie; King, Norman; Smargiassi, Audrey
2012-05-24
The objective was to estimate the number of awakenings, additional to spontaneous awakenings, induced by nighttime aircraft movements at an international airport in Montreal among the population residing nearby in 2009. Maximum sound levels (LAS,max) were derived from aircraft movements using the Integrated Noise Model 7.0b, on a 28 x 28 km grid centred on the airport with a 0.1 x 0.1 km resolution. Outdoor LAS,max values were converted to indoor LAS,max by reducing noise levels by 15 dB(A) or 21 dB(A). For all grid points, LAS,max values were transformed into probabilities of additional awakening using a function developed by Basner et al. (2006). The probabilities of additional awakening were linked to estimated numbers of exposed residents at each grid location to assess the number of aircraft-noise-induced awakenings in Montreal. Using a 15 dB(A) sound attenuation, 590 persons would, on average, have one or more additional awakenings per night for the year 2009. In the scenario using a 21 dB(A) sound attenuation, on average, no one would be subjected to one or more additional awakenings per night due to aircraft noise. Using the 2009 flight patterns, our data suggest that a small number of Montreal residents are exposed to noise levels that could induce one or more awakenings additional to spontaneous awakenings per night.
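Combining per-overflight awakening probabilities into a per-night risk, as the grid calculation above requires, can be sketched as follows. The Basner et al. exposure-response coefficients are not given in the abstract, so the per-event probabilities are treated as inputs, and independence between noise events is an assumption:

```python
def prob_at_least_one_awakening(event_probs):
    """Probability of at least one additional awakening in a night, given
    the per-overflight awakening probabilities and assuming independent
    noise events: 1 - product(1 - p_k)."""
    p_none = 1.0
    for p in event_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

def expected_people_awakened(grid_cells):
    """Sum the per-night risk over grid cells; grid_cells is a list of
    (residents, per-event awakening probabilities) pairs."""
    return sum(n * prob_at_least_one_awakening(ps) for n, ps in grid_cells)
```

Summing this risk over grid cells, weighted by the number of exposed residents, yields population figures of the kind reported (e.g. 590 persons under the 15 dB(A) attenuation scenario).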
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning’s “n”) were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate.
Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet was related to the two-dimension model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature. Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. 
The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
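A sketch of how a target Peclet number translates into an eddy-viscosity coefficient, assuming the commonly used grid-Peclet definition Pe = ρ·u·Δx/E; the exact RMA2 convention and the example values are assumptions, not taken from the report:

```python
def eddy_viscosity(rho, speed, dx, peclet):
    """Eddy viscosity E implied by a target grid Peclet number, using
    Pe = rho * u * dx / E (the definition assumed here; treat the exact
    RMA2 convention as an assumption)."""
    return rho * speed * dx / peclet

# Water (1000 kg/m^3), 1 m/s depth-averaged velocity, 100 m element size,
# and a target Peclet number of 40 (upper end of the 15-40 range).
e_coeff = eddy_viscosity(1000.0, 1.0, 100.0, 40.0)
```

Because E scales with local velocity and element size, a single calibrated Peclet number lets the model assign spatially varying eddy-viscosity coefficients, which is why the reports quote Peclet numbers rather than viscosities.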
A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Junghyun; Hayward, Chris; Zeiler, Cleat
Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacing. The four-hour time sequence contained a number of easily identified signals under noise conditions whose average RMS amplitudes varied from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, large aperture, small aperture combined with the large aperture, and full array. The full and combined arrays performed the best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced similar results to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both the automated detectors and the analysts produced fewer detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD gave the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. The detection probability was impacted the most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.
Industrial accident compensation insurance benefits on cerebrovascular and heart disease in Korea.
Kim, Hyeong Su; Choi, Jae Wook; Chang, Soung Hoon; Lee, Kun Sei
2003-01-01
The purpose of this study is to present the importance of work-related cerebrovascular and heart disease from the viewpoint of expenses. Using the insurance benefits paid for 4,300 cases, this study estimated the burden of insurance benefits spent on work-related cerebrovascular and heart disease. The number of cases of work-related cerebrovascular and heart disease per 100,000 insured workers was 3.36 in 1995, increasing to 13.16 in 2000. By the days of occurrence, the estimated number of cases was 1,336 in 2001 (95% CI: 1,211-1,460 cases) and 1,769 in 2005 (CI: 1,610-1,931 cases). The estimated average insurance benefit paid per person with work-related cerebrovascular and heart disease was 75-19 million won for medical care benefit and 56 million won for other benefits except medical care. Considering the increase in insurance payments and average pay, the predicted insurance benefits for work-related cerebrovascular and heart disease were 107.9 billion won for the 2001 cohort and 192.4 billion won for the 2005 cohort. From an economic perspective, these results will serve as important evidence for the prevention and management of work-related cerebrovascular and heart disease. PMID:12923322
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
The aim was to evaluate the influence of temperature and humidity on drug stability by an initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, drug degradation extent, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
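The temperature dependence underlying such kinetic estimates is conventionally modelled with the Arrhenius equation. A sketch of recovering the activation energy from degradation rate constants at two temperatures; this is the standard relation, not the paper's combined humidity-temperature model, and the 80 kJ/mol figure is synthetic:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, t1, k2, t2):
    """Arrhenius estimate of the activation energy from degradation rate
    constants measured at two absolute temperatures:
    ln(k2/k1) = -(Ea/R) * (1/t2 - 1/t1)."""
    return -R * math.log(k2 / k1) / (1.0 / t2 - 1.0 / t1)

# Round-trip check with a synthetic Ea of 80 kJ/mol at 25 and 40 degrees C.
ea_true = 80000.0
k1 = math.exp(-ea_true / (R * 298.0))
k2 = math.exp(-ea_true / (R * 313.0))
```

With the activation energy in hand, rates measured under accelerated conditions can be extrapolated to the near-room-temperature conditions the abstract recommends centring the design on.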
Karki, Surendra; Newall, Anthony T; MacIntyre, C Raina; Heywood, Anita E; McIntyre, Peter; Banks, Emily; Liu, Bette
2016-01-01
Herpes zoster (HZ) is a common condition whose incidence increases with older age, but vaccines are available to prevent the disease. However, there are limited data estimating the health system burden attributable to herpes zoster by age. In this study, we quantified excess healthcare resource usage associated with HZ during the acute/sub-acute period of disease (21 days before to 90 days after onset) in 5952 cases and an equal number of controls matched on age, sex, and prior healthcare resource usage. Estimates were adjusted for potential confounders in multivariable regression models. Using population-based estimates of HZ incidence, we calculated the age-specific excess number of health service usage events attributable to HZ in the population. Per HZ case, there was an average of 0.06 (95% CI 0.04-0.08) excess hospitalisations, 1.61 (95% CI 1.51-1.69) excess general practitioner visits, 1.96 (95% CI 1.86-2.15) excess prescriptions filled and 0.11 (95% CI 0.09-0.13) excess emergency department visits. The average number of healthcare resource use events, and the estimated excess per 100,000 population, increased with increasing age but were similar for men and women, except for higher rates of hospitalisation in men. The excess annual HZ-associated burden of hospitalisations was highest in adults ≥80 years (N = 2244, 95% CI 1719-2767); GP visits were highest in those 60-69 years (N = 50567, 95% CI 39958-61105); prescriptions and ED visits were highest at 70-79 years (N = 50524, 95% CI 40634-60471 and N = 2891, 95% CI 2319-3449, respectively). This study provides important data establishing the healthcare utilisation associated with HZ, against which detailed cost-effectiveness analyses of HZ immunisation in older adults can be conducted.
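Scaling the per-case excess usage to a population burden, as the study does, is the product of the per-case excess and the expected case count. A sketch with hypothetical incidence and population figures (the 1.61 excess GP visits per case is from the abstract; the rest are placeholders):

```python
def attributable_events(excess_per_case, incidence_per_1000, population):
    """Population burden = (excess events per HZ case) x (expected number
    of HZ cases, from the incidence rate and the population size)."""
    cases = incidence_per_1000 / 1000.0 * population
    return excess_per_case * cases
```

For example, an assumed incidence of 10 HZ cases per 1000 person-years in a population of 100,000 would translate the 1.61 excess GP visits per case into roughly 1610 excess visits per year.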
Crowe, Sonya; Utley, Martin; Walker, Guy; Grove, Peter; Pagel, Christina
2011-07-12
A mathematical model has been developed for the purpose of evaluating vaccination against pneumococcus as a countermeasure against pandemic influenza. As the characteristics of a future pandemic cannot be known in advance, three distinct pandemic scenarios were considered, corresponding to a 1918-like pandemic, a 1957/1968-like pandemic and a 2009-like pandemic. Model estimates for each of these pandemic scenarios are presented for two options of vaccination programme: universal vaccination of the entire UK population, and vaccination only of those people considered to be at heightened risk of developing influenza complications. We find that the benefits of each option (in terms of the estimated number of deaths and hospital admissions avoided and the courses of antibiotics saved) are high in a 1918-like pandemic and very small in a 2009-like pandemic. Given that the decision regarding deployment of the countermeasure would occur prior to knowledge of the flu-strain characteristics being available, we also present the weighted average of the outcomes from the three pandemic scenarios. Based on the historical occurrence of pandemics over the last 100 years, the weighted average of outcomes is an estimated 1400 deaths prevented by the universal vaccination option and 400 deaths saved by the targeted vaccination option (at a cost of approximately 400 million and 50 million courses of vaccine respectively). Finally, the longer term implications of using pneumococcal polysaccharide vaccine (PPV) as a countermeasure against pandemic influenza have been considered by estimating the expected number of courses of vaccine bought and the expected number of deaths and hospital admissions prevented over time under each policy. Copyright © 2011 Elsevier Ltd. All rights reserved.
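The scenario-weighted average can be sketched directly. The outcome values and weights below are placeholders, not the paper's historically derived figures:

```python
def weighted_outcome(outcomes, weights):
    """Expected benefit across pandemic scenarios, weighting each scenario's
    outcome by its assumed probability of occurrence."""
    total_w = sum(weights)
    return sum(o * w for o, w in zip(outcomes, weights)) / total_w

# Placeholder deaths-prevented figures for 1918-like, 1957/1968-like and
# 2009-like scenarios, with placeholder scenario weights.
expected_deaths_prevented = weighted_outcome([100.0, 10.0, 1.0],
                                             [1.0, 2.0, 1.0])
```

This captures the decision logic in the abstract: since deployment precedes knowledge of the strain, the policy is judged on its probability-weighted benefit, not on any single scenario.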
Hellwig, B
2000-02-01
This study provides a detailed quantitative estimate for local synaptic connectivity between neocortical pyramidal neurons. A new way of obtaining such an estimate is presented. In acute slices of the rat visual cortex, four layer 2 and four layer 3 pyramidal neurons were intracellularly injected with biocytin. Axonal and dendritic arborizations were three-dimensionally reconstructed with the aid of a computer-based camera lucida system. In a computer experiment, pairs of pre- and postsynaptic neurons were formed and potential synaptic contacts were calculated. For each pair, the calculations were carried out for a whole range of distances (0 to 500 µm) between the presynaptic and the postsynaptic neuron, in order to estimate cortical connectivity as a function of the spatial separation of neurons. It was also differentiated whether neurons were situated in the same or in different cortical layers. The data thus obtained were used to compute connection probabilities, the average number of contacts between neurons, the frequency of specific numbers of contacts and the total number of contacts a dendritic tree receives from the surrounding cortical volume. Connection probabilities ranged from 50% to 80% for directly adjacent neurons and from 0% to 15% for neurons 500 µm apart. In many cases, connections were mediated by one contact only. However, close neighbors made on average up to 3 contacts with each other. The question as to whether the method employed in this study yields a realistic estimate of synaptic connectivity is discussed. It is argued that the results can be used as a detailed blueprint for building artificial neural networks with a cortex-like architecture.
Maximum likelihood estimation of linkage disequilibrium in half-sib families.
Gomez-Raya, L
2012-05-01
Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ=0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium.
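The half-sib maximum likelihood machinery itself is involved, but the quantities being estimated are standard. As a minimal sketch (not the paper's method), the disequilibrium coefficient and r² follow directly once haplotype and allele frequencies are in hand; the paper's contribution is estimating those frequencies correctly within half-sib families.

```python
# Minimal sketch of the linkage-disequilibrium quantities estimated in
# the study: D = p_AB - p_A * p_B and r^2 = D^2 / (p_A q_A p_B q_B).
# Haplotype frequencies are assumed known here; the half-sib method
# estimates them by maximum likelihood from family-structured data.

def ld_stats(p_AB, p_A, p_B):
    D = p_AB - p_A * p_B
    r2 = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    return D, r2

# Example matching the simulated scenario in the abstract: delta = 0.10
# with both markers at intermediate (0.5) allele frequencies.
D, r2 = ld_stats(p_AB=0.35, p_A=0.5, p_B=0.5)
# D = 0.35 - 0.25 = 0.10; r2 = 0.01 / 0.0625 = 0.16
```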
NASA Astrophysics Data System (ADS)
Wyss, M.
2012-12-01
Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the ultimate location by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss-estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the ultimate epicenter released by INGV had been used. These four numbers, being small, do not differ statistically. Thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV has reported intensities with I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculated values in the 17 instances with significant differences were too high on average by one unit. By assuming higher than average attenuation, within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher than average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported ones. Thus, adjusting the attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two.
The source of the fatalities is perplexing: most fatalities occurred in industrial facilities, where few workers are present at 4 AM, while the vast majority of the population at home survived. QLARM contains a function modeling the occupancy rate of residential buildings as a function of the hour of day. The possibility that two-year-old industrial plants may collapse and kill workers within a stone's throw of abandoned, old, brick farmhouses that do not collapse, as happened near Sant'Agostino on 20 May 2012, is not considered in QLARM or any other loss-estimating method. The dismal performance of the many new industrial plants in Emilia Romagna, which collapsed, lost their roofs, or lost their walls, shows that regional building practices can remain hidden from the world community trying to estimate earthquake risk, and can lead to surprises and unnecessary fatalities.
Rakhmanin Yu A; Shibanov, S E; Kozulya, S V
2016-01-01
The purpose of this work was to compare morbidity among residents who do and do not regularly clean their split-system air conditioners. Information on morbidity was collected for 235 people over 3 years. Using split systems without regular cleaning increased the prevalence of respiratory diseases by 172.7% compared with persons who have no air conditioning at home; the average number of disability days increased by 218.1% and the average duration of disease by 71.9%. Annual treatment of split systems and regular cleaning of filters reduced the number of diseases: compared with the group of people who do not clean their air conditioning systems, morbidity fell by 56.6%, the average number of disability days by 63.3%, and the average duration of disease by 30.9%. Regular treatment of air conditioning systems cannot, however, return morbidity to the level of the control group: compared with people who use no air conditioning, owners of regularly treated split systems have respiratory diseases 18.4% more often, and the average number of disability days and average duration of disease are higher by 16.9% and 18.8%, respectively. These changes can be explained by the impact of an unfavorable (cooling) microclimate. The impact of split systems on population health requires comprehensive study and the subsequent development of normative documents regulating their safe use.
Radiation Dose and Cancer Risk Estimates in 16-Slice Computed Tomography Coronary Angiography
Einstein, Andrew J.; Sanz, Javier; Dellegrottaglie, Santo; Milite, Margherita; Sirol, Marc; Henzlova, Milena; Rajagopalan, Sanjay
2008-01-01
Background Recent advances have led to a rapid increase in the number of computed tomography coronary angiography (CTCA) studies performed. While several studies have reported effective dose (E), there is no data available on cancer risk for current CTCA protocols. Methods and Results E and organ doses were estimated, using scanner-derived parameters and Monte Carlo methods, for 50 patients having 16-slice CTCA performed for clinical indications. Lifetime attributable risks (LARs) were estimated with models developed in the National Academies’ Biological Effects of Ionizing Radiation VII report. E of a complete CTCA averaged 9.5 mSv, while that of a complete study, including calcium scoring when indicated, averaged 11.7 mSv. Calcium scoring increased E by 25%, while tube current modulation reduced it by 34% and was more effective at lower heart rates. Organ doses were highest to the lungs and female breast. LAR of cancer incidence from CTCA averaged approximately 1 in 1600, but varied widely between patients, being highest in younger women. For all patients, the greatest risk was from lung cancer. Conclusions CTCA is associated with non-negligible risk of malignancy. Doses can be reduced by careful attention to scanning protocol. PMID:18371595
ERIC Educational Resources Information Center
Temple, Viviene A.; Stanish, Heidi I.
2009-01-01
Pedometers are objective, inexpensive, valid, and reliable measures of physical activity. The minimum number of days of pedometer monitoring needed to estimate average weekly step counts was investigated. Seven days of pedometer data were collected from 154 ambulatory men and women (n = 88 and 66, respectively) with intellectual disability.…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-05
... submitted to the Office of Management and Budget (OMB) for review and approval. Proposed Collection: The... Investigators, 1; Trainees, 1; Average burden hours per response: 30 minutes; and Estimated total annual burden hours requested: 250 hours. The total annualized cost to respondents (calculated as the number of...
78 FR 53010 - Agency Information Collection (Statement of Disappearance) Activity Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
... Control No. 2900- 0036'' in any correspondence. FOR FURTHER INFORMATION CONTACT: Crystal Rennie... 20420, (202) 632-7492 or email crystal[email protected] . Please refer to ``OMB Control No. 2900-0036... Average Burden per Respondent: 2 hours 45 minutes. Frequency of Response: One-time. Estimated Number of...
The Field Production of Water for Injection
1985-12-01
L/day Bedridden Patient 0.75 L/day Average Diseased Patient 0.50 L/day e (There is no feasible methodology to forecast the number of procedures per... Bedridden Patient 0.75 All Diseased Patients 0.50 An estimate of the liters/day needed may be calculated based on a forecasted patient stream, including
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... comments, whether submitted electronically or in paper, will be made available for public viewing at www...) and entrainment (where aquatic organisms, eggs, and larvae are taken into the cooling system, passed... Territories. Frequency of response: Annual, every 5 years. Estimated total average number of responses for...
Impaired Driving. Prevention Resource Guide.
ERIC Educational Resources Information Center
Lane, Amy
This booklet focuses on impaired driving. The first section presents 21 facts on impaired driving. These include the number of people who lost their lives in alcohol-related crashes; the leading cause of death for young people; the average amount of alcohol consumed by people arrested for driving under the influence; the estimation that a tax…
Feedback Drug Delivery Vehicles
2012-06-28
The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...policy or decision, unless so designated by other documentation. 12. DISTRIBUTION AVAILIBILITY STATEMENT Approved for Public Release; Distribution... publications (other than abstracts): Received Paper TOTAL: Number of Non Peer-Reviewed Conference Proceeding publications (other than abstracts): Peer
76 FR 28246 - Information Collection Requests Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-16
... collection request to the Office of Management and Budget (OMB) for approval of an existing collection in use... characteristics and skills to serve as a Peace Corps Response Volunteer. OMB Control Number: 0420-pending. Title... of reference forms received: 1,000. e. Frequency of response: One time. f. Estimated average time to...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-02
... driving practices when these vehicles are operated. Estimated Annual Burden: 300 hours. Number of... information because of model changes. The light truck manufacturers gather only pre-existing data for the... average of $35.00 per hour for professional and clerical staff to gather data, distribute and print...
Validation of a Formula for Assigning Continuing Education Credit to Printed Home Study Courses
Hanson, Alan L.
2007-01-01
Objectives To reevaluate and validate the use of a formula for calculating the amount of continuing education credit to be awarded for printed home study courses. Methods Ten home study courses were selected for inclusion in a study to validate the formula, which is based on the number of words, number of final examination questions, and estimated difficulty level of the course. The amount of estimated credit calculated using the a priori formula was compared to the average amount of time required to complete each article based on pharmacists' self-reporting. Results A strong positive relationship between the amount of time required to complete the home study courses based on the a priori calculation and the times reported by pharmacists completing the 10 courses was found (p < 0.001). The correlation accounted for 86.2% of the total variability in the average pharmacist reported completion times (p < 0.001). Conclusions The formula offers an efficient and accurate means of determining the amount of continuing education credit that should be assigned to printed home study courses. PMID:19503705
Movie denoising by average of warped lines.
Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro
2007-09-01
Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. Close similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs, which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoise image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise with satisfactory results.
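The statistical core of the method can be sketched in a few lines. The paper's actual contribution, gathering similar samples by warping lines across frames, is not reproduced here; the list of samples below simply stands in for that step.

```python
# Sketch of the core idea: the denoised value of a pixel is the average
# of similar samples gathered along warped lines in neighboring frames.
# Averaging n i.i.d. noisy samples reduces the noise variance to
# sigma^2 / n, which is why the average is a denoised estimate.

def denoise_pixel(samples):
    """Average of similar samples collected for one pixel."""
    return sum(samples) / len(samples)

noisy = [128 + e for e in (-6, 4, -2, 5, -1)]  # true value 128 plus noise
print(denoise_pixel(noisy))  # 128.0
```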
Gavino, V C; Milo, G E; Cornwell, D G
1982-03-01
Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells, Initial studies with the inverted microscope showed that number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469 Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M
2018-01-01
Grain yield and ear and kernel attributes can help in understanding the performance of maize plants under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. The data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.
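The kernel-weight step described above can be sketched as follows. Only one face of an ear is visible in a photo, so the total kernel number must be scaled up from the visible count before multiplying by a per-kernel weight. The visibility factor and mean kernel weight below are illustrative assumptions, not the paper's calibration.

```python
# Sketch (illustrative, not the paper's calibration) of estimating
# kernel weight from an ear photo: scale the visible kernel count to a
# total, then multiply by an assumed mean single-kernel weight.

def estimate_kernel_weight(visible_kernels, visibility_factor=2.0,
                           mean_kernel_weight_g=0.30):
    total_kernels = visible_kernels * visibility_factor
    return total_kernels, total_kernels * mean_kernel_weight_g

total, weight_g = estimate_kernel_weight(250)  # ~500 kernels, ~150 g
```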
Embryotoxic thresholds of mercury: estimates from individual mallard eggs
Heinz, G.H.; Hoffman, D.J.
2003-01-01
Eighty pairs of mallards (Anas platyrhynchos) were fed an uncontaminated diet until each female had laid 15 eggs. After each female had laid her 15th egg, the pair was randomly assigned to a control diet or diets containing 5, 10, or 20 µg/g mercury as methylmercury until she had laid a second set of 15 eggs. There were 20 pairs in each group. After the second set of 15 eggs, the pair was returned to an uncontaminated diet, and the female was permitted to lay another 30 eggs. For those pairs fed the mercury diets, the even-numbered eggs were incubated and the odd-numbered eggs were saved for possible mercury analysis. Mercury in the even-numbered eggs was estimated as the average of what was in the neighboring odd-numbered eggs. Neurological signs of methylmercury poisoning were observed in ducklings that hatched from eggs containing as little as 2.3 µg/g estimated mercury on a wet-weight basis, and deformities were seen in embryos from eggs containing about 1 µg/g estimated mercury. Although embryo mortality was seen in eggs estimated to contain as little as 0.74 µg/g mercury, there were considerable differences in the sensitivity of mallard embryos, especially from different parents, with some embryos surviving as much as 30 or more µg/g mercury in the egg.
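The estimation rule described above is a simple neighbor average in the laying sequence, and can be sketched as:

```python
# Sketch of the rule: mercury in each incubated even-numbered egg is
# estimated as the average of the measured values in the neighboring
# odd-numbered eggs. Concentrations (ug/g wet weight) are illustrative.

def estimate_even_eggs(odd_eggs):
    """odd_eggs: measured mercury for eggs 1, 3, 5, ... in laying order.
    Returns estimates for the even-numbered eggs laid between them."""
    return [(a + b) / 2 for a, b in zip(odd_eggs, odd_eggs[1:])]

measured = [0.6, 1.0, 2.2, 3.0]                 # eggs 1, 3, 5, 7
estimates = estimate_even_eggs(measured)        # ~[0.8, 1.6, 2.6] for eggs 2, 4, 6
```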
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time-dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time-dependent approach and depicts more factors affecting ORU failure and repair than steady-state average calculations. RENEW gives both average and time-dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
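The Monte Carlo idea behind this kind of demand estimation can be sketched briefly. This is not RENEW itself: failure times are drawn here from a simple exponential model with an assumed MTBF, and the MTBF, mission length, and run count are illustrative.

```python
# Sketch of Monte Carlo maintenance-demand estimation: draw exponential
# times between failures and count failures over a mission period, then
# average the counts across many simulated missions.
import random

def simulate_failures(mtbf, mission_hours, runs, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        t, failures = 0.0, 0
        while True:
            t += rng.expovariate(1.0 / mtbf)  # time to next failure
            if t > mission_hours:
                break
            failures += 1
        totals.append(failures)
    return sum(totals) / runs  # average demand over the mission

avg = simulate_failures(mtbf=5000.0, mission_hours=50000.0, runs=2000)
# Expected around 10 failures per mission (50000 / 5000)
```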
Cost of individual peer counselling for the promotion of exclusive breastfeeding in Uganda
2011-01-01
Background Exclusive breastfeeding (EBF) for 6 months is the recommended form of infant feeding. Support of mothers through individual peer counselling has been proved to be effective in increasing exclusive breastfeeding prevalence. We present a costing study of an individual peer support intervention in Uganda, whose objective was to raise exclusive breastfeeding rates at 3 months of age. Methods We costed the peer support intervention, which was offered to 406 breastfeeding mothers in Uganda. The average number of counselling visits was about 6 per woman. Annual financial and economic costs were collected in 2005-2008. Estimates were made of total project costs, average costs per mother counselled and average costs per peer counselling visit. Alternative intervention packages were explored in the sensitivity analysis. We also estimated the resources required to fund the scale up to district level, of a breastfeeding intervention programme within a public health sector model. Results Annual project costs were estimated to be US$56,308. The largest cost component was peer supporter supervision, which accounted for over 50% of total project costs. The cost per mother counselled was US$139 and the cost per visit was US$26. The cost per week of EBF was estimated to be US$15 at 12 weeks post partum. We estimated that implementing an alternative package modelled on routine public health sector programmes can potentially reduce costs by over 60%. Based on the calculated average costs and annual births, scaling up modelled costs to district level would cost the public sector an additional US$1,813,000. Conclusion Exclusive breastfeeding promotion in sub-Saharan Africa is feasible and can be implemented at a sustainable cost. The results of this study can be incorporated in cost effectiveness analyses of exclusive breastfeeding promotion programmes in sub-Saharan Africa. PMID:21714877
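The per-mother unit cost reported above follows directly from the project totals, and the arithmetic can be checked in a line or two (visit-level costs depend on the exact visit count, which the abstract gives only as "about 6" per woman):

```python
# Quick check of the reported unit cost: the annual project cost of
# US$56,308 spread over the 406 mothers counselled reproduces the
# US$139 cost-per-mother figure in the abstract.

total_cost = 56_308
mothers = 406
cost_per_mother = total_cost / mothers  # ~138.7, reported as US$139
```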
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
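The stepwise expansion described above can be sketched as a greedy loop: starting from a target cell, neighboring cells are added one at a time, at each step choosing the candidate that minimizes the variance of the pooled samples, until a minimum sample count is reached. The grid, adjacency, and sample values below are illustrative, not DPR data.

```python
# Sketch of variance-minimizing cell-by-cell expansion for building a
# variable-averaging grid cell (illustrative data, not the DPR LUT).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def expand_region(cells, start, neighbors, min_samples):
    """cells: {cell_id: [samples]}; neighbors: {cell_id: [adjacent ids]}."""
    region, pooled = {start}, list(cells[start])
    while len(pooled) < min_samples:
        frontier = {n for c in region for n in neighbors[c]} - region
        if not frontier:
            break
        best = min(frontier, key=lambda n: variance(pooled + cells[n]))
        region.add(best)
        pooled += cells[best]
    return region, pooled

cells = {"a": [10.0, 10.2], "b": [10.1, 9.9], "c": [14.0, 15.0]}
neighbors = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
region, pooled = expand_region(cells, "a", neighbors, min_samples=4)
# Expands into "b" (similar values) rather than "c" (higher variance)
```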
Estimating the cost of informal caregiving for elderly patients with cancer.
Hayman, J A; Langa, K M; Kabeto, M U; Katz, S J; DeMonner, S M; Chernew, M E; Slavin, M B; Fendrick, A M
2001-07-01
As the United States population ages, the increasing prevalence of cancer is likely to result in higher direct medical and nonmedical costs. Although estimates of the associated direct medical costs exist, very little information is available regarding the prevalence, time, and cost associated with informal caregiving for elderly cancer patients. To estimate these costs, we used data from the first wave (1993) of the Asset and Health Dynamics (AHEAD) Study, a nationally representative longitudinal survey of people aged 70 or older. Using a multivariable, two-part regression model to control for differences in health and functional status, social support, and sociodemographics, we estimated the probability of receiving informal care, the average weekly number of caregiving hours, and the average annual caregiving cost per case (assuming an average hourly wage of $8.17) for subjects who reported no history of cancer (NC), having a diagnosis of cancer but not receiving treatment for their cancer in the last year (CNT), and having a diagnosis of cancer and receiving treatment in the last year (CT). Of the 7,443 subjects surveyed, 6,422 (86%) reported NC, 718 (10%) reported CNT, and 303 (4%) reported CT. Whereas the adjusted probability of informal caregiving for those respondents reporting NC and CNT was 26%, it was 34% for those reporting CT (P <.05). Those subjects reporting CT received an average of 10.0 hours of informal caregiving per week, as compared with 6.9 and 6.8 hours for those who reported NC and CNT, respectively (P <.05). Accordingly, cancer treatment was associated with an incremental increase of 3.1 hours per week, which translates into an additional average yearly cost of $1,200 per patient and just over $1 billion nationally. Informal caregiving costs are substantial and should be considered when estimating the cost of cancer treatment in the elderly.
NASA Astrophysics Data System (ADS)
Simon, M.; Dolinar, S.
2005-08-01
A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation on the average allows for a number of subdivisions (observables), 2L, per symbol beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L=R/sqrt(2) and thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.
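One way to realize a non-integer average value of L by time multiplexing, as described above, is to alternate between the two nearest integer subdivision counts with duty cycles chosen so that the average equals the optimum L = R/sqrt(2). This is a hedged sketch of that idea, not the paper's exact scheme.

```python
# Sketch of time-multiplexing to achieve a non-integer average L:
# alternate between floor(L) and ceil(L) with fractions whose weighted
# average equals the optimum L = R / sqrt(2) for true SNR R.
import math

def multiplex_fractions(R):
    L = R / math.sqrt(2)
    lo, hi = math.floor(L), math.ceil(L)
    frac_hi = L - lo if hi != lo else 0.0  # fraction of symbols using hi
    return lo, hi, 1 - frac_hi, frac_hi

lo, hi, f_lo, f_hi = multiplex_fractions(R=5.0)
# L ~ 3.536: use L=3 for ~46% of symbols and L=4 for ~54% of symbols
avg = lo * f_lo + hi * f_hi  # recovers R / sqrt(2)
```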
Experimental criteria for the determination of fractal parameters of premixed turbulent flames
NASA Astrophysics Data System (ADS)
Shepherd, I. G.; Cheng, Robert K.; Talbot, L.
1992-10-01
The influence of spatial resolution, digitization noise, the number of records used for averaging, and the method of analysis on the determination of the fractal parameters of a high Damköhler number, methane/air, premixed, turbulent stagnation-point flame is investigated in this paper. The flow exit velocity was 5 m/s and the turbulent Reynolds number was 70, based on an integral scale of 3 mm and a turbulence intensity of 7%. The light source was a copper vapor laser which delivered 20 ns, 5 mJ pulses at 4 kHz, and the tomographic cross-sections of the flame were recorded by a high-speed movie camera. The spatial resolution of the images is 155 × 121 μm/pixel with a field of view of 50 × 65 mm. The stepping caliper technique for obtaining the fractal parameters is found to give the clearest indication of the cutoffs and the effects of noise. It is necessary to ensemble-average the results from more than 25 statistically independent images to sufficiently reduce the scatter in the fractal parameters. The effects of reduced spatial resolution on fractal plots are estimated by artificial degradation of the resolution of the digitized flame boundaries. The effect of pixel resolution, an apparent increase in flame length below the inner-scale rolloff, appears in the fractal plots when the measurement scale is less than approximately twice the pixel resolution. Although a clearer determination of fractal parameters is obtained by local averaging of the flame boundaries, which removes digitization noise, at low spatial resolution this technique can reduce the fractal dimension. The degree of fractal isotropy of the flame surface can have a significant effect on the estimation of the flame surface area, and hence the burning rate, from two-dimensional images. To estimate this isotropy, a determination of the outer cutoff is required, and three-dimensional measurements are probably also necessary.
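A fractal-dimension estimate of the kind discussed above can be illustrated with box counting, a close relative of the stepping-caliper analysis used in the paper (not the paper's exact technique). For a smooth, non-fractal boundary the estimated dimension should be close to 1, which the example checks on a straight digitized line.

```python
# Sketch of a box-counting fractal-dimension estimate: count the boxes
# of size s that cover the point set, then fit the slope of log N(s)
# against log(1/s). A straight line should give a dimension near 1.
import math

def box_count_dimension(points, sizes):
    logs = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    slope = sum((x - mx) * (y - my) for x, y in logs)
    slope /= sum((x - mx) ** 2 for x, _ in logs)
    return slope

line = [(i * 0.1, i * 0.1) for i in range(1000)]  # straight line, D = 1
d = box_count_dimension(line, sizes=[1, 2, 4, 8])
```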
Park-Lee, Eunice Y; Decker, Frederic H
2010-11-09
This report presents national estimates of the organizational characteristics of home health and hospice care agencies in 2007. Comparisons of organizational characteristics and provision of selected services are made by agency type. A comparison of selected characteristics between 1996 and 2007 is also provided to highlight changes that have occurred leading to the current composition of the home health and hospice care sector. Estimates are based on data collected on agencies from the 1996, 2000, and 2007 National Home and Hospice Care Survey, conducted by the Centers for Disease Control and Prevention's National Center for Health Statistics. Estimates are derived from data collected during interviews with administrators and staff designated by the administrators. In 2007, there were 14,500 home health and hospice care agencies in the United States, an increase from 11,400 in 2000. Three-quarters of these agencies provided home health care only, 15% provided hospice care only, and 10% provided both home health and hospice care (mixed). The percentage of proprietary home health care only and hospice care only agencies increased during 1996-2007, whereas the percentage of proprietary mixed agencies remained relatively stable. The average number of home health care patients that home health care only and mixed agencies served decreased, while the average number of hospice care patients that hospice care only agencies served increased across years. Among mixed agencies, no significant changes were observed in the average number of hospice care patients being served. The percentage of home health care only agencies offering certain therapeutic and nonmedical services declined over the years. There was an increase in the proportion of hospice care only agencies providing many core and noncore hospice care services during 1996-2007. Also during this time, the proportion of mixed agencies providing selected nonmedical services decreased.
NASA Astrophysics Data System (ADS)
Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.
2018-01-01
Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets per grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains to types (A-E) based on the PDF orientation pattern, and then calculating a mean shock pressure for each sample.
An algorithm to estimate PBL heights from wind profiler data
NASA Astrophysics Data System (ADS)
Molod, A.; Salmun, H.
2016-12-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States from the period 1992-2012. The long period of record allows an analysis of climatological mean PBL heights as well as some estimates of year-to-year variability. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number-based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear-sky MERRA estimates show that the wind profiler (WP) and the Richardson-number-based PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum WP PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, but the WP estimates show a smaller clear-cloudy condition difference than either of the other two PBL height estimates. The algorithm presented here is shown to provide a reliable summer, fall, and spring climatology of daytime hourly PBL heights throughout the central United States. The reliability of the algorithm has prompted its use to obtain hourly PBL heights from other archived wind profiler data located throughout the world.
The basic reproduction number (R0) of measles: a systematic review.
Guerra, Fiona M; Bolotin, Shelly; Lim, Gillian; Heffernan, Jane; Deeks, Shelley L; Li, Ye; Crowcroft, Natasha S
2017-12-01
The basic reproduction number, R nought (R0), is defined as the average number of secondary cases of an infectious disease arising from a typical case in a totally susceptible population, and can be estimated in populations if pre-existing immunity can be accounted for in the calculation. R0 determines the herd immunity threshold and therefore the immunisation coverage required to achieve elimination of an infectious disease. As R0 increases, higher immunisation coverage is required to achieve herd immunity. In July, 2010, a panel of experts convened by WHO concluded that measles can and should be eradicated. Despite the existence of an effective vaccine, regions have had varying success in measles control, in part because measles is one of the most contagious infections. For measles, R0 is often cited to be 12-18, which means that each person with measles would, on average, infect 12-18 other people in a totally susceptible population. We did a systematic review to find studies reporting rigorous estimates and determinants of measles R0. Studies were included if they were a primary source of R0, addressed pre-existing immunity, and accounted for pre-existing immunity in their calculation of R0. A search of key databases was done in January, 2015, and repeated in November, 2016, and yielded 10,883 unique citations. After screening for relevancy and quality, 18 studies met inclusion criteria, providing 58 R0 estimates. We calculated median measles R0 values stratified by key covariates. We found that R0 estimates vary more than the often cited range of 12-18. Our results highlight the importance of countries calculating R0 using locally derived data or, if this is not possible, using parameter estimates from similar settings. Additional data and agreed review methods are needed to strengthen the evidence base for measles elimination modelling. Copyright © 2017 Elsevier Ltd. All rights reserved.
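The link between R0 and the herd immunity threshold invoked above is HIT = 1 - 1/R0. A quick sketch (the vaccine-effectiveness adjustment is a standard extension we add for illustration, not part of the review):

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to prevent
    sustained transmission: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

def required_coverage(r0, vaccine_effectiveness=1.0):
    """Immunisation coverage needed when the vaccine is imperfect:
    the threshold divided by per-dose effectiveness."""
    return herd_immunity_threshold(r0) / vaccine_effectiveness

# For the often-cited measles range R0 = 12-18:
low = herd_immunity_threshold(12)   # about 0.917
high = herd_immunity_threshold(18)  # about 0.944
```

This is why the review's finding that R0 varies more widely than 12-18 matters: the coverage target moves with the locally estimated R0.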
Targeted Approach to Overcoming Treatment Resistance in Advanced Prostate Cancer
2014-09-01
estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data ...identify analogs with higher efficacy and less toxicity. Initial results identified a small number of compounds, some of them entirely novel, that should... small number of compounds, some of them entirely novel, that should be followed up on in subsequent
2014-05-01
Documentation Page Form ApprovedOMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including...SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION/AVAILABILITY STATEMENT Approved for public release; distribution unlimited 13. SUPPLEMENTARY...private-sector entity or public utility. When no lease proposals were submitted, the Air Force pursued the option to close the plant, finding that the
2010-01-01
REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 The public reporting burden for this collection of information is estimated to average...ACRONYM(S) AOARD 11. SPONSOR/MONITOR’S REPORT NUMBER(S) CSP-091013 12. DISTRIBUTION/AVAILABILITY STATEMENT Approved for public release; U.S...Also, significant relevant R&D continued to be conducted by Texas A&M University (TAMU), 3 the Battelle Memorial Institute, through PNNL , and
Contribution of Security Forces Personnel to Deter Migration and Improve Stability in West Africa
2017-06-09
REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per...valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS. 1. REPORT DATE (DD-MM-YYYY) 9-06-2017 2. REPORT TYPE Master’s Thesis...NAME OF RESPONSIBLE PERSON a. REPORT b. ABSTRACT c. THIS PAGE 19b. PHONE NUMBER (include area code) (U) (U) (U) (U) 106 Standard Form 298 (Rev
2017-06-05
DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response...control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS. 1. REPORT DATE (DD-MM-YYYY) 2. REPORT TYPE 3. DATES COVERED (From - To) 4. TITLE AND...THIS PAGE 19b. TELEPHONE NUMBER (include area code) Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std. Z39.18 Cost & Performance Report June 2017
2006-11-01
Hampton, VA 23666 November 2006 Approved for public release: distribution is unlimited. 20070907323 ABERDEEN PROVING GROUND, MD 21010-5424 DISCLAIMER...REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour...SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION I AVAILABILITY STATEMENT Approved for public release; distribution is unlimited. 13. SUPPLEMENTARY
Economic Cost and Burden of Dengue in the Philippines
Edillo, Frances E.; Halasa, Yara A.; Largo, Francisco M.; Erasmo, Jonathan Neil V.; Amoin, Naomi B.; Alera, Maria Theresa P.; Yoon, In-Kyu; Alcantara, Arturo C.; Shepard, Donald S.
2015-01-01
Dengue, the world's most important mosquito-borne viral disease, is endemic in the Philippines. During 2008–2012, the country's Department of Health reported an annual average of 117,065 dengue cases, placing the country fourth in dengue burden in southeast Asia. This study estimates the country's annual number of dengue episodes and their economic cost. Our comparison of cases between active and passive surveillance in Punta Princesa, Cebu City yielded an expansion factor of 7.2, close to the predicted value (7.0) based on the country's health system. We estimated an annual average of 842,867 clinically diagnosed dengue cases, with direct medical costs (in 2012 US dollars) of $345 million ($3.26 per capita). This is 54% higher than an earlier estimate without Philippines-specific costs. Ambulatory settings treated 35% of cases (representing 10% of direct costs), whereas inpatient hospitals served 65% of cases (representing 90% of direct costs). The economic burden of dengue in the Philippines is substantial. PMID:25510723
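The expansion-factor arithmetic behind these estimates can be sketched directly from the figures in the abstract (the implied-population line is our own back-calculation):

```python
# Scale passively reported cases up to estimated clinically diagnosed
# episodes using the active-vs-passive surveillance expansion factor.
reported_annual_avg = 117_065   # DOH-reported cases, 2008-2012 average
expansion_factor = 7.2          # active/passive ratio, Punta Princesa

estimated_episodes = reported_annual_avg * expansion_factor
# 117,065 * 7.2 = 842,868, matching the ~842,867 quoted in the abstract

direct_costs_usd = 345e6        # total direct medical costs, 2012 USD
per_capita = 3.26               # quoted cost per capita
implied_population = direct_costs_usd / per_capita  # about 106 million
```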
Eigenvector method for umbrella sampling enables error analysis
Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.
2016-01-01
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
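The eigenproblem at the heart of the method can be sketched on a toy problem (the two-window overlap matrix below is invented for illustration; in practice row i holds the average of each bias function's relative weight over the samples drawn in window i). The window weights are the stationary left eigenvector of the resulting row-stochastic matrix:

```python
import numpy as np

def window_weights(F):
    """Solve z F = z with sum(z) = 1: z is the stationary distribution
    of the row-stochastic window-overlap matrix F."""
    vals, vecs = np.linalg.eig(F.T)
    z = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvalue-1 vector
    return z / z.sum()

# Toy overlap matrix for two biased windows
F = np.array([[0.7, 0.3],
              [0.4, 0.6]])
z = window_weights(F)   # stationary weights used to combine window data
```

Because the weights come from an eigenvector, perturbation theory for eigenproblems gives the per-window error contributions that the paper's central limit theorem formalizes.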
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
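The "standard doubly robust ACE estimator" into which the selected variables feed is the augmented inverse-probability-weighted (AIPW) estimator. A minimal sketch on simulated data (the plain Newton-fit logistic and least-squares models are our stand-ins for illustration, not GLiDeR itself):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson fit of a logistic regression with intercept;
    returns a function mapping covariates to predicted probabilities."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        grad = Xb.T @ (y - p)
        hess = Xb.T @ (Xb * (p * (1 - p))[:, None])
        w += np.linalg.solve(hess, grad)
    return lambda Z: 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(len(Z)), Z]) @ w))

def fit_linear(X, y):
    """Ordinary least squares with intercept; returns a predictor."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Z: np.column_stack([np.ones(len(Z)), Z]) @ beta

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                      # covariates
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment
Y = 2.0 * A + X[:, 0] + rng.normal(size=n)       # outcome; true ACE = 2

ps = fit_logistic(X, A)(X)                 # propensity scores
m1 = fit_linear(X[A == 1], Y[A == 1])(X)   # outcome model under treatment
m0 = fit_linear(X[A == 0], Y[A == 0])(X)   # outcome model under control

# Augmented IPW (doubly robust) estimate of the average causal effect:
# consistent if either the propensity or the outcome model is correct.
ace = np.mean(A * (Y - m1) / ps - (1 - A) * (Y - m0) / (1 - ps) + m1 - m0)
```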
Golino, Hudson F.; Epskamp, Sacha
2017-01-01
The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), the Kaiser-Guttman eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimate the number of dimensions is introduced and compared via simulation to the traditional techniques outlined above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, BIC, EBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study. PMID:28594839
Comparison and assessment of aerial and ground estimates of waterbird colonies
Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.
2008-01-01
Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by one observer resulted in underestimates of the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft.
We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.
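The marked-subsample correction works like a ratio (capture-type) estimator: the fraction of pre-marked nests the counting observer refinds estimates detectability, and the raw count is scaled by it. A toy sketch with invented numbers:

```python
import math

# Marked-subsample ground count: estimate the detection rate from how
# many pre-marked nests are refound, then correct the raw tally.
marked = 50            # nests marked in the subsample beforehand
marked_refound = 40    # of those, refound during the ground count
raw_count = 400        # total nests tallied by the observer

detection_rate = marked_refound / marked       # 0.8
corrected_total = raw_count / detection_rate   # 500 estimated nests

# Binomial standard error on the detection rate, the ingredient that
# gives the method its precision estimate
se_rate = math.sqrt(detection_rate * (1 - detection_rate) / marked)
```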
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, making the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
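A minimal sketch of the two linear estimates being compared, for a toy signal with known covariances (the Gaussian correlation model and all numbers are our assumptions): the composite average weights observations equally, while the optimal (Gauss-Markov) estimate uses w = C^{-1} c, with C the observation covariance and c the covariance between each observation and the time-averaged quantity.

```python
import numpy as np

# Irregular observation times within a 30-day averaging period
t_obs = np.array([1.0, 2.0, 11.0, 25.0])
sigma2, noise2, tau = 1.0, 0.5, 10.0   # signal var, noise var, corr scale (days)

def signal_cov(t1, t2):
    return sigma2 * np.exp(-((t1 - t2) / tau) ** 2)

# C: covariance among observations (signal plus uncorrelated noise)
C = signal_cov(t_obs[:, None], t_obs[None, :]) + noise2 * np.eye(len(t_obs))

# c: covariance of each observation with the true 30-day average,
# approximated by averaging the signal covariance over the period
grid = np.linspace(0.0, 30.0, 301)
c = np.array([signal_cov(t, grid).mean() for t in t_obs])

w_opt = np.linalg.solve(C, c)                  # optimal weights
w_comp = np.full(len(t_obs), 1 / len(t_obs))   # composite-average weights

# MSE of a linear estimate w.y of the average: var(avg) - 2 w.c + w'Cw
var_avg = signal_cov(grid[:, None], grid[None, :]).mean()
mse = lambda w: var_avg - 2 * w @ c + w @ C @ w
```

By construction mse(w_opt) <= mse(w_comp); the paper's suboptimal estimates use approximate rather than exact covariances in the same formula.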
Gauging the Nearness and Size of Cycle Maximum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2003-01-01
A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number and the maximum value of the 2-mo moving average of monthly mean sunspot number to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that conventional maximum amplitude for cycle 23 would be approx. 124.5, occurring near July 2000 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23: 120.8 in April 2000.
Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data
NASA Technical Reports Server (NTRS)
Molod, Andrea M.; Salmun, H.; Dempsey, M
2015-01-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.
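The Richardson-number-based heights used for comparison are conventionally taken as the lowest level where the bulk Richardson number, computed from the surface upward, first crosses a critical value. A sketch under common assumptions (the idealized profile and the 0.25 threshold are illustrative, not the paper's exact procedure):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def bulk_richardson_pbl_height(z, theta_v, u, v, ri_crit=0.25):
    """Return the lowest height where the bulk Richardson number,
    surface to level z, first exceeds ri_crit."""
    du2 = (u - u[0]) ** 2 + (v - v[0]) ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        ri = G * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * du2)
    for k in range(1, len(z)):
        if np.isfinite(ri[k]) and ri[k] > ri_crit:
            return z[k]
    return z[-1]

# Idealized daytime profile: well mixed below ~1200 m, stable above
z = np.arange(0.0, 3000.0, 100.0)              # heights, m
theta_v = np.where(z < 1200.0, 300.0, 300.0 + 0.005 * (z - 1200.0))
u = 2.0 + 0.003 * z                            # weak wind shear, m/s
v = np.zeros_like(z)
h = bulk_richardson_pbl_height(z, theta_v, u, v)   # first stable level
```

The wind profiler algorithm itself instead locates the PBL top from the signal backscatter maximum; the Richardson-number heights above serve only as the independent check.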
Empirical Estimates of 0Day Vulnerabilities in Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miles A. McQueen; Wayne F. Boyer; Sean M. McBride
2009-01-01
We define a 0Day vulnerability to be any vulnerability, in deployed software, which has been discovered by at least one person but has not yet been publicly announced or patched. These 0Day vulnerabilities are of particular interest when assessing the risk to well managed control systems which have already effectively mitigated the publicly known vulnerabilities. In these well managed systems the risk contribution from 0Days will have proportionally increased. To aid understanding of how great a risk 0Days may pose to control systems, an estimate of how many are in existence is needed. Consequently, using the 0Day definition given above, we developed and applied a method for estimating how many 0Day vulnerabilities are in existence on any given day. The estimate is made by: empirically characterizing the distribution of the lifespans, measured in days, of 0Day vulnerabilities; determining the number of vulnerabilities publicly announced each day; and applying a novel method for estimating the number of 0Day vulnerabilities in existence on any given day using the number of vulnerabilities publicly announced each day and the previously derived distribution of 0Day lifespans. The method was first applied to a general set of software applications by analyzing the 0Day lifespans of 491 software vulnerabilities and using the daily rate of vulnerability announcements in the National Vulnerability Database. This led to a conservative estimate that in the worst year there were, on average, 2500 0Day software related vulnerabilities in existence on any given day. Using a smaller but intriguing set of 15 0Day software vulnerability lifespans representing the actual time from discovery to public disclosure, we then made a more aggressive estimate. In this case, we estimated that in the worst year there were, on average, 4500 0Day software vulnerabilities in existence on any given day.
We then proceeded to identify the subset of software applications likely to be used in some control systems, analyzed the associated subset of vulnerabilities, and characterized their lifespans. Using the previously developed method of analysis, we very conservatively estimated 250 control system related 0Day vulnerabilities in existence on any given day. While reasonable, this first-order estimate for control systems is probably far more conservative than those made for general software systems, since the estimate did not include vulnerabilities unique to control system specific components. These control system specific vulnerabilities could not be included in the estimate for a variety of reasons, the most problematic being that the public announcement of unique control system vulnerabilities is very sparse. Consequently, with the intent to improve the above 0Day estimate for control systems, we first identified the additional vulnerability estimation constraints unique to control systems and then investigated new mechanisms which may be useful for estimating the number of unique 0Day software vulnerabilities found in control system components. We proceeded to identify a number of new mechanisms and approaches for estimating and incorporating control system specific vulnerabilities into an improved 0Day estimation method. These new mechanisms and approaches appear promising and will be more rigorously evaluated during the course of the next year.
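The estimation step described above is essentially Little's law, N = λW: the average number of 0Days in existence equals the daily announcement rate times the mean discovery-to-disclosure lifespan in days. A toy sketch (the rate and lifespan below are invented for illustration, not the report's data):

```python
def zero_days_in_existence(announcements_per_day, mean_lifespan_days):
    """Little's law: average number in the system equals arrival rate
    times mean time in the system (here, the 0Day lifespan)."""
    return announcements_per_day * mean_lifespan_days

# Illustrative numbers: ~10 public disclosures per day and a mean
# lifespan of 250 days imply ~2500 concurrent 0Days, the same order of
# magnitude as the report's conservative worst-year estimate.
n = zero_days_in_existence(10, 250)
```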
Wong, Karen; Delaney, Geoff P; Barton, Michael B
2016-04-01
The recently updated optimal radiotherapy utilisation model estimated that 48.3% of all cancer patients should receive external beam radiotherapy at least once during their disease course. Adapting this model, we constructed an evidence-based model to estimate the optimal number of fractions for notifiable cancers in Australia to determine equipment and workload implications. The optimal number of fractions was calculated based on the frequency of specific clinical conditions where radiotherapy is indicated and the evidence-based recommended number of fractions for each condition. Sensitivity analysis was performed to assess the impact of variables on the model. Of the 27 cancer sites, the optimal number of fractions for the first course of radiotherapy ranged from 0 to 23.3 per cancer patient, and 1.5 to 29.1 per treatment course. Brain, prostate and head and neck cancers had the highest average number of fractions per course. Overall, the optimal number of fractions was 9.4 per cancer patient (range 8.7-10.0) and 19.4 per course (range 18.0-20.7). These results provide valuable data for radiotherapy services planning and comparison with actual practice. The model can be easily adapted by inserting population-specific epidemiological data thus making it applicable to other jurisdictions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
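The model's core computation is an expectation over the indication tree: optimal fractions per patient is the sum over clinical conditions of the proportion of patients with that condition times the evidence-based fractions for it, and fractions per course divides out the proportion actually treated. A toy sketch with invented proportions (not the model's data):

```python
# Each entry: (proportion of all cancer patients with this indication,
# evidence-based recommended number of fractions). Values illustrative.
indications = [
    (0.30, 20),   # e.g. a radical course
    (0.10, 5),    # e.g. a short palliative course
    (0.60, 0),    # radiotherapy not indicated
]

optimal_fractions_per_patient = sum(p * f for p, f in indications)   # 6.5
proportion_treated = sum(p for p, f in indications if f > 0)         # 0.4
fractions_per_course = optimal_fractions_per_patient / proportion_treated
```

Swapping in population-specific epidemiological proportions is exactly how the authors suggest adapting the model to other jurisdictions.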
Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data
NASA Technical Reports Server (NTRS)
Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.
1999-01-01
Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
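As a rough illustration of the MTSA step, the sketch below averages five Slepian (DPSS) eigenspectra of a synthetic noisy sinusoid standing in for a helioseismic time series; the bandwidth parameter NW = 3 is an assumption, not the GONG pipeline's setting:

```python
import numpy as np
from scipy.signal.windows import dpss

# Synthetic stand-in for a helioseismic time series: one "mode" in noise.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 1.0, n)

# Five Slepian (DPSS) tapers; NW = 3 is an assumed bandwidth parameter.
tapers = dpss(n, NW=3, Kmax=5)           # shape (5, n), unit-energy tapers

# Multitaper estimate: average the five eigenspectra. This is the
# variance-reduction step; wavelet denoising of log(mt_spectrum) would follow.
eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
mt_spectrum = eigenspectra.mean(axis=0)

freqs = np.fft.rfftfreq(n)
peak = freqs[np.argmax(mt_spectrum)]
print(peak)  # ≈ 0.1 cycles/sample
```

Averaging the eigenspectra trades a slight broadening of narrow peaks for reduced variance, which is exactly the trade-off the abstract reports for narrow modes at low n.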
Honda, Michitaka
2014-04-01
Several improvements were implemented in the edge method for measuring the presampled modulation transfer function (MTF). First, a new technique for estimating the edge angle was developed by applying a principal components analysis algorithm; the estimation error was statistically confirmed to be less than 0.01 even in the presence of quantum noise. Secondly, the geometrical edge slope was approximated by a rational number, making it possible to obtain an oversampled edge spread function (ESF) with equal intervals. Thirdly, the final MTF was estimated as the average of multiple MTFs calculated for local areas; this averaging eliminates the errors caused by the rational-number approximation. Computer-simulated images were used to evaluate the accuracy of our method. The relative error between the estimated MTF and the theoretical MTF at the Nyquist frequency was less than 0.5% when the MTF was expressed as a sinc function. Good agreement was also observed for MTFs representing an indirect detector and a phase-contrast detector. The high accuracy of the MTF estimation was also confirmed even for edge angles of around 10 degrees, which suggests the potential for simplification of the measurement conditions. The proposed method could be incorporated into an automated measurement technique using a software application.
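The edge-angle step can be sketched as a principal components analysis of the edge-pixel coordinates: the first principal component points along the edge. The test data below are a fabricated 10-degree edge, not the paper's simulated images:

```python
import numpy as np

def edge_angle_pca(points):
    """Estimate the edge angle (degrees) from (x, y) edge-pixel coordinates
    via the first principal component of the mean-centered point cloud."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    v = vt[0]                      # direction of largest variance
    if v[0] < 0:                   # resolve the sign ambiguity of the PC
        v = -v
    return np.degrees(np.arctan2(v[1], v[0]))

# Synthetic 10-degree edge (hypothetical test data, not the paper's images).
xs = np.linspace(0.0, 100.0, 200)
ys = np.tan(np.radians(10.0)) * xs
angle = edge_angle_pca(np.column_stack([xs, ys]))
print(round(angle, 3))  # 10.0
```

On noisy data the same estimator remains unbiased to first order, which is consistent with the sub-0.01 error the abstract reports under quantum noise.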
Edwards, Ryan W J; Celia, Michael A; Bandilla, Karl W; Doster, Florian; Kanno, Cynthia M
2015-08-04
Recent studies suggest the possibility of CO2 sequestration in depleted shale gas formations, motivated by large storage capacity estimates in these formations. Questions remain regarding the dynamic response and practicality of injection of large amounts of CO2 into shale gas wells. A two-component (CO2 and CH4) model of gas flow in a shale gas formation including adsorption effects provides the basis to investigate the dynamics of CO2 injection. History-matching of gas production data allows for formation parameter estimation. Application to three shale gas-producing regions shows that CO2 can only be injected at low rates into individual wells and that individual well capacity is relatively small, despite significant capacity variation between shale plays. The estimated total capacity of an average Marcellus Shale well in Pennsylvania is 0.5 million metric tonnes (Mt) of CO2, compared with 0.15 Mt in an average Barnett Shale well. Applying the individual well estimates to the total number of existing and permitted planned wells (as of March, 2015) in each play yields a current estimated capacity of 7200-9600 Mt in the Marcellus Shale in Pennsylvania and 2100-3100 Mt in the Barnett Shale.
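The play-level numbers follow from simple scaling of the per-well estimates; for instance, the reported Marcellus range implies on the order of 14,000-19,000 wells. A sketch of that arithmetic:

```python
# Per-well CO2 storage capacities from the abstract, in megatonnes (Mt).
marcellus_per_well_mt = 0.5
barnett_per_well_mt = 0.15

def play_capacity_mt(n_wells, per_well_mt):
    """Total play capacity as number of wells times per-well capacity."""
    return n_wells * per_well_mt

# Well counts implied by the reported Marcellus range of 7200-9600 Mt:
low_wells = 7200 / marcellus_per_well_mt
high_wells = 9600 / marcellus_per_well_mt
print(low_wells, high_wells)  # 14400.0 19200.0
```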
The randomized benchmarking number is not what you think it is
NASA Astrophysics Data System (ADS)
Proctor, Timothy; Rudinger, Kenneth; Blume-Kohout, Robin; Sarovar, Mohan; Young, Kevin
Randomized benchmarking (RB) is a widely used technique for characterizing a gate set, whereby random sequences of gates are used to probe the average behavior of the gate set. The gates are chosen to ideally compose to the identity, and the rate of decay in the survival probability of an initial state with increasing length sequences is extracted from a set of experiments - this is the `RB number'. For reasonably well-behaved noise and particular gate sets, it has been claimed that the RB number is a reliable estimate of the average gate fidelity (AGF) of each noisy gate to the ideal target unitary, averaged over all gates in the set. Contrary to this widely held view, we show that this is not the case. We show that there are physically relevant situations, in which RB was thought to be provably reliable, where the RB number is many orders of magnitude away from the AGF. These results have important implications for interpreting the RB protocol, and immediate consequences for many advanced RB techniques. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
ERIC Educational Resources Information Center
Zeman, Anne; Kelly, Kate
This book is written to answer commonly asked homework questions of fourth, fifth, and sixth graders. Included are facts, charts, definitions, explanations, examples, and illustrations. Topics include ancient number systems; decimal system; math symbols; addition; subtraction; multiplication; division; fractions; estimation; averages; properties;…
Forest statistics for Maine: 1971 and 1982
Douglas S. Powell; David R. Dickson
1984-01-01
A statistical report on the third forest survey of Maine (1982) and reprocessed data from the second survey (1971). Results of the surveys are displayed in 169 tables containing estimates of forest and timberland area, numbers of trees, timber volume, tree biomass, timber products output, and components of average annual net change in growing-stock volume for the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
... to stub beam cracking, caused by high flight cycle stresses from both pressurization and maneuver... high flight cycle stresses from both pressurization and maneuver loads. Reduced structural integrity of.... operators to comply with this proposed AD. Estimated Costs Number of U.S.- Action Work hours Average labor...
75 FR 34107 - Commission Information Collection Activities (FERC-547); Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... from the estimate made three years ago due to a reduction in the average number of filings received... authorize the Commission to order a refund, with interest, for any portion of a natural gas company's... gas companies to ensure that the flow-through of refunds owed by these companies are made as...
Maximum number of children per sperm donor based on false paternity rate.
Sánchez-Castelló, Isabel M; Gonzalvo, María C; Clavero, Ana; López-Regalado, María L; Mozas, Juan; Martínez-Granados, Luis; Navas, Purificación; Castilla, José A
2017-03-01
The aim of this study is to estimate the weight of each relevant factor in such unions of inadvertent consanguinity and to determine a "reasonable" limit for the number of children per donor, matching the probability of inadvertent consanguinity arising from the use of a sperm donor in assisted reproduction with that of such a union arising from false paternity. In this study, we applied to Spanish data a mathematical model of consanguineous unions, taking into account the following factors: maximum number of live births/donor, fertility rate, average number of births per donor in a pregnancy, donor success rate, matings per phenotype, number of newborns/year, number of donors needed in the population/year, and births by false paternity. The number of inadvertent unions between descendants of the same donor in Spain has been estimated at 0.4/year (one every two and a half years), although this frequency decreases as the reference population increases. On the other hand, the frequency of unions between family members due to false paternity has been estimated at 6.1/year. Thus, only 6% of such unions are due to the use of donor sperm. A total of 25 children per sperm donor are needed to align the probability of inadvertent consanguinity arising from the use of assisted reproduction with that due to false paternity. Therefore, we consider this number to be the maximum "reasonable" number of children born per donor in Spain.
Schneider, J F; Rempel, L A; Rohrer, G A; Brown-Brandl, T M
2011-11-01
The primary objective of this study was to determine if certain behavior traits were genetically correlated with reproduction. If 1 or both of the behavior traits were found to be correlated, a secondary objective was to determine if the behavior traits could be useful in selecting for more productive females. A scale activity score taken at 5 mo of age and a farrowing disposition score taken at farrowing were selected as the behavioral traits. Scale activity score ranged from 1 to 5 and farrowing disposition ranged from 1 to 3. Reproductive traits included age at puberty, number born alive, number born dead, litter birth weight, average piglet birth weight, number weaned, litter weaning weight, average weaning weight, wean-to-estrus interval, ovulation rate including gilts, and postweaning ovulation rate. Genetic correlations between scale activity score and reproduction ranged from -0.79 to 0.61. Three of the correlations, number born alive (P < 0.01), average piglet birth weight (P < 0.001), and wean-to-estrus interval (P = 0.014), were statistically significant but included both favorable and antagonistic correlations. In contrast, all but 1 of the farrowing disposition correlations were favorable, ranging from -0.66 to 0.67. Although only the correlation with litter birth weight was significant (P = 0.018), the consistent favorable direction of all farrowing disposition correlations, except average weaning weight, shows a potential for including farrowing disposition in a selection program.
Optimization of Scan Parameters to Reduce Acquisition Time for Diffusion Kurtosis Imaging at 1.5T.
Yokosawa, Suguru; Sasaki, Makoto; Bito, Yoshitaka; Ito, Kenji; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Kudo, Kohsuke
2016-01-01
To shorten the acquisition of diffusion kurtosis imaging (DKI) in 1.5-tesla magnetic resonance (MR) imaging, we investigated the effects of the number of b-values, diffusion directions, and number of signal averages (NSA) on the accuracy of DKI metrics. We obtained 2 image datasets with 30 gradient directions, 6 b-values up to 2500 s/mm(2), and 2 signal averages from 5 healthy volunteers and generated DKI metrics, i.e., mean, axial, and radial kurtosis (MK, K∥, and K⊥) maps, from various combinations of the datasets. These maps were evaluated against those from the full datasets by using the intraclass correlation coefficient (ICC). The MK and K⊥ maps generated from the datasets including only the b-value of 2500 s/mm(2) showed excellent agreement (ICC, 0.96 to 0.99). For the same acquisition time and diffusion directions, agreement was better for MK, K∥, and K⊥ maps obtained with 3 b-values (0, 1000, and 2500 s/mm(2)) and 4 signal averages than for maps obtained with any other combination of number of b-values and NSA. Good agreement (ICC > 0.6) required at least 20 diffusion directions in all the metrics. MK and K⊥ maps with ICC greater than 0.95 can be obtained at 1.5T within 10 min (b-values = 0, 1000, and 2500 s/mm(2); 20 diffusion directions; 4 signal averages; slice thickness, 6 mm with no interslice gap; number of slices, 12).
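A common choice for this kind of agreement analysis is ICC(2,1) (two-way random effects, absolute agreement, single measure); the abstract does not state which ICC form was used, so the sketch below, with fabricated voxel values, is illustrative only:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    x: (n_subjects, k_raters) array of paired measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subject SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-rater SS
    sse = ((x - grand) ** 2).sum() - ssr - ssc   # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Fabricated check: two nearly identical "maps" should give ICC near 1.
rng = np.random.default_rng(1)
a = rng.normal(1.0, 0.3, 500)          # e.g. voxelwise MK from full data
b = a + rng.normal(0.0, 0.01, 500)     # estimate from a reduced dataset
icc = icc2_1(np.column_stack([a, b]))
print(icc)  # close to 1
```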
A Comparison of the Diagnostic Accuracy of Common Office Blood Pressure Monitoring Protocols.
Kronish, I M; Edmondson, D; Shimbo, D; Shaffer, J A; Krakoff, L R; Schwartz, J E
2018-04-20
The optimal approach to measuring office blood pressure (BP) is uncertain. We aimed to compare BP measurement protocols that differed based on numbers of readings within and between visits and by assessment method. We enrolled a sample of 707 employees without known hypertension or cardiovascular disease, and obtained 6 standardized BP readings during each of 3 office visits at least 1 week apart, using mercury sphygmomanometer and BpTRU oscillometric devices (18 readings per participant) for a total of 12,645 readings. We used confirmatory factor analysis to develop a model estimating "true" office BP that could be used to compare the probability of correctly classifying participants' office BP status using differing numbers and types of office BP readings. Averaging two systolic BP readings across two visits correctly classified participants as having BP below or above the 140 mmHg threshold at least 95% of the time if the averaged reading was <134 mmHg or >149 mmHg, respectively. Our model demonstrated that more confidence was gained by increasing the number of visits with readings than by increasing the number of readings within a visit. No clinically significant confidence was gained by dropping the first reading versus averaging all readings, nor by measuring with a manual mercury device versus with an automated oscillometric device. Averaging two BP readings across two office visits appeared to best balance increased confidence in office BP status with efficiency of BP measurement, though the preferred measurement strategy may vary with the clinical context.
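The decision bands quoted in the abstract can be sketched as a simple classification rule; the "indeterminate" label for averages between 134 and 149 mmHg is this sketch's interpretation of the stated 95%-confidence bounds, not a protocol from the paper:

```python
def classify_office_bp(sys_readings_mmhg):
    """Average systolic readings taken across two visits and apply the
    abstract's bands: an average < 134 mmHg classifies below the 140 mmHg
    threshold, > 149 mmHg above it, each with >= 95% confidence; values
    in between are treated as indeterminate in this sketch."""
    avg = sum(sys_readings_mmhg) / len(sys_readings_mmhg)
    if avg < 134:
        return "below threshold"
    if avg > 149:
        return "above threshold"
    return "indeterminate"

print(classify_office_bp([128, 131]))  # below threshold
print(classify_office_bp([152, 155]))  # above threshold
print(classify_office_bp([140, 142]))  # indeterminate
```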
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utsunomiya, S; Kushima, N; Katsura, K
Purpose: To establish a simple relation of backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy to its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate dose enhancement, which is quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparison with basic measured data, namely PDDs and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded in a tough water phantom irradiated by a 6-MV (nominal) X-ray beam of 10 cm × 10 cm field size from a Novalis TX (Brainlab). Ten different high-Z materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparison with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation relating the BSDF and range of backscatter to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with that measured by Gafchromic EBT3 films 0 to 10 mm away from the high-Z alloy. We found a simple linear relation of BSDF and range of backscatter to the average atomic number of dental alloys. The latter relation is explained by the fact that the energy spectrum of backscattered electrons depends strongly on average atomic number. Conclusion: We found a simple relation of backscatter dose enhancement around high-Z alloys to their average atomic number based on Monte Carlo calculations. This work provides a simple and useful method to estimate backscatter dose enhancement from dental alloys and the corresponding optimal thickness of dental spacer to prevent mucositis effectively.
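A linear BSDF-versus-Z relation of the reported kind can be recovered with an ordinary least-squares line fit; the (Z, BSDF) pairs below are fabricated for illustration, not the PHITS results:

```python
import numpy as np

# Fabricated (average atomic number, BSDF) pairs standing in for the
# Monte Carlo results; the paper's finding is that BSDF varies linearly
# with Z, so slope and intercept follow from a least-squares line fit.
z = np.array([13, 22, 29, 47, 53, 56, 74, 79, 82], dtype=float)  # Al ... Pb
bsdf = 1.0 + 0.004 * z   # illustrative linear data, not measured values

slope, intercept = np.polyfit(z, bsdf, 1)
print(round(slope, 4), round(intercept, 4))  # 0.004 1.0
```

Given such a fit, the dose enhancement for any alloy reduces to evaluating the line at its average atomic number.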
A Comparison of Two Methods for Initiating Air Mass Back Trajectories
NASA Astrophysics Data System (ADS)
Putman, A.; Posmentier, E. S.; Faiia, A. M.; Sonder, L. J.; Feng, X.
2014-12-01
Lagrangian air mass tracking programs in back-cast mode are a powerful tool for estimating the water vapor source of precipitation events. The altitudes above the precipitation site where particles' back trajectories begin influence the source estimation. We assume that precipitation comes from water vapor in condensing regions of the air column, so particles are placed in proportion to an estimated condensation profile. We compare two methods for estimating where condensation occurs and the resulting evaporation sites for 63 events at Barrow, AK. The first method (M1) uses measurements from a 35 GHz vertically resolved cloud radar (MMCR) and algorithms developed by Zhao and Garrett [1] to calculate precipitation rate. The second method (M2) uses the Global Data Assimilation System reanalysis data in a lofting model. We assess how accurately M2, developed for global coverage, will perform in the absence of direct cloud observations. Results from the two methods are statistically similar. The mean particle height estimated by M2 is, on average, 695 m (s.d. = 1800 m) higher than M1. The corresponding average vapor source estimated by M2 is 1.5° (s.d. = 5.4°) south of M1. In addition, vapor sources for M2 relative to M1 have ocean surface temperatures averaging 1.1 °C (s.d. = 3.5 °C) warmer, and reported ocean surface relative humidities 0.31% (s.d. = 6.1%) drier. All biases except the latter are statistically significant (p = 0.02 for each). Results were skewed by events where M2 estimated very high altitudes of condensation. When M2 produced an average particle height less than 5000 m (89% of events), M2 estimated mean particle heights 76 m (s.d. = 741 m) higher than M1, corresponding to a vapor source 0.54° (s.d. = 4.2°) south of M1. The ocean surface at the vapor source was an average of 0.35 °C (s.d. = 2.35 °C) warmer and ocean surface relative humidities were 0.02% (s.d. = 5.5%) wetter. None of the biases was statistically significant.
If the vapor source meteorology estimated by M2 is used to determine vapor isotopic properties, it would produce results similar to M1 in all cases except the occasional very high cloud. The methods strive to balance a sufficient number of tracked air masses for meaningful vapor source estimation with minimal computational time. [1] Zhao, C. and Garrett, T. J., 2008, J. Geophys. Res.
The development and evaluation of accident predictive models
NASA Astrophysics Data System (ADS)
Maleck, T. L.
1980-12-01
A mathematical model is developed that predicts the incremental change in the dependent variables (accident types) resulting from changes in the independent variables. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents; among the independent variables are district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents did not fit or validate as well as models for intersectional accidents.
Transonic transport study: Economics
NASA Technical Reports Server (NTRS)
Smith, C. L.; Wilcox, D. E.
1972-01-01
An economic analysis was performed to evaluate the impact of advanced materials, increased aerodynamic and structural efficiencies, and cruise speed on advanced transport aircraft designed for cruise Mach numbers of .90, .98, and 1.15. A detailed weight statement was generated by an aircraft synthesis computer program called TRANSYN-TST; these weights were used to estimate the cost to develop and manufacture a fleet of aircraft of each configuration. The direct and indirect operating costs were estimated for each aircraft, and an average return on investment was calculated for various operating conditions. There was very little difference between the operating economics of the aircraft designed for Mach numbers .90 and .98. The Mach number 1.15 aircraft was economically marginal in comparison but showed significant improvements with the application of carbon/epoxy structural material. However, the Mach .90 and Mach .98 aircraft are the most economically attractive vehicles in the study.
Gómez-Restrepo, Carlos; Naranjo-Lujan, Salomé; Rondón, Martín; Acosta, Andrés; Maldonado, Patricia; Arango Villegas, Carlos; Hurtado, Jaime; Hernández, Juan Carlos; Angarita, María Del Pilar; Peña, Marcela; Saavedra, Miguel Ángel; Quitian, Hoover
2017-06-01
In Colombia, some studies have estimated the medical costs associated with traffic accidents. It is necessary to assess results by city or region and to determine the influence of variables such as alcohol consumption. The main objective of this study was to identify health care costs associated with traffic accidents in Bogota and determine whether alcohol consumption can increase them. Cross-sectional cost study conducted in patients over 18 years treated in the emergency rooms of six different hospitals in Bogota, Colombia. The average total cost of medical care per patient was 628 USD in Bogota, Colombia. The average cost per accident was estimated at 1,349 USD. On average, the total cost of health care for patients with a positive blood alcohol level was 1.8 times higher than for those who did not consume alcohol. The indirect costs were on average 115.3 USD per injured person. Numbers are expressed in 2011 U.S. dollars. Alcohol consumption increases the risk of traffic accidents and direct medical health costs. Copyright © 2016 Elsevier Inc. All rights reserved.
Gibbs Energy Additivity Approaches in Estimation of Dynamic Viscosities of n-Alkane-1-ol
NASA Astrophysics Data System (ADS)
Phankosol, S.; Krisnangkura, K.
2017-09-01
Alcohols are solvents for organic and inorganic substances, and the dynamic viscosity of a liquid is an important transport property. In this study, models for estimating n-alkan-1-ol dynamic viscosities are correlated through Martin's rule of free energy additivity. Data available in the literature are used to validate and support the proposed equations. The dynamic viscosity of an n-alkan-1-ol can be easily estimated from its carbon number (nc) and temperature (T). The bias, average absolute deviation, and coefficient of determination (R2) in estimating n-alkan-1-ol viscosities are -0.17%, 1.73%, and 0.999, respectively. Dynamic viscosities at temperatures outside the range 288.15 to 363.15 K may possibly be estimated by this model, but the accuracy may be lower.
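Gibbs-energy-additivity correlations of this family are typically of the form ln(η) = a + b·nc + (c + d·nc)/T. The sketch below uses that assumed form with entirely hypothetical coefficients, not the fitted values from the study:

```python
import math

# Sketch of a Martin's-rule-style viscosity correlation:
#   ln(eta) = a + b*nc + (c + d*nc)/T
# The coefficients are hypothetical placeholders, not the study's fit.
def ln_viscosity(nc, temp_k, a=-5.0, b=-0.1, c=1500.0, d=80.0):
    return a + b * nc + (c + d * nc) / temp_k

# 1-octanol (nc = 8): viscosity should fall as temperature rises.
eta_288 = math.exp(ln_viscosity(8, 288.15))
eta_363 = math.exp(ln_viscosity(8, 363.15))
print(eta_363 < eta_288)  # True
```

The linear dependence on nc and 1/T is what lets the correlation run on nothing but carbon number and temperature.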
Prata, Ndola; Downing, Janelle; Bell, Suzanne; Weidert, Karen; Godefay, Hagos; Gessessew, Amanuel
2016-06-01
To provide a cost analysis of an injectable contraceptive program combining community-based distribution and social marketing in Tigray, Ethiopia. We conducted a cost analysis, modeling the costs and programmatic outcomes of the program's initial implementation in 3 districts of Tigray, Ethiopia. Costs were estimated from a review of program expense records, invoices, and interviews with health workers. Programmatic outcomes include number of injections and couple-year of protection (CYP) provided. We performed a sensitivity analysis on the average number of injections provided per month by community health workers (CHWs), the cost of the commodity, and the number of CHWs trained. The average programmatic CYP was US $17.91 for all districts with a substantial range from US $15.48-38.09 per CYP across districts. Direct service cost was estimated at US $2.96 per CYP. The cost per CYP was slightly sensitive to the commodity cost of the injectable contraceptives and the number of CHWs. The capacity of each CHW, measured by the number of injections sold, was a key input that drove the cost per CYP of this model. With a direct service cost of US $2.96 per CYP, this study demonstrates the potential cost of community-based social marketing programs of injectable contraceptives. The findings suggest that the cost of social marketing of contraceptives in rural communities is comparable to other delivery mechanisms with regards to CYP, but further research is needed to determine the full impact and cost-effectiveness for women and communities beyond what is measured in CYP. Copyright © 2016 Elsevier Inc. All rights reserved.
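The cost-per-CYP arithmetic can be sketched as follows; the 4-injections-per-CYP factor is the standard conversion for a 3-month injectable, and the cost and injection counts below are fabricated for illustration, not the program's records:

```python
# A 3-month injectable protects for one quarter, so 4 injections equal
# one couple-year of protection (CYP) under the standard conversion.
INJECTIONS_PER_CYP = 4

def cost_per_cyp(total_cost_usd, n_injections):
    """Program cost divided by the couple-years of protection delivered."""
    couple_years = n_injections / INJECTIONS_PER_CYP
    return total_cost_usd / couple_years

# Fabricated figures for illustration only:
print(round(cost_per_cyp(17910.0, 4000), 2))  # 17.91
```

Because CHW throughput (injections sold) sits in the denominator, it is the input the cost per CYP is most sensitive to, as the sensitivity analysis found.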
NASA Astrophysics Data System (ADS)
Ambarita, H.
2018-03-01
The Government of Indonesia (GoI) has a strong commitment to the targets of decreasing energy intensity and reducing greenhouse gas emissions. One significant way to reach these targets is increasing the energy efficiency of the lighting system in the residential sector. The objective of this paper is twofold: to estimate the potential energy saving and the emission reduction from lighting in the residential sector. Literature related to the lighting system in Indonesia has been reviewed to provide sufficient data for the estimation. The results show that in 2016 a total of 95.33 TWh of nationally produced electricity was used in the residential sector, equal to 44% of total produced electricity. The number of customers is 64.78 million houses. The average number of lamps per house and the average wattage of lamps used in Indonesia are 8.35 points and 13.8 W, respectively. The electricity used for lighting in the residential sector in Indonesia and the corresponding number of lamps are 20.03 TWh (21.02%) and 497 million, respectively. The projection shows that by 2026 the total energy for lighting and the number of lamps in the residential sector will be 25.05 TWh and 619 million, respectively. By promoting present high-efficiency lamp technology (LED), the potential energy saving and emission reduction in 2026 are 2.6 TWh and 2.1 million tons CO2eq, respectively.
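A quick consistency check on the quoted figures: dividing the annual lighting energy by the lamp stock and average wattage yields the implied average daily burn time (the burn time itself is not stated in the abstract):

```python
# Cross-check of the 2016 lighting figures: infer the implied average
# daily burn time from annual lighting energy, lamp count and wattage.
annual_lighting_twh = 20.03
n_lamps = 497e6
avg_lamp_watts = 13.8

annual_wh = annual_lighting_twh * 1e12
hours_per_day = annual_wh / (n_lamps * avg_lamp_watts * 365)
print(round(hours_per_day, 1))  # 8.0
```

An implied burn time of about 8 hours per day is plausible for residential lighting, so the three figures are mutually consistent.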
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
Data-driven confounder selection via Markov and Bayesian networks.
Häggström, Jenny
2018-06-01
To estimate a causal effect on an outcome without bias, unconfoundedness is often assumed. If there is sufficient knowledge of the underlying causal structure, then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data, and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated, and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and large numbers of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields the smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.
Lee, Eugenia E; Stewart, Barclay; Zha, Yuanting A; Groen, Thomas A; Burkle, Frederick M; Kushner, Adam L
2016-08-10
Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually against the estimated number of surgical procedures required for climate-related natural disasters was performed. Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People's Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster; countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands, given critical human and physical resource deficiencies.
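The baseline estimate is a direct rate-times-population calculation, which reproduces the abstract's 7.0 million figure:

```python
# The abstract's baseline arithmetic: a minimum of 5,000 surgical procedures
# per 100,000 persons, applied to ~140 million people affected annually.
affected_per_year = 140e6
procedures_per_100k = 5000

procedures_needed = affected_per_year * procedures_per_100k / 100_000
print(procedures_needed / 1e6)  # 7.0 (million procedures per year)
```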
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources emitting the same tracer from a limited set of merged concentration measurements. Identification here refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
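The exact-retrieval property in the noise-free case can be illustrated with a toy least-squares inversion. The isotropic Gaussian kernel below is a stand-in for the analytical dispersion model, and, unlike the paper's initialization-free scheme, this sketch does need an initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse problem: recover locations and strengths of two point
# sources from merged receptor measurements of a superposed field.
SIGMA = 2.0  # assumed kernel width, arbitrary units

def forward(params, receptors):
    """params = [x1, y1, q1, x2, y2, q2]; superposed Gaussian 'plumes'."""
    conc = np.zeros(len(receptors))
    for x, y, q in np.reshape(params, (-1, 3)):
        d2 = ((receptors - np.array([x, y])) ** 2).sum(axis=1)
        conc += q * np.exp(-d2 / (2.0 * SIGMA ** 2))
    return conc

# A 6 x 6 receptor grid over the domain.
gx, gy = np.meshgrid(np.linspace(0, 9, 6), np.linspace(0, 9, 6))
receptors = np.column_stack([gx.ravel(), gy.ravel()])

true_params = np.array([2.0, 3.0, 5.0, 7.0, 1.0, 3.0])
observed = forward(true_params, receptors)      # noise-free measurements

fit = least_squares(lambda p: forward(p, receptors) - observed,
                    x0=[3.0, 3.0, 1.0, 6.0, 2.0, 1.0])
print(np.round(fit.x, 2))  # ≈ [2. 3. 5. 7. 1. 3.]
```

With noise-free data exactly described by the forward model, the minimizer recovers the true locations and strengths, mirroring the exact-retrieval result stated above.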
Carpooling: status and potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendall, D.C.
1975-06-01
Studies were conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented together with analyses revealing characteristics of carpool trips, incentives, impacts of increased carpooling and issues related to carpool matching services. National survey results indicate the average auto occupancy for urban work-trips is 1.2 persons per auto. This value, and the average carpool occupancy of 2.5, have been relatively stable over the last five years. An increase in work-trip occupancy from 1.2 to 1.8 would require a 100% increase in the number of carpoolers. A model was developed to predict the maximum potential level of carpooling in an urban area. Results from applying the model to the Boston region were extrapolated to estimate a maximum nationwide potential between 47 and 71% of peak period auto commuters. Maximum benefits of increased carpooling include up to 10% savings in auto fuel consumption. A technique was developed for estimating the number of participants required in a carpool matching service to achieve a chosen level of matching among respondents, providing insight into tradeoffs between employer and regional or centralized matching services. Issues recommended for future study include incentive policies and their impacts on other modes, and the evaluation of new and ongoing carpool matching services. (11 references) (GRA)
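The matching-service sizing technique can be illustrated with a toy model: assume respondents fall independently and uniformly into a fixed number of origin-destination cells, and ask how many respondents are needed before a given fraction can find at least one match. The cell count and match levels below are illustrative, not the report's.

```python
def participants_needed(match_level, cells):
    """Smallest number of respondents n such that a respondent's chance of
    sharing an origin-destination cell with at least one other respondent
    reaches match_level.  Assumes respondents fall independently and
    uniformly into `cells` equally likely cells -- a toy stand-in for the
    report's matching model."""
    n = 2
    while 1.0 - (1.0 - 1.0 / cells) ** (n - 1) < match_level:
        n += 1
    return n

# Illustrative: respondents needed for a 50% vs 90% chance of a match
# when trips spread over 100 cells.
n_half = participants_needed(0.5, 100)
n_high = participants_needed(0.9, 100)
```

The steep growth from the 50% to the 90% level is the tradeoff the report examines between small employer-based pools and regional or centralized matching services.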
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-04-01
Investigations of Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e. frequency divided by sediment weight) based on chamber building rate and diameter increase rate both resulted in continuous reproduction throughout the year with two peaks, the stronger in May/June determined as the beginning of the summer generation (generation 1) and the weaker in November determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, seems to be about one year, obtained by both estimations based on the chamber building rate and the diameter increase rate.
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-12-01
We investigated the symbiont-bearing benthic foraminifer Palaeonummulites venosus to determine the chamber building rate (CBR), test diameter increase rate (DIR), reproduction time and longevity using the 'natural laboratory' approach. This is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. Test measurements were taken using MicroCT. The shift of the mean and standard deviation of component parameters during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged CBR and DIR under natural conditions. The individual dates of birth were estimated using the inverse averaged CBR and the inverse DIR fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on both CBR and DIR revealed continuous reproduction throughout the year with two peaks, a stronger one in June determined as the onset of the summer generation (generation 1) and a weaker one in November determined as the onset of the winter generation (generation 2). This reproduction scheme explains the presence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, is approximately 1.5 yr, an estimation obtained by using both CBR and DIR.
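The core of the natural-laboratory approach, fitting a Michaelis-Menten growth function and inverting it to date each individual's birth, can be sketched with synthetic data. All parameter values below are invented; the real fits come from the decomposed monthly frequency distributions.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(t, n_max, k):
    """Averaged chamber number as a function of age t (days)."""
    return n_max * t / (k + t)

# Synthetic observations with invented parameters and measurement noise.
rng = np.random.default_rng(0)
ages = np.linspace(10, 400, 40)                 # ages in days
n_max_true, k_true = 60.0, 120.0                # illustrative, not fitted values
chambers = michaelis_menten(ages, n_max_true, k_true) + rng.normal(0.0, 0.5, ages.size)

(n_max_est, k_est), _ = curve_fit(michaelis_menten, ages, chambers, p0=(50.0, 100.0))

def age_from_chambers(n, n_max, k):
    """Invert the fitted curve: estimated days since birth for chamber count n."""
    return k * n / (n_max - n)
```

Subtracting the inverted age from the sampling date gives the individual's estimated birth date, which is how the reproduction peaks in June and November are located.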
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
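The empirical Bayes idea, using the ensemble of unlinked loci to improve each per-locus estimate, can be illustrated with a simple normal-normal shrinkage sketch. The paper's actual method builds its empirical distribution from inferred clade splits; the gamma-distributed TMRCAs and the noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loci = 5000
true_t = rng.gamma(shape=2.0, scale=1.0, size=n_loci)   # latent TMRCAs (invented prior)
sigma = 0.8                                             # per-locus estimation noise
raw = true_t + rng.normal(0.0, sigma, n_loci)           # initial per-locus estimates

# Empirical Bayes (normal-normal approximation): estimate the prior's moments
# from the ensemble of unlinked loci, then shrink each raw estimate toward it.
mu_hat = raw.mean()
tau2_hat = max(raw.var() - sigma**2, 1e-12)             # method-of-moments prior variance
shrunk = (tau2_hat * raw + sigma**2 * mu_hat) / (tau2_hat + sigma**2)

mse_raw = np.mean((raw - true_t) ** 2)
mse_shrunk = np.mean((shrunk - true_t) ** 2)
```

Shrinking toward the cross-locus distribution reduces mean squared error exactly because many unlinked loci pin down the population of plausible TMRCAs, which is the calibration benefit the abstract describes.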
Chandrashekar, S; Guinness, L; Kumaranayake, L; Reddy, Bhaskar; Govindraj, Y; Vickerman, P; Alary, M
2010-01-01
Background: The India AIDS Initiative (Avahan) project is involved in rapid scale-up of HIV-prevention interventions in high-risk populations. This study examines the cost variation of 107 non-governmental organisations (NGOs) implementing targeted interventions over the start up (defined as the period from project inception until services to the key population commenced) and first 2 years of intervention. Methods: The Avahan interventions for female and male sex workers and their clients, in 62 districts of four southern states, were costed for the financial years 2004/2005 and 2005/2006 using standard costing techniques. Data sources include financial and economic costs from the lead implementing partners (LPs) and subcontracted local implementing NGOs, retrospectively and prospectively collected from a provider perspective. Ingredients and step-down allocation processes were used. Outcomes were measured using routinely collected project data. The average costs were estimated and a regression analysis carried out to explore causes of cost variation. Costs were calculated in US$ 2006. Results: The total number of registered people was 134 391 at the end of 2 years, and 124 669 had used STI services during that period. The median average cost of the Avahan programme for this period was $76 per person registered with the project. Sixty-one per cent of the cost variation could be explained by scale (positive association), number of NGOs per district (negative), number of LPs in the state (negative) and project maturity (positive) (p<0.0001). Conclusions: During rapid scale-up in the initial phase of the Avahan programme, a significant reduction in average costs was observed. As full scale-up had not yet been achieved, the average cost at scale is yet to be realised and the extent of the impact of scale on costs yet to be captured. Scale effects are important to quantify for planning resource requirements of large-scale interventions.
The average cost after 2 years is within the range of global scale-up costs estimates and other studies in India. PMID:20167740
Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Hossain, J.; Du, Y.; Links, J.; Rahmim, A.; Karakatsanis, N.; Akhbardeh, A.; Lyons, J.; Frey, E. C.
2015-04-01
Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of 99mTc-teboroxime imaging. Uptake of 99mTc-teboroxime was modeled based on data from the literature. Data were simulated using the anatomically realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time-activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the myocardial and blood-pool TACs for organ VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (product of defect extent and severity) was ≤5%. This indicates that the method was robust to modest model mismatch such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈5% for a defect with 25% extent and 100% severity).
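At its heart, estimating a small number of VOI TACs directly from projections is a linear least-squares problem in each time frame: the measured projections are a mixture of each VOI's projected indicator function weighted by its activity. A minimal sketch with a random stand-in system matrix and invented TAC shapes (the real system matrix models attenuation, scatter, and the collimator-detector response):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_frames = 200, 24

# Stand-in system matrix: column j is the projection of VOI j's indicator
# (a real one would model attenuation, scatter and detector response).
A = rng.uniform(0.0, 1.0, size=(n_bins, 3))

# Invented ground-truth TACs: myocardium-like uptake/washout, fast-clearing
# blood pool, and a reduced-uptake (defect-like) region.
t = np.arange(n_frames, dtype=float)
true_tacs = np.vstack([t * np.exp(-t / 20.0),
                       5.0 * np.exp(-t / 4.0),
                       0.4 * t * np.exp(-t / 30.0)])

proj = A @ true_tacs                                  # noise-free projections per frame
est_tacs, *_ = np.linalg.lstsq(A, proj, rcond=None)   # frame-by-frame least squares
```

With noise-free data and known VOI boundaries the recovery is exact; the biases quoted in the abstract arise when boundaries, defects, or uptake nonuniformities violate these assumptions.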
The trade-off between hospital cost and quality of care. An exploratory empirical analysis.
Morey, R C; Fine, D J; Loree, S W; Retzlaff-Roberts, D L; Tsubakitani, S
1992-08-01
The debate concerning quality of care in hospitals, its "value" and affordability, is increasingly of concern to providers, consumers, and purchasers in the United States and elsewhere. We undertook an exploratory study to estimate the impact on hospital-wide costs if quality-of-care levels were varied. To do so, we obtained costs and service output data regarding 300 U.S. hospitals, representing approximately a 5% cross section of all hospitals operating in 1983; both inpatient and outpatient services were included. The quality-of-care measure used for the exploratory analysis was the ratio of actual deaths in the hospital for the year in question to the forecasted number of deaths for the hospital; the hospital mortality forecaster had earlier (and elsewhere) been built from analyses of 6 million discharge abstracts, and took into account each hospital's actual individual admissions, including key patient descriptors for each admission. Such adjusted death rates have increasingly been used as potential indicators of quality, with recent research lending support for the viability of that linkage. The authors then utilized the economic construct of allocative efficiency relying on "best practices" concepts and peer groupings, built using the "envelopment" philosophy of Data Envelopment Analysis and Pareto efficiency. These analytical techniques estimated the efficiently delivered costs required to meet prespecified levels of quality of care. The marginal additional cost per each death deferred in 1983 was estimated to be approximately $29,000 (in 1990 dollars) for the average efficient hospital. Also, over a feasible range, a 1% increase in the level of quality of care delivered was estimated to increase hospital cost by an average of 1.34%. This estimated elasticity of quality on cost also increased with the number of beds in the hospital.
Cori, Anne; Boëlle, Pierre-Yves; Thomas, Guy; Leung, Gabriel M; Valleron, Alain-Jacques
2009-08-01
The extent to which self-adopted or intervention-related changes in behaviors affect the course of epidemics remains a key issue for outbreak control. This study attempted to quantify the effect of such changes on the risk of infection in different settings, i.e., the community and hospitals. The 2002-2003 severe acute respiratory syndrome (SARS) outbreak in Hong Kong, where 27% of cases were healthcare workers, was used as an example. A stochastic compartmental SEIR (susceptible-exposed-infectious-removed) model was used: the population was split into healthcare workers, hospitalized people and general population. Super spreading events (SSEs) were taken into account in the model. The temporal evolutions of the daily effective contact rates in the community and hospitals were modeled with smooth functions. Data augmentation techniques and Markov chain Monte Carlo (MCMC) methods were applied to estimate SARS epidemiological parameters. In particular, estimates of daily reproduction numbers were provided for each subpopulation. The average duration of the SARS infectious period was estimated to be 9.3 days (+/-0.3 days). The model was able to disentangle the impact of the two SSEs from background transmission rates. The effective contact rates, which were estimated on a daily basis, decreased with time, reaching zero inside hospitals. This observation suggests that public health measures and possible changes in individual behaviors effectively reduced transmission, especially in hospitals. The temporal patterns of reproduction numbers were similar for healthcare workers and the general population, indicating that on average, an infectious healthcare worker did not infect more people than any other infectious person. We provide a general method to estimate time dependence of parameters in structured epidemic models, which enables investigation of the impact of control measures and behavioral changes in different settings.
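The link between a time-varying effective contact rate and the daily reproduction number can be sketched with a deterministic (non-stochastic, no SSEs, unstructured) SEIR toy model. Only the roughly 9.3-day infectious period is taken from the abstract; the latent period, population size, and contact rates below are illustrative.

```python
import numpy as np

def seir(beta_of_t, sigma=1/6.4, gamma=1/9.3, n=7_000_000, i0=10, days=120):
    """Daily-step deterministic SEIR with a time-varying effective contact rate.
    gamma reflects the ~9.3-day infectious period estimated in the study; the
    latent period, population size and contact rates are illustrative only."""
    s, e, i, rec = n - i0, 0.0, float(i0), 0.0
    r_t = []
    for day in range(days):
        beta = beta_of_t(day)
        new_e = beta * s * i / n          # new exposures
        new_i = sigma * e                 # exposed becoming infectious
        new_r = gamma * i                 # removals (recovery or isolation)
        s, e, i, rec = s - new_e, e + new_e - new_i, i + new_i - new_r, rec + new_r
        r_t.append(beta / gamma * s / n)  # instantaneous reproduction number
    return np.array(r_t), rec

falling = lambda day: 0.35 * np.exp(-day / 30.0)   # contacts decay as controls take hold
constant = lambda day: 0.35                        # no behavioral change

r_t_fall, removed_fall = seir(falling)
r_t_const, removed_const = seir(constant)
```

Driving the contact rate toward zero pushes the daily reproduction number below one and sharply limits total infections, the qualitative pattern the study inferred for Hong Kong hospitals.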
Fish losses to double-crested cormorant predation in Eastern Lake Ontario, 1992-97
Ross, Robert M.; Johnson, James H.
1999-01-01
We examined 4,848 regurgitated digestive pellets of double-crested cormorants (Phalacrocorax auritus) over a 6-year period (1992–97) to estimate annual predation on sport and other fishes in the eastern basin of Lake Ontario. We found more than 51,000 fish of 28 species. Using a model that incorporates annual colony nest counts; fledgling production rates; adult, immature, and young-of-year residence times (seasonal); estimates of mean number of fish per pellet and mean fish size; and a fecal pathway correction factor (4.0 percent), we estimate total annual number of fish consumed by cormorants in the eastern basin of Lake Ontario to range from 37 million to 128 million fish for 1993–97. This fish loss equates to an estimated 0.93 million to 3.21 million kg (mean 2.07 million kg) of fish consumed per year, principally alewife (Alosa pseudoharengus, 42.3 percent) and yellow perch (Perca flavescens, 18.4 percent). Forage fish (alewife, cyprinids, trout-perch [Percopsis omiscomaycus], and other minor components) accounted for 65 percent of the diet, and panfish contributed 34 percent of the diet for the 5-year period. Game fish were minor components of the diet, in view of an average estimated annual consumption of 900,000 smallmouth bass (Micropterus dolomieui, 1.1 percent) and 168,000 salmonines (mostly lake trout, Salvelinus namaycush, 0.2 percent). Cormorant predation on lake trout fingerlings stocked in May 1993 and June 1994 was estimated through the use of coded wire tag recoveries from pellets collected on Little Galloo Island 1 and 4 days after stocking events. We estimated losses of 13.6 percent and 8.8 percent, respectively, of the fish stocked for the two events, an average of 11.2 percent. Such losses may be reduced through alteration of existing stocking practices.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jen, M; Yan, F; Tseng, Y
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties in the tagging efficiency of pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare the resultant CBF values with those calibrated by using 2D phase-contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Setting deltaM calculated by the maximum number of averages (50 pairs) as the reference, stable results were defined as those within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration of less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. 
Considering recent reports concerning the variability of PC MRI-based calibration, this study proposed an effective alternative for CBF quantification with pCASL.
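The feasibility analysis, asking how many tag/control pairs are needed before the running deltaM average settles within ±10% of the 50-pair reference, can be sketched on simulated pair values (the mean and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
pairs = rng.normal(1.0, 0.15, size=50)         # simulated deltaM, one value per tag/control pair

reference = pairs.mean()                        # 50-pair average serves as the reference
running = np.cumsum(pairs) / np.arange(1, 51)   # average of the first n pairs, n = 1..50

# Smallest n whose running average falls within +/-10% of the reference.
within = np.abs(running - reference) <= 0.10 * reference
n_first = int(np.argmax(within)) + 1
```

In the study, this criterion was met after only four pairs, which is what makes a sub-30-second calibration scan plausible.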
Motor unit number estimation based on high-density surface electromyography decomposition.
Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun
2016-09-01
To advance the motor unit number estimation (MUNE) technique using high density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect the single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). Achieved MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 by using the developed new MUNE at 10%, 20%, 30% and 10-30% MVCs, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows a convenient and non-invasive collection of a large size of SMUP pool with great representativeness. It provides a useful tool for estimating the motor unit number of proximal muscles. The present new MUNE method successfully avoids the use of intramuscular electrodes or multiple electrical stimuli which is required in currently available MUNE techniques; as such the new MUNE method can minimize patient discomfort for MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
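MUNE methods generally divide a whole-muscle response by a representative mean SMUP size; in this paper the SMUP pool comes from high-density surface EMG decomposition rather than electrical stimulation, with a high-density weighted average standing in for the simple mean. A generic sketch with invented amplitudes:

```python
import numpy as np

def mune(whole_muscle_size, smup_sizes):
    """Classic MUNE ratio: whole-muscle response divided by the average
    single motor unit potential.  The paper instead builds its SMUP pool
    from high-density surface EMG decomposition and uses a high-density
    weighted average; this simple mean is a stand-in."""
    return whole_muscle_size / np.mean(smup_sizes)

smups = np.array([42.0, 55.0, 38.0, 61.0, 47.0])   # illustrative SMUP sizes (uV)
estimate = mune(12_000.0, smups)
```

The larger and more representative the SMUP pool (over 20 per contraction level here), the less the estimate is biased by a few unusually large or small units.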
Rochus, Christina M; Johansson, Anna M
2017-01-01
Breeds with small population size are in danger of an increased inbreeding rate and loss of genetic diversity, which puts them at risk of extinction. In Sweden there are a number of local breeds, native breeds which have adapted to specific areas in Sweden, for which efforts are being made to keep them pure and healthy over time. One example of such a breed is the Swedish Gute sheep. The objective of this study was to estimate inbreeding and genetic diversity of Swedish Gute sheep. Three datasets were analysed: pedigree information for the whole population, pedigree information for 100 animals of the population, and microsatellite genotypes for 94 of the 100 animals. The average inbreeding coefficient for lambs born during a six-year period (2007-2012) did not increase during that time. The inbreeding calculated from the entire pedigree (0.038) and for a sample of the population (0.018) was very low. Sheep were more heterozygous at the microsatellite markers than expected (average multilocus heterozygosity and Ritland inbreeding estimates 1.01845 and -0.03931), and five of seven microsatellite markers were not in Hardy-Weinberg equilibrium due to heterozygosity excess. The total effective population size estimated from the pedigree information was 155.4 and the average harmonic mean effective population size estimated from microsatellites was 88.3. Pedigree and microsatellite genotype estimations of inbreeding were consistent with a breeding program with the purpose of reducing inbreeding. Our results showed that current breeding programs for the Swedish Gute sheep are consistent with efforts to keep this breed viable, and these breeding programs are an example for other small local breeds in conserving breeds for the future.
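Pedigree-based effective population size is commonly derived from the per-generation rate of inbreeding via Wright's relation Ne = 1/(2ΔF); whether this study used exactly that estimator is an assumption of this sketch. For instance, a ΔF near 0.0032 corresponds to an Ne of about 156, close to the pedigree estimate of 155.4 reported above.

```python
def effective_population_size(delta_f):
    """Wright's relation: Ne = 1 / (2 * delta_F), with delta_F the
    per-generation increase in the inbreeding coefficient.  Whether the
    study used exactly this estimator is an assumption of this sketch."""
    if delta_f <= 0:
        raise ValueError("rate of inbreeding must be positive")
    return 1.0 / (2.0 * delta_f)

ne = effective_population_size(0.0032)   # illustrative delta_F
```

The relation makes the conservation logic concrete: halving the rate of inbreeding per generation doubles the effective population size.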
Sweat, M; O'Donnell, C; O'Donnell, L
2001-04-13
Decisions about the dissemination of HIV interventions need to be informed by evidence of their cost-effectiveness in reducing negative health outcomes. Having previously shown the effectiveness of a single-session video-based group intervention (VOICES/VOCES) in reducing incidence of sexually transmitted diseases (STD) among male African American and Latino clients attending an urban STD clinic, this study estimates its cost-effectiveness in terms of disease averted. Cost-effectiveness was calculated using data on effectiveness from a randomized clinical trial of the VOICES/VOCES intervention along with updated data on the costs of intervention from four replication sites. STD incidence and self-reported behavioral data were used to make estimates of reduction in HIV incidence among study participants. The average annual cost to provide the intervention to 10 000 STD clinic clients was estimated to be US$447 005, with a cost per client of US$43.30. This expenditure would result in an average of 27.69 HIV infections averted, with an average savings from averted medical costs of US$5 544 408. The number of quality adjusted life years saved averaged 387.61, with a cost per HIV infection averted of US$21 486. This brief behavioral intervention was found to be feasible and cost-saving when targeted to male STD clinic clients at high risk of contracting and transmitting infections, indicating that this strategy should be considered for inclusion in HIV prevention programming.
Van der Kloot, W
1988-01-01
1. Following motor nerve stimulation there is a period of greatly enhanced quantal release, called the early release period or ERP (Barrett & Stevens, 1972b). Until now, measurements of the probability of quantal releases at different points in the ERP have come from experiments in which quantal output was greatly reduced, so that the time of release of individual quanta could be detected or so that the latency to the release of the first quantum could be measured. 2. A method has been developed to estimate the timing of quantal release during the ERP that can be used at much higher levels of quantal output. The assumption is made that each quantal release generates an end-plate current (EPC) that rises instantaneously and then decays exponentially. The peak amplitude of the quantal currents and the time constant for their decay are measured from miniature end-plate currents (MEPCs). Then a number of EPCs are averaged, and the times of release of the individual quanta during the ERP estimated by a simple mathematical method for deconvolution derived by Cohen, Van der Kloot & Attwell (1981). 3. The deconvolution method was tested using data from preparations in high-Mg2+ low-Ca2+ solution. One test was to reconstitute the averaged EPCs from the estimated times of quantal release and the quantal currents, by using Fourier convolution. The reconstructions fit well to the originals. 4. Reconstructions were also made from averaged MEPCs which do not rise instantaneously and the estimated times of quantal release.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2466987
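The deconvolution idea, treating the averaged EPC as the convolution of a release-time profile with a quantal current that rises instantaneously and decays exponentially, can be sketched with Fourier division on synthetic data. The time constants and the release profile below are invented; the original analysis follows the method of Cohen, Van der Kloot & Attwell (1981).

```python
import numpy as np

dt = 0.05                                   # sampling interval, ms
t = np.arange(0, 20, dt)
tau = 1.2                                   # quantal current decay constant, ms (invented)
kernel = np.exp(-t / tau)                   # instantaneous rise, exponential decay

# Hypothetical release-time profile during the early release period.
release = np.zeros_like(t)
release[(t >= 0.5) & (t < 2.5)] = np.hanning(int(2.0 / dt))

# The averaged EPC is the convolution of the release profile with the quantal current.
epc = np.convolve(release, kernel)[: t.size] * dt

# Fourier deconvolution: dividing the spectra recovers the release profile.
recovered = np.fft.irfft(np.fft.rfft(epc) / np.fft.rfft(kernel * dt), n=t.size)
```

Re-convolving the recovered profile with the quantal current reconstitutes the averaged EPC, which is the consistency check described in point 3 of the abstract.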
2008-04-04
Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions...information if it does not display a currently valid OMB control number. 1 . REPORT DATE 04 APR 2008 2. REPORT TYPE 3. DATES COVERED 00-00-2008 to... 1 Issue for Congress
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-17
... statute provides that the Institute will not develop a dollars-per-quality-life-year estimate as a... have practical utility; How the quality, utility, and clarity of the information to be collected may be... plan sponsors of calculating the average number of lives covered for the applicable policy year or plan...
Waste-to-Energy Projects at Army Installations
2011-01-13
JAN 2011, US Army Corps of Engineers, BUILDING STRONG®. Distribution Statement A -- approved for public release; distribution is unlimited. Presented at the DOE
Student Learning and Student Services: Policy Issues
ERIC Educational Resources Information Center
SchWeber, Claudine
2008-01-01
An increasing number of students in the United States are involved in online education, according to research by the Sloan Foundation. By fall 2004, approximately 2.6 million students were estimated to be enrolled in at least one online course, an average growth rate of 24.8% from 2003-04; this figure represents a 5% increase over the 2002-03…
Radiation Biomarker Research Using Mass Spectrometry
2007-07-01
...also warns of common pitfalls of biomarker analyses. Gerd Assmann chose HDL (high density lipoprotein) detection, known as the "good ...
Cold Regions - Environmental Testing of Individual Soldier Equipment
2011-10-17
Subject terms: soldier equipment, HFE, MANPRINT, cold region environment, CRTC.
Trauma-Informed Guilt Reduction (TrIGR) Intervention
2017-10-01
Award number: W81XWH-15-1-0331. Principal investigator: Christy Capone, PhD.
An Analytical Framework for Fast Estimation of Capacity and Performance in Communication Networks
2012-01-25
The standard random graph (due to Erdős-Rényi), in the regime where the average degree remains fixed (and above 1) and the number of nodes grows large, is not... abs/1010.3305 (Oct 2010). [6] O. Narayan, I. Saniee, G. H. Tucci, "Lack of Spectral Gap and Hyperbolicity in Asymptotic Erdős-Rényi Random Graphs
Is Managing Academics "Women's Work"? Exploring the Glass Cliff in Higher Education Management
ERIC Educational Resources Information Center
Peterson, Helen
2016-01-01
Sweden is among the countries with the highest per cent of women university Vice Chancellors in Europe. In "She Figures 2012" the average proportion of female Vice Chancellors in the 27 European Union countries is estimated to be 10 per cent. In Sweden the number is much higher: 43 per cent. Swedish higher education management has…
DSM-5 Autism Spectrum Disorder Symptomology in Fictional Picture Books
ERIC Educational Resources Information Center
Kelley, Jane E.; Cardon, Teresa A.; Algeo-Nichols, Dana
2015-01-01
In the last decade, schools have seen an increasing number of children with autism spectrum disorder (ASD) and the current estimated average of children in the United States who are diagnosed with an ASD is one out of 68 (Centers for Disease Control and Prevention, 2014). One way for educators and elementary students to learn about ASD is through…
NASA Technical Reports Server (NTRS)
Lepping, R. P.; Behannon, K. W.
1980-01-01
The characteristics of directional discontinuities (DD's) in the interplanetary magnetic field are studied using data from the Mariner 10 primary mission between 1.0 and 0.46 AU. Statistical and visual survey methods for DD identification resulted in a total of 644 events. Two methods were used to estimate the ratio of the number of tangential discontinuities (TD's) to the number of rotational discontinuities (RD's). Both methods show that the ratio of TD's to RD's varied with time and decreased with decreasing radial distance. A decrease in average discontinuity thickness of approx. 40 percent was found between 1.0 and 0.72 AU and approx. 54 percent between 1.0 and 0.46 AU, independent of type (TD or RD). This decrease in thickness for decreasing r is in qualitative agreement with Pioneer 10 observations between 1 and 5 AU. When the individual DD thicknesses are normalized with respect to the estimated local proton gyroradius (R_L), the average thickness at the three locations is nearly constant, 43 ± 6 R_L. This also holds true for both RD's and TD's separately. Statistical distributions of other properties, such as normal components and discontinuity plane angles, are presented.
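The gyroradius normalization described above can be sketched numerically. The solar-wind speed and field strength below are assumed round numbers for illustration, not values taken from the Mariner 10 data:

```python
import math

# Proton gyroradius r_L = m_p * v_perp / (e * B); inputs are illustrative,
# not the Mariner 10 measurements themselves.
M_P = 1.6726e-27   # proton mass, kg
E_Q = 1.6022e-19   # elementary charge, C

def proton_gyroradius(v_perp_m_s, b_tesla):
    """Local proton gyroradius in metres."""
    return M_P * v_perp_m_s / (E_Q * b_tesla)

# Example: ~40 km/s perpendicular thermal speed in a 6 nT field (assumed)
r_l = proton_gyroradius(4.0e4, 6.0e-9)

# Normalizing a measured discontinuity thickness by r_L, as in the survey:
thickness_km = 3000.0
normalized = thickness_km * 1e3 / r_l
print(f"r_L = {r_l / 1e3:.0f} km, thickness = {normalized:.1f} r_L")
```

With these assumed inputs the normalized thickness lands near the 43 R_L average the abstract reports, which is the point of the normalization: thickness scales with the local gyroradius.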
Substantial large-scale feedbacks between natural aerosols and climate
NASA Astrophysics Data System (ADS)
Scott, C. E.; Arnold, S. R.; Monks, S. A.; Asmi, A.; Paasonen, P.; Spracklen, D. V.
2018-01-01
The terrestrial biosphere is an important source of natural aerosol. Natural aerosol sources alter climate, but are also strongly controlled by climate, leading to the potential for natural aerosol-climate feedbacks. Here we use a global aerosol model to make an assessment of terrestrial natural aerosol-climate feedbacks, constrained by observations of aerosol number. We find that warmer-than-average temperatures are associated with higher-than-average number concentrations of large (>100 nm diameter) particles, particularly during the summer. This relationship is well reproduced by the model and is driven by both meteorological variability and variability in natural aerosol from biogenic and landscape fire sources. We find that the calculated extratropical annual mean aerosol radiative effect (both direct and indirect) is negatively related to the observed global temperature anomaly, and is driven by a positive relationship between temperature and the emission of natural aerosol. The extratropical aerosol-climate feedback is estimated to be -0.14 W m-2 K-1 for landscape fire aerosol, greater than the -0.03 W m-2 K-1 estimated for biogenic secondary organic aerosol. These feedbacks are comparable in magnitude to other biogeochemical feedbacks, highlighting the need for natural aerosol feedbacks to be included in climate simulations.
NASA Technical Reports Server (NTRS)
Houlahan, Padraig; Scalo, John
1992-01-01
A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.
NASA Technical Reports Server (NTRS)
Franklin, Janet; Simonett, David
1988-01-01
The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar
2013-01-01
Background: One of the significant dangers that threaten people’s lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September; thus, more accidents occurred in the summer than in the other seasons. The number of accidents for April 2012 was predicted with an autoregressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction for April 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during this period. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
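The time-series forecasting step can be sketched with a minimal AR(1) fit, the simplest member of the ARMA family the study used. The daily counts below are simulated around the reported 2011 mean of 180, not the Tehran registry data:

```python
import numpy as np

# Minimal AR(1) sketch of the ARMA forecasting idea; the daily accident
# counts are simulated, not the Tehran data.
rng = np.random.default_rng(0)
n = 365
counts = np.empty(n)
counts[0] = 180.0
for t in range(1, n):                      # AR(1) around a mean of 180
    counts[t] = 180 + 0.6 * (counts[t - 1] - 180) + rng.normal(0, 20)

# Least-squares fit of x_t - mu = phi * (x_{t-1} - mu) + eps_t
mu = counts.mean()
x, y = counts[:-1] - mu, counts[1:] - mu
phi = float(x @ y / (x @ x))

# One-step-ahead forecast for the next day
forecast = mu + phi * (counts[-1] - mu)
print(f"phi = {phi:.2f}, next-day forecast = {forecast:.0f} accidents")
```

A full ARMA(p,q) model with a seasonal component, as in the study, adds moving-average terms to the same recursion; the one-step forecast logic is unchanged.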
Cost-effectiveness in fall prevention for older women.
Hektoen, Liv F; Aas, Eline; Lurås, Hilde
2009-08-01
The aim of this study was to estimate the cost-effectiveness of implementing an exercise-based fall prevention programme for home-dwelling women in the > or = 80-year age group in Norway. The impact of the home-based individual exercise programme on the number of falls is based on a New Zealand study. On the basis of the cost estimates and the estimated reduction in the number of falls obtained with the chosen programme, we calculated the incremental costs and the incremental effect of the exercise programme as compared with no prevention. The calculation of the average healthcare cost of falling was based on assumptions regarding the distribution of fall injuries reported in the literature, four constructed representative case histories, assumptions regarding the healthcare provision associated with the treatment of the specified cases, and estimated unit costs from Norwegian cost data. We calculated the average healthcare costs per fall for the first year. We found that the reduction in healthcare costs per individual for treating fall-related injuries was 1.85 times higher than the cost of implementing the fall prevention programme. The reduction in healthcare costs more than offset the cost of the prevention programme for women aged > or = 80 years living at home, which indicates that health authorities should increase their focus on prevention. The main intention of this article is to present the costs associated with falls among the elderly in a transparent way and to make the whole cost picture visible. Cost-effectiveness analysis is a health policy tool that makes politicians and other health policy makers conscious of this complexity.
Lung cancer prevalence associated with radon exposure in Norwegian homes.
Hassfjell, Christina Søyland; Grimsrud, Tom Kristian; Standring, William J F; Tretli, Steinar
2017-08-22
Radioactive radon gas is generated from uranium and thorium in underlying rocks and seeps into buildings. The gas and its decay products emit carcinogenic radiation and are regarded as the second most important risk factor for lung cancer after active tobacco smoking. The average radon concentration in Norwegian homes is higher than in most other Western countries. From a health and cost perspective, it is important to be able to quantify the risk of lung cancer posed by radon exposure. We estimated the radon-related risk of lung cancer in Norway based on risk estimates from the largest pooled analysis of European case-control studies, combined with the hitherto largest set of data on radon concentration measurements in Norwegian homes. Based on these estimates, we calculate that radon is a contributory factor in 12% of all cases of lung cancer annually, assuming an average radon concentration of 88 Bq/m3 in Norwegian homes. For 2015, this accounted for 373 cases of lung cancer, with an approximate 95% confidence interval of 145–682. Radon most likely contributes to a considerable number of cases of lung cancer. Since most cases of radon-associated lung cancer involve smokers or former smokers, a reduction of the radon concentration in homes could be a key measure to reduce the risk, especially for persons who are unable to quit smoking. The uncertainty in the estimated number of radon-associated cases can be reduced through a new national radon mapping study with an improved design.
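The attributable-fraction arithmetic behind an estimate like this can be sketched as follows. The 16% excess relative risk per 100 Bq/m3 is the pooled European case-control estimate commonly cited in this literature; treating it as the exact coefficient behind the Norwegian figures is an assumption for illustration:

```python
# Population attributable fraction for radon: PAF = ERR / (1 + ERR),
# with ERR assumed linear in concentration (pooled European estimate of
# ~16% excess relative risk per 100 Bq/m3; an assumption here).
ERR_PER_BQ = 0.16 / 100.0          # excess relative risk per Bq/m3

def attributable_fraction(mean_radon_bq_m3):
    err = ERR_PER_BQ * mean_radon_bq_m3
    return err / (1.0 + err)

paf = attributable_fraction(88.0)   # average Norwegian home, Bq/m3
cases_2015 = 373                    # radon-associated cases cited above
total_lung_cancers = cases_2015 / paf
print(f"PAF = {paf:.1%}, implied total lung cancers = {total_lung_cancers:.0f}")
```

With these inputs the PAF comes out near 12%, matching the abstract's figure, and dividing the 373 attributable cases by it recovers the implied total annual lung cancer count.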
Reinecke, R D; Steinberg, T
1981-04-01
This is the second in the series of Ophthalmology Manpower Studies. Part I presented estimates of disease prevalence and incidence, the average amount of time required to care for such conditions, and based on that information, the total hours of ophthalmological services required to care for all the projected need in the population. Using different estimates of the average number of hours worked per year per ophthalmologist (based on a 35, 40 and 48 hours/week in patient care), estimates of the total number of ophthalmologists required were calculated. This method is basically similar to the method later adopted by the Graduate Medical Education National Advisory Committee (GMENAC) to arrive at estimates of hours of ophthalmological services required for 1990. However, instead of using all the need present in the population, the GMENAC panel chose to use an "adjusted-needs based" model as a compromise between total need and actual utilization, the former being an overestimation and the latter being an underestimation since it is in part a function of the barriers to medical care. Since some of these barriers to medical care include informational factors, as well as availability and accessibility, this study was undertaken to assess the utilization of these services and the adequacy of present ophthalmological manpower in the opinion of the consumer. Also, since the consumer's choice or behavior depends on being informed about the differences between optometrists and ophthalmologists, such knowledge was assessed and the responses further evaluated after explanatory statements were made to the responders.
Ginsberg, Howard; Lee, Chong; Volson, Barry; Dyer, Megan C.; LeBrun, Roger A.
2017-01-01
The relationship between engorgement weight of female Ixodes scapularis Say and characteristics of offspring was studied using field-collected females fed on rabbits in the laboratory. The number of eggs laid was positively related to maternal engorgement weight in one trial, and larval size (estimated by scutal area) was positively related to maternal engorgement weight in the other. These results suggest a trade-off in number of eggs produced versus average size of offspring, possibly determined during late engorgement. The adults for the two trials were collected from different sites in southern Rhode Island and in different seasons (the fall adults were newly emerged, while the spring adults had presumably lived through the winter), so it is not clear whether these results reflect genetic differences or subtle environmental differences between trials. Percent egg hatch and average fat content of larvae were not related to female engorgement weight. We present a modified method to measure lipid content of pooled larval ticks.
Effect of historic land cover change on runoff curve number estimation in Iowa
Wehmeyer, Loren L.; Weirich, Frank H.
2010-01-01
Within three decades of European-descended settlers arriving in Iowa, much of the land cover across the state was transformed from prairie and forest to farmland, patches of forest, and urbanized areas. Between 1832 and 1859, the General Land Office surveyed the state of Iowa to aid in the disbursement of land. In 1875, an illustrated atlas of the State of Iowa was published. Using these two data resources for classifying land cover, the hydrologic impact of the land cover change resulting from the first three decades of settlement is presented in terms of the effect on the area-weighted average curve number, a term commonly used to predict runoff from rainstorms. In the four watersheds studied, the area-weighted average curve number increased by a mean of 16.4 from 61.4 to 77.8 with the greatest magnitude of change occurring in the two western Iowa watersheds as opposed to the two more heavily forested eastern Iowa watersheds.
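The curve-number calculation the study relies on can be sketched as follows. The storm depth and the per-class areas are invented for illustration; the two average CN values (61.4 pre-settlement, 77.8 post-settlement) come from the abstract:

```python
# SCS curve-number runoff sketch: area-weighted CN and storm runoff depth.
def weighted_cn(cns, areas):
    """Area-weighted average curve number."""
    return sum(c * a for c, a in zip(cns, areas)) / sum(areas)

def runoff_depth(p_in, cn):
    """SCS-CN runoff Q (inches) for rainfall P (inches)."""
    s = 1000.0 / cn - 10.0           # potential maximum retention
    ia = 0.2 * s                     # initial abstraction
    return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in + 0.8 * s)

pre, post = 61.4, 77.8               # pre- and post-settlement average CN
p = 3.0                              # 3-inch design storm (assumed)
q_pre, q_post = runoff_depth(p, pre), runoff_depth(p, post)
print(f"Q: {q_pre:.2f} in -> {q_post:.2f} in for a {p}-in storm")
```

For this assumed 3-inch storm the runoff depth roughly triples between the two land-cover states, illustrating why a mean CN increase of 16.4 is hydrologically significant.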
Zayek, Michael M; Eyal, Fabien G; Smith, Robert C
2018-01-01
To compare the pharmacy costs of calfactant (Infasurf, ONY, Inc.) and poractant alfa (Curosurf, Chiesi USA, Inc., Cary, NC). The University of South Alabama Children's and Women's Hospital switched from calfactant to poractant alfa in 2013 and back to calfactant in 2015. Retrospectively, we used deidentified data from pharmacy records that provided type of surfactant administered, gestational age, birth weight, and number of doses on each patient. We examined differences in the number of doses by gestational ages and the differences in costs by birth weight cohorts because cost per dose is based on weight. There were 762 patients who received calfactant and 432 patients who received poractant alfa. The average number of doses required per patient was 1.6 administrations for calfactant-treated patients and 1.7 administrations for poractant alfa-treated patients, p = 0.03. A higher percentage of calfactant patients needed only 1 dose (53%) than poractant alfa patients (47%). The distribution of the number of doses for calfactant-treated patients was significantly lower than for the poractant alfa-treated patients, p < 0.001. Gestational age had no consistent effect on the number of doses required for either calfactant or poractant alfa. Per patient cost was higher for poractant alfa than for calfactant in all birth weight cohorts. Average per patient cost was $1160.62 for poractant alfa, 38% higher than the average per patient cost for calfactant ($838.34). Using poractant alfa for 22 months is estimated to have cost $202,732.75 more than it would have cost if the hospital had continued using calfactant. Our experience showed a strong pharmacoeconomic advantage for the use of calfactant compared to the use of poractant alfa because of similar average dosing and lower per patient drug costs.
2012-03-01
Report contains color. PA Case Number: 88ABW-2012-1688; Clearance Date: 23 Mar 2012. See also Volume 1, AFRL-RZ-WP-TR…
2016-08-01
Simulation, Control, and Applications for Flow and Scattering Problems
2015-04-10
Li, Zhilin; Ito, Kazufumi
Solar Resource Assessment for Sri Lanka and Maldives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renne, D.; George, R.; Marion, B.
2003-08-01
The countries of Sri Lanka and the Maldives lie within the equatorial belt, a region where substantial solar energy resources exist throughout much of the year in adequate quantities for many applications, including solar water heating, solar electricity, and desalination. The extent of solar resources in Sri Lanka has been estimated in the past based on a study of the daily total direct sunshine hours recorded at a number of weather and agricultural stations throughout the country. These data have been applied to the well-known Angstrom relationship in order to obtain an estimate of the distribution of monthly average daily total solar resources at these stations. This study is an effort to improve on these estimates in two ways: (1) to apply a gridded cloud cover database at a 40-km resolution to produce updated monthly average daily total estimates of all solar resources (global horizontal, DNI, and diffuse) for the country, and (2) to input hourly or three-hourly cloud cover observations made at nine weather stations in Sri Lanka and two in the Maldives into a solar model that produces estimates of hourly solar radiation values of the direct normal, global, and diffuse resource covering the length of the observational period. Details and results of these studies are summarized in this report.
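The Angstrom relationship mentioned above regresses the ratio of daily global radiation H to extraterrestrial radiation H0 on the sunshine fraction n/N: H/H0 = a + b(n/N). The coefficients and data points below are illustrative defaults, not the Sri Lanka values:

```python
import numpy as np

# Angstrom-Prescott sketch: H/H0 = a + b * (n/N). The coefficients
# a = 0.25, b = 0.50 are commonly quoted defaults, assumed here.
a, b = 0.25, 0.50

def daily_radiation(h0, sunshine_fraction):
    """Daily global radiation from extraterrestrial radiation H0."""
    return h0 * (a + b * sunshine_fraction)

# Recovering a, b from (sunshine fraction, H/H0) pairs by least squares;
# the small perturbations stand in for measurement noise.
frac = np.array([0.2, 0.4, 0.6, 0.8])
ratio = 0.25 + 0.5 * frac + np.array([0.01, -0.01, 0.02, -0.02])
b_hat, a_hat = np.polyfit(frac, ratio, 1)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
```

Fitting a and b station by station from sunshine records is exactly the kind of estimation the earlier Sri Lanka studies performed before the gridded cloud-cover approach described here.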
Association of Many Regions of the Bacillus subtilis Chromosome with the Cell Membrane
Ivarie, Robert D.; Pène, Jacques J.
1973-01-01
Unsheared lysates of Bacillus subtilis 168T− containing uniformly labeled deoxyribonucleic acid (DNA) were exposed to varying doses of gamma rays to introduce double-strand scissions in the chromosome. From an estimate of the number-average molecular weight and the amount of DNA bound to membrane after irradiation, about 70 to 90 regions of the bacterial chromosome were detected in membrane fractions. Since this number was independent of the molecular weight of the DNA (i.e., the extent of fragmentation of the chromosome), it is thought to represent an upper limit in the number of membrane-binding sites per chromosome. PMID:4196245
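The number-average estimate in this abstract can be illustrated with hypothetical values: the number-average molecular weight is M_n = Σ(N_i · M_i) / Σ(N_i), and the count of membrane-bound regions is roughly the bound DNA mass divided by M_n. All input values below are invented solely to show the arithmetic:

```python
# Sketch of the estimate in the abstract: from the number-average
# molecular weight M_n of the fragments and the mass of DNA bound to
# membrane, the number of bound regions is ~ (bound mass) / M_n.
# All inputs are hypothetical.
def number_average_mw(counts, weights):
    """M_n = sum(N_i * M_i) / sum(N_i)."""
    return sum(n * m for n, m in zip(counts, weights)) / sum(counts)

# Hypothetical fragment population (counts, molecular weights in daltons)
m_n = number_average_mw([10, 30, 60], [5e7, 2e7, 1e7])

genome_mw = 2.8e9          # B. subtilis chromosome, roughly 2.8e9 Da
fraction_bound = 0.4       # hypothetical fraction of DNA found bound
bound_sites = fraction_bound * genome_mw / m_n
print(f"M_n = {m_n:.2e} Da, ~{bound_sites:.0f} membrane-bound regions")
```

The abstract's key control is that this count stayed constant as gamma-ray dose (and hence M_n) varied, which is why it is read as an upper limit on binding sites rather than an artifact of fragmentation.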
A BDDC Algorithm with Deluxe Scaling for Three-Dimensional H (curl) Problems
Dohrmann, Clark R.; Widlund, Olof B.
2015-04-28
In our paper, we present and analyze a BDDC algorithm for a class of elliptic problems in the three-dimensional H(curl) space. Compared with existing results, our condition number estimate requires fewer assumptions and also involves two fewer powers of log(H/h), making it consistent with optimal estimates for other elliptic problems. Here, H/h is the maximum of Hi/hi over all subdomains, where Hi and hi are the diameter and the smallest element diameter for the subdomain Ωi. The analysis makes use of two recent developments. The first is our new approach to averaging across the subdomain interfaces, while the second is a new technical tool that allows arguments involving trace classes to be avoided. Furthermore, numerical examples are presented to confirm the theory and demonstrate the importance of the new averaging approach in certain cases.
A refined method for multivariate meta-analysis and meta-regression.
Jackson, Daniel; Riley, Richard D
2014-02-20
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
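A scaling-factor refinement of the kind this abstract describes, here assumed to be Hartung-Knapp-like (the abstract does not name the method), can be sketched on top of a standard DerSimonian-Laird random-effects fit. The effect sizes and variances below are made up:

```python
import numpy as np

# Univariate random-effects meta-analysis with a Hartung-Knapp-type
# scaling factor applied to the pooled standard error (an assumption
# about the refinement the abstract refers to). Data are invented.
y = np.array([0.30, 0.10, 0.45, 0.20, 0.35])   # study effect estimates
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07])   # within-study variances

w = 1.0 / v                                     # fixed-effect weights
mu_fe = np.sum(w * y) / np.sum(w)
q_stat = np.sum(w * (y - mu_fe) ** 2)
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q_stat - (k - 1)) / c)         # DerSimonian-Laird tau^2

w_re = 1.0 / (v + tau2)                         # random-effects weights
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# Scaling factor q: weighted dispersion about the pooled mean / (k - 1)
q_hk = np.sum(w_re * (y - mu_re) ** 2) / (k - 1)
se_hk = np.sqrt(q_hk) * se_re                   # refined standard error
print(f"mu = {mu_re:.3f}, SE = {se_re:.3f}, refined SE = {se_hk:.3f}")
```

A confidence interval would then use a t quantile with k - 1 degrees of freedom on the refined standard error, which is where the improved small-k coverage comes from; the multivariate extension in the paper generalizes the same scaling idea.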
Energy estimation of inclined air showers with help of detector responses
NASA Astrophysics Data System (ADS)
Dedenko, L. G.; Fedorova, G. F.; Fedunin, E. Yu.; Glushkov, A. V.; Kolosov, V. A.; Podgrudkov, D. A.; Pravdin, M. I.; Roganova, T. M.; Sleptsov, I. E.
2004-11-01
A method of muon groups has been suggested to estimate the detector responses for inclined giant air showers in terms of the quark-gluon string model with the geomagnetic field taken into account. Groups are average numbers of muons of positive or negative sign in small intervals of energy, production height and direction of motion in the atmosphere, estimated with the help of transport equations. For every group a relativistic equation of motion has been solved with the geomagnetic field and ionization losses taken into account. The response of a detector and the arrival time have been estimated for every group that strikes a detector. The energy of the inclined giant air shower estimated with the help of the calculated responses and the data observed at the Yakutsk array happens to be above 10^20 eV.
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
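The structured ACM estimate described above exploits the fact that the exact correlation matrix of an equispaced linear array is Toeplitz, so the noisy sample matrix can be smoothed by averaging along each diagonal. A minimal sketch on simulated data:

```python
import numpy as np

# Structured correlation-matrix estimate for an equispaced linear array:
# average the noisy sample matrix along each diagonal (Toeplitz
# structure). The array data here are simulated.
def toeplitz_average(r):
    """Replace each diagonal of r by its mean (structured ACM estimate)."""
    n = r.shape[0]
    out = np.empty_like(r)
    for k in range(-(n - 1), n):
        d = np.diagonal(r, offset=k).mean()
        idx = np.arange(max(0, -k), min(n, n - k))
        out[idx, idx + k] = d
    return out

rng = np.random.default_rng(1)
n = 6
true = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])
noisy = true + rng.normal(0, 0.05, (n, n))      # sample-estimate noise
structured = toeplitz_average(noisy)

err_noisy = np.linalg.norm(noisy - true)
err_struct = np.linalg.norm(structured - true)
print(f"error: noisy {err_noisy:.3f} -> structured {err_struct:.3f}")
```

Because each diagonal's noise is averaged over its entries, the structured estimate has lower variance, which is the mechanism behind the reduced look-direction signal sensitivity reported in the analysis.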
Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel
2016-10-01
Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in the estimated performance measures based on single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performances of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely the Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation errors as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows and surpasses that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA.
Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
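A miniature version of such a simulation comparison, using NumPy-only stand-ins for LDA (identity pooled covariance, i.e. a nearest-centroid rule) and 1-NN, might look like the following. The dimensions, sample sizes and effect size are arbitrary choices, not the study's experimental design:

```python
import numpy as np

# Simulation sketch in the spirit of the study: compare an LDA-style
# linear rule and kNN (k=1) on synthetic two-class Gaussian data.
rng = np.random.default_rng(42)

def make_data(n, p, delta):
    """Two Gaussian classes, n points each, shifted by delta per feature."""
    x0 = rng.normal(0, 1, (n, p))
    x1 = rng.normal(delta, 1, (n, p))
    return np.vstack([x0, x1]), np.repeat([0, 1], n)

def lda_error(xtr, ytr, xte, yte):
    # Identity pooled covariance assumed, so this reduces to the
    # nearest-centroid linear discriminant.
    m0, m1 = xtr[ytr == 0].mean(0), xtr[ytr == 1].mean(0)
    w = m1 - m0
    c = w @ (m0 + m1) / 2
    pred = (xte @ w > c).astype(int)
    return float(np.mean(pred != yte))

def knn1_error(xtr, ytr, xte, yte):
    d = ((xte[:, None, :] - xtr[None, :, :]) ** 2).sum(-1)
    pred = ytr[d.argmin(1)]
    return float(np.mean(pred != yte))

xtr, ytr = make_data(30, 10, 0.8)        # small training sample
xte, yte = make_data(500, 10, 0.8)       # large test sample
e_lda = lda_error(xtr, ytr, xte, yte)
e_knn = knn1_error(xtr, ytr, xte, yte)
print(f"LDA error {e_lda:.3f}, 1-NN error {e_knn:.3f}")
```

Sweeping the number of features, sample size and effect size over a grid and averaging errors over many replicates is the factorial design the study runs at scale on supercomputers.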
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Chappell, Lori J.; Wang, Minli; Kim, Myung-Hee
2011-01-01
The uncertainties in estimating the health risks from galactic cosmic rays (GCR) and solar particle events (SPE) are a major limitation to the length of space missions and the evaluation of potential risk mitigation approaches. NASA limits astronaut exposures to a 3% risk of exposure induced cancer death (REID), and protects against uncertainties in risks projections using an assessment of 95% confidence intervals after propagating the error from all model factors (environment and organ exposure, risk coefficients, dose-rate modifiers, and quality factors). Because there are potentially significant late mortality risks from diseases of the circulatory system and central nervous system (CNS) which are less well defined than cancer risks, the cancer REID limit is not necessarily conservative. In this report, we discuss estimates of lifetime risks from space radiation and new estimates of model uncertainties are described. The key updates to the NASA risk projection model are: 1) Revised values for low LET risk coefficients for tissue specific cancer incidence, with incidence rates transported to an average U.S. population to estimate the probability of Risk of Exposure Induced Cancer (REIC) and REID. 2) An analysis of smoking attributable cancer risks for never-smokers that shows significantly reduced lung cancer risk as well as overall cancer risks from radiation compared to risk estimated for the average U.S. population. 3) Derivation of track structure based quality functions depends on particle fluence, charge number, Z and kinetic energy, E. 4) The assignment of a smaller maximum in quality function for leukemia than for solid cancers. 5) The use of the ICRP tissue weights is shown to over-estimate cancer risks from SPEs by a factor of 2 or more. Summing cancer risks for each tissue is recommended as a more accurate approach to estimate SPE cancer risks. 6) Additional considerations on circulatory and CNS disease risks. 
Our analysis shows that an individual's history of smoking exposure has a larger impact on GCR risk estimates than amounts of radiation shielding or age at exposure (amongst adults). Risks for never-smokers compared to the average U.S. population are estimated to be reduced between 30% and 60%, dependent on model assumptions. Lung cancer is the major contributor to the reduction for never-smokers, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity and esophagus, and leukemia. The relative contribution of CNS risks to the overall space radiation detriment is potentially increased for never-smokers such as most astronauts. Problems in estimating risks for former smokers and the influence of second-hand smoke are discussed. Compared to the LET approximation, the new track structure derived radiation quality functions lead to a reduced risk for relativistic energy particles and increased risks for intermediate energy particles. Revised estimates for the number of safe days in space at solar minimum for heavy shielding conditions are described for never-smokers and the average U.S. population. Results show that missions to near Earth asteroids (NEA) or Mars violate NASA's radiation safety standards with the current levels of uncertainties. Greater improvements in risk estimates for never-smokers are possible, and would be dependent on improved understanding of risk transfer models, and elucidating the role of space radiation on the various stages of disease formation (e.g. initiation, promotion, and progression).
HIV prevention costs and their predictors: evidence from the ORPHEA Project in Kenya
Galárraga, Omar; Wamai, Richard G; Sosa-Rubí, Sandra G; Mugo, Mercy G; Contreras-Loya, David; Bautista-Arredondo, Sergio; Nyakundi, Helen; Wang’ombe, Joseph K
2017-01-01
Abstract We estimate costs and their predictors for three HIV prevention interventions in Kenya: HIV testing and counselling (HTC), prevention of mother-to-child transmission (PMTCT) and voluntary medical male circumcision (VMMC). As part of the ‘Optimizing the Response of Prevention: HIV Efficiency in Africa’ (ORPHEA) project, we collected retrospective data from government and non-governmental health facilities for 2011–12. We used multi-stage sampling to determine a sample of health facilities by type, ownership, size and interventions offered totalling 144 sites in 78 health facilities in 33 districts across Kenya. Data sources included key informants, registers and time-motion observation methods. Total costs of production were computed using both quantity and unit price of each input. Average cost was estimated by dividing total cost per intervention by number of clients accessing the intervention. Multivariate regression methods were used to analyse predictors of log-transformed average costs. Average costs were $7 and $79 per HTC and PMTCT client tested, respectively; and $66 per VMMC procedure. Results show evidence of economies of scale for PMTCT and VMMC: increasing the number of clients per year by 100% was associated with cost reductions of 50% for PMTCT, and 45% for VMMC. Task shifting was associated with reduced costs for both PMTCT (59%) and VMMC (54%). Costs in hospitals were higher for PMTCT (56%) in comparison to non-hospitals. Facilities that performed testing based on risk factors as opposed to universal screening had higher HTC average costs (79%). Lower VMMC costs were associated with availability of male reproductive health services (59%) and presence of community advisory board (52%). Aside from increasing production scale, HIV prevention costs may be contained by using task shifting, non-hospital sites, service integration and community supervision. PMID:29029086
Towards Photoplethysmography-Based Estimation of Instantaneous Heart Rate During Physical Activity.
Jarchi, Delaram; Casson, Alexander J
2017-09-01
Recently, numerous methods have been proposed for estimating average heart rate using photoplethysmography (PPG) during physical activity, overcoming the significant interference that motion causes in PPG traces. We propose a new algorithm framework for extracting instantaneous heart rate from wearable PPG and electrocardiogram (ECG) signals to provide an estimate of heart rate variability during exercise. For ECG signals, we propose a new spectral masking approach which modifies a particle filter tracking algorithm; for PPG signals, we constrain the instantaneous frequency obtained from the Hilbert transform to a region of interest around a candidate heart rate measure. Performance is verified using accelerometry and wearable ECG and PPG data from subjects while biking and running on a treadmill. Instantaneous heart rate provides more information than average heart rate alone. The instantaneous heart rate can be extracted during motion to an accuracy of 1.75 beats per min (bpm) from PPG signals and 0.27 bpm from ECG signals. Estimates of instantaneous heart rate can now be generated from PPG signals during motion. These estimates can provide more information on the human body during exercise. Instantaneous heart rate provides a direct measure of vagal nerve and sympathetic nervous system activity and is of substantial use in a number of analyses and applications. Previously it has not been possible to estimate instantaneous heart rate from wrist-wearable PPG signals.
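The Hilbert-transform step described above can be sketched as follows: compute the instantaneous frequency from the analytic signal's phase and constrain it to a window around a candidate heart rate. This is a minimal illustration assuming a pre-filtered single-component segment; the function name, parameter values, and synthetic signal are all hypothetical, not the authors' implementation.

```python
import numpy as np

def instantaneous_hr(ppg, fs, candidate_bpm, band_bpm=10.0):
    # Analytic signal via the FFT-based Hilbert construction
    x = np.asarray(ppg, dtype=float) - np.mean(ppg)
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    # Instantaneous frequency from the unwrapped phase derivative
    phase = np.unwrap(np.angle(analytic))
    inst_bpm = 60.0 * np.diff(phase) * fs / (2.0 * np.pi)
    # Constrain estimates to a region of interest around the candidate rate
    return np.clip(inst_bpm, candidate_bpm - band_bpm, candidate_bpm + band_bpm)

# Synthetic 'PPG': a clean 1.2 Hz (72 bpm) oscillation sampled at 50 Hz
fs = 50.0
t = np.arange(0, 10, 1 / fs)
hr = instantaneous_hr(np.sin(2 * np.pi * 1.2 * t), fs, candidate_bpm=75)
```

On this noise-free signal the estimate sits at 72 bpm; on real motion-corrupted PPG, the candidate rate supplied by a tracking stage does the heavy lifting.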
Estimation of the cost of using chemical protective clothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwope, A.D.; Renard, E.R.
1993-01-01
The U.S. Environmental Protection Agency, either directly or through its Superfund contractors, is a major user of chemical protective clothing. The purpose of the study was to develop estimates for the cost of using this clothing. These estimates can be used to guide purchase decisions and use practices. For example, economic guidelines would assist in decisions pertinent to single-use versus reusable clothing. Eight cost elements were considered: (1) purchase cost, (2) the number of times an item is used, (3) the number of items used per day, (4) cost of decontamination, (5) cost of inspection, (6) cost of maintenance, (7) cost of storage, and (8) cost of disposal. Estimates or assumed inputs for each of these elements were developed based on labor costs, fixed costs, and recurring costs. The cost elements were combined into an economic (mathematical) model having the single output of cost/use. By comparing cost/use for various use scenarios, conclusions are readily reached as to the optimum economics for purchase, use, and reuse of the clothing. In general, clothing should be considered disposable if its purchase cost is less than its average cost/use for the anticipated number of times it will be reused.
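The cost/use model above reduces to one line once the eight elements are in hand: amortize purchase and disposal over the number of uses, and charge the recurring elements on every use. A minimal sketch under that assumption; all dollar values below are invented, not the study's inputs.

```python
def cost_per_use(purchase, uses_per_item, decon, inspection,
                 maintenance, storage, disposal):
    # Purchase and disposal are amortized over the number of uses;
    # decontamination, inspection, maintenance, and storage recur per use.
    return (purchase + disposal) / uses_per_item + decon + inspection + maintenance + storage

# Hypothetical comparison: a reusable suit worn 10 times vs. a single-use suit
reusable = cost_per_use(200.0, 10, 8.0, 1.0, 2.0, 0.5, 1.5)
disposable = cost_per_use(30.0, 1, 0.0, 0.0, 0.0, 0.0, 1.5)
# Decision rule from the abstract: prefer the disposable item when its
# purchase cost is below the reusable garment's average cost per use.
```

With these (made-up) numbers the disposable suit edges out the reusable one; shifting any recurring cost tips the comparison the other way.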
Assessing the impact of biomedical research in academic institutions of disparate sizes
2009-01-01
Background The evaluation of academic research performance is nowadays a priority issue. Bibliometric indicators such as the number of publications, total citation counts and h-index are an indispensable tool in this task but their inherent association with the size of the research output may result in rewarding high production when evaluating institutions of disparate sizes. The aim of this study is to propose an indicator that may facilitate the comparison of institutions of disparate sizes. Methods The Modified Impact Index (MII) was defined as the ratio of the observed h-index (h) of an institution over the h-index anticipated for that institution on average, given the number of publications (N) it produces, i.e. MII = h/(10^α · N^β) (α and β denote the intercept and the slope, respectively, of the line describing the dependence of the h-index on the number of publications in log10 scale). MII values higher than 1 indicate that an institution performs better than the average, in terms of its h-index. Data on scientific papers published during 2002–2006 and within 36 medical fields for 219 Academic Medical Institutions from 16 European countries were used to estimate α and β and to calculate the MII of their total and field-specific production. Results From our biomedical research data, the slope β governing the dependence of h-index on the number of publications in biomedical research was found to be similar to that estimated in other disciplines (≈0.4). The MII was positively associated with the average number of citations/publication (r = 0.653, p < 0.001), the h-index (r = 0.213, p = 0.002), the number of publications with ≥ 100 citations (r = 0.211, p = 0.004) but not with the number of publications (r = -0.020, p = 0.765). 
It was the most highly associated indicator with the share of country-specific government budget appropriations or outlays for research and development as % of GDP in 2004 (r = 0.229) followed by the average number of citations/publication (r = 0.153) whereas the corresponding correlation coefficient for the h-index was close to 0 (r = 0.029). MII was calculated for first 10 top-ranked European universities in life sciences and biomedicine, as provided by Times Higher Education ranking system, and their total and field-specific performance was compared. Conclusion The MII should complement the use of h-index when comparing the research output of institutions of disparate sizes. It has a conceptual interpretation and, with the data provided here, can be computed for the total research output as well as for field-specific publication sets of institutions in biomedicine. PMID:19480665
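Given a fitted intercept and slope, the MII is a one-line computation. A sketch with hypothetical values for h, N, and alpha (the beta ≈ 0.4 slope echoes the abstract; alpha here is invented for illustration):

```python
def mii(h, n_publications, alpha, beta):
    # Observed h-index over the h-index expected for this output size:
    # MII = h / (10**alpha * N**beta)
    return h / (10 ** alpha * n_publications ** beta)

# Hypothetical institution: h = 40 from N = 2000 papers,
# with illustrative fit parameters alpha = 0.2 and beta = 0.4
score = mii(40, 2000, 0.2, 0.4)
# score > 1 means the institution outperforms the size-adjusted average
```

Because the expected h-index grows with N^0.4, the index rewards citation impact rather than sheer volume.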
Assessing the impact of biomedical research in academic institutions of disparate sizes.
Sypsa, Vana; Hatzakis, Angelos
2009-05-29
The evaluation of academic research performance is nowadays a priority issue. Bibliometric indicators such as the number of publications, total citation counts and h-index are an indispensable tool in this task but their inherent association with the size of the research output may result in rewarding high production when evaluating institutions of disparate sizes. The aim of this study is to propose an indicator that may facilitate the comparison of institutions of disparate sizes. The Modified Impact Index (MII) was defined as the ratio of the observed h-index (h) of an institution over the h-index anticipated for that institution on average, given the number of publications (N) it produces i.e. MII = h/(10^alpha · N^beta) (alpha and beta denote the intercept and the slope, respectively, of the line describing the dependence of the h-index on the number of publications in log10 scale). MII values higher than 1 indicate that an institution performs better than the average, in terms of its h-index. Data on scientific papers published during 2002-2006 and within 36 medical fields for 219 Academic Medical Institutions from 16 European countries were used to estimate alpha and beta and to calculate the MII of their total and field-specific production. From our biomedical research data, the slope beta governing the dependence of h-index on the number of publications in biomedical research was found to be similar to that estimated in other disciplines (approximately 0.4). The MII was positively associated with the average number of citations/publication (r = 0.653, p < 0.001), the h-index (r = 0.213, p = 0.002), the number of publications with ≥ 100 citations (r = 0.211, p = 0.004) but not with the number of publications (r = -0.020, p = 0.765). 
It was the most highly associated indicator with the share of country-specific government budget appropriations or outlays for research and development as % of GDP in 2004 (r = 0.229) followed by the average number of citations/publication (r = 0.153) whereas the corresponding correlation coefficient for the h-index was close to 0 (r = 0.029). MII was calculated for first 10 top-ranked European universities in life sciences and biomedicine, as provided by Times Higher Education ranking system, and their total and field-specific performance was compared. The MII should complement the use of h-index when comparing the research output of institutions of disparate sizes. It has a conceptual interpretation and, with the data provided here, can be computed for the total research output as well as for field-specific publication sets of institutions in biomedicine.
Safety modeling of urban arterials in Shanghai, China.
Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul
2015-10-01
Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Some structural features of the teichuronic acid of Bacillus licheniformis N.C.T.C. 6346 cell walls
Hughes, R. C.; Thurman, P. F.
1970-01-01
A teichuronic acid, containing glucuronic acid and N-acetylgalactosamine, was purified from acid extracts of Bacillus licheniformis 6346 cell walls as described by Janczura, Perkins & Rogers (1961). After reduction of the carboxyl function of glucuronic acid residues in the polysaccharide the reduced polymer contains equimolar amounts of N-acetylgalactosamine and glucose. Methylation of the reduced polysaccharide by the Hakomori (1964) technique showed the glucose residues to be substituted on C-4. A disaccharide, 3-O-glucuronosylgalactosamine, was isolated from partial acid hydrolysates of teichuronic acid. After N-acetylation the disaccharide produces chromogen readily on heating at pH 7, in agreement with C-3 substitution of the reducing N-acetylamino sugar. Teichuronic acid also produces chromogen under the same conditions, with concurrent elimination of a modified polysaccharide from C-3 of reducing terminal N-acetylgalactosamine residues of the teichuronic acid chains. The number-average chain lengths of several preparations of teichuronic acid were estimated from the amounts of chromogen produced in comparison with the N-acetylated disaccharide. The values obtained are in good agreement with the weight-average molecular weight determined by ultracentrifugal analysis. The reducing terminals of teichuronic acid are shown to be exclusively N-acetylgalactosamine by reduction with sodium boro[3H]hydride. The number-average chain lengths of the teichuronic acid preparations were estimated by the extent of incorporation of tritium and are in agreement with values obtained by the other methods. PMID:5419741
NASA Astrophysics Data System (ADS)
Ren, Shuangpo; Gragg, Samuel; Zhang, Ye; Carr, Bradley J.; Yao, Guangqing
2018-06-01
Fractured crystalline aquifers of mountain watersheds may host a significant portion of the world's freshwater supply. To effectively utilize water resources in these environments, it is important to understand the hydraulic properties, groundwater storage, and flow processes in crystalline aquifers and field-derived insights are critically needed. Based on borehole hydraulic characterization and monitoring data, this study inferred hydraulic properties and groundwater flow of a crystalline fractured aquifer in Laramie Range, Wyoming. At three open holes completed in a fractured granite aquifer, both slug tests and FLUTe liner profiling were performed to obtain estimates of horizontal hydraulic conductivity (Kh). Televiewer (i.e., optical and acoustic) and flowmeter logs were then jointly interpreted to identify the number of flowing fractures and fracture zones. Based on these data, hydraulic apertures were obtained for each borehole. Average groundwater velocity was then computed using Kh, aperture, and water level monitoring data. Finally, based on all available data, including cores, borehole logs, LIDAR topography, and a seismic P-wave velocity model, a three dimensional geological model of the site was built. In this fractured aquifer, (1) borehole Kh varies over ∼4 orders of magnitude (10⁻⁸ to 10⁻⁵ m/s). Kh is consistently higher near the top of the bedrock that is interpreted as the weathering front. Using a cutoff Kh of 10⁻¹⁰ m/s, the hydraulically significant zone extends to ∼40-53 m depth. (2) FLUTe-estimated hydraulic apertures of fractures vary over 1 order of magnitude, and at each borehole, the average hydraulic aperture by FLUTe is very close to that obtained from slug tests. Thus, slug tests can be used to provide a reliable estimate of the average fracture hydraulic aperture. (3) Estimated average effective fracture porosity is 4.0 × 10⁻⁴; therefore this fractured aquifer can host a significant quantity of water. 
(4) Natural groundwater velocity is estimated to range from 0.4 to 81.0 m/day, implying rapid pathways of fracture flow. (5) The average ambient water table position follows the boundary between saprolite and fractured bedrock. Groundwater flow at the site appears topography driven.
NASA Astrophysics Data System (ADS)
Ryu, B. Y.; Jung, H. J.; Bae, S. H.; Choi, C. U.
2013-12-01
CO2 emissions on roads in urban centers substantially affect global warming. It is important to quantify CO2 emissions in terms of the link unit in order to reduce these emissions on the roads. Therefore, in this study, we utilized real-time traffic data and attempted to develop a methodology for estimating CO2 emissions per link unit. Because of the recent development of the vehicle-to-infrastructure (V2I) communication technology, data from probe vehicles (PVs) can be collected and speed per link unit can be calculated. Among the existing emission calculation methodologies, mesoscale modeling, which is a representative modeling measurement technique, requires speed and traffic data per link unit. As it is not feasible to install fixed detectors at every link for traffic data collection, in this study, we developed a model for traffic volume estimation by utilizing the number of PVs that can be additionally collected when the PV data are collected. Multiple linear regression and an artificial neural network (ANN) were used for estimating the traffic volume. The independent variables and input data for each model are the number of PVs, travel time index (TTI), the number of lanes, and time slots. The result from the traffic volume estimate model shows that the mean absolute percentage error (MAPE) of the ANN is 18.67%, thus proving that it is more effective. The ANN-based traffic volume estimation served as the basis for the calculation of emissions per link unit. The daily average emissions for Daejeon, where this study was based, were 2210.19 ton/day. By vehicle type, passenger cars accounted for 71.28% of the total emissions. By road, Gyeryongro emitted 125.48 ton/day, accounting for 5.68% of the total emission, the highest percentage of all roads. In terms of emissions per kilometer, Hanbatdaero had the highest emission volume, with 7.26 ton/day/km on average. This study proves that real-time traffic data allow an emissions estimate in terms of the link unit. 
Furthermore, an analysis of CO2 emissions can support traffic management in making decisions related to the reduction of carbon emissions.
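The 18.67% figure quoted above is the standard mean absolute percentage error. A sketch of the metric with invented link volumes (the observed/estimated values below are illustrative, not the study's data):

```python
def mape(actual, predicted):
    # Mean absolute percentage error over links with nonzero observed volume
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)

# Illustrative hourly link volumes (veh/h) against model estimates
observed = [1200, 950, 400, 1800]
estimated = [1100, 1000, 460, 1750]
err = mape(observed, estimated)
```

Because each link contributes a relative error, MAPE weights low-volume links as heavily as congested ones, which is convenient when comparing estimators across a heterogeneous network.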
[Travel time and distances to Norwegian out-of-hours casualty clinics].
Raknes, Guttorm; Morken, Tone; Hunskår, Steinar
2014-11-01
Geographical factors have an impact on the utilisation of out-of-hours services. In this study we have investigated the travel distance to out-of-hours casualty clinics in Norwegian municipalities in 2011 and the number of municipalities covered by the proposed recommendations for secondary on-call arrangements due to long distances. We estimated the average maximum travel times and distances in Norwegian municipalities using a postcode-based method. Separate analyses were performed for municipalities with a single, permanently located casualty clinic. Altogether 417 out of 430 municipalities were included. We present the median value of the maximum travel times and distances for the included municipalities. The median maximum average travel distance for the municipalities was 19 km. The median maximum average travel time was 22 minutes. In 40 of the municipalities (10 %) the median maximum average travel time exceeded 60 minutes, and in 97 municipalities (23 %) the median maximum average travel time exceeded 40 minutes. The population of these groups comprised 2 % and 5 % of the country's total population respectively. For municipalities with permanent emergency facilities (N = 316), the median average travel time was 16 minutes and the median average distance 13 km. In many municipalities, the inhabitants have a long average journey to out-of-hours emergency health services, but seen as a whole, the inhabitants of these municipalities account for a very small proportion of the Norwegian population. The results indicate that the proposed recommendations for secondary on-call duty based on long distances apply to only a small number of inhabitants. The recommendations should therefore be adjusted and reformulated to become more relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoen, Ben; Cappers, Peter; Wiser, Ryan
2011-04-19
An increasing number of homes in the U.S. have sold with photovoltaic (PV) energy systems installed at the time of sale, yet relatively little research exists that estimates the marginal impacts of those PV systems on home sale prices. A clearer understanding of these possible impacts might influence the decisions of homeowners considering the installation of a PV system, homebuyers considering the purchase of a home with PV already installed, and new home builders considering including PV as an optional or standard product on their homes. This research analyzes a large dataset of California homes that sold from 2000 through mid-2009 with PV installed. It finds strong evidence that homes with PV systems sold for a premium over comparable homes without PV systems during this time frame. Estimates for this premium expressed in dollars per watt of installed PV range, on average, from roughly $4 to $5.5/watt across a large number of hedonic and repeat sales model specifications and robustness tests. When expressed as a ratio of the sales price premium of PV to estimated annual energy cost savings associated with PV, an average ratio of 14:1 to 19:1 can be calculated; these results are consistent with those of the more-extensive existing literature on the impact of energy efficiency on sales prices. When the data are split among new and existing homes, however, PV system premiums are markedly affected. New homes with PV show premiums of $2.3-2.6/watt, while existing homes with PV show premiums of more than $6/watt. Reasons for this discrepancy are suggested, yet further research is warranted. A number of other areas where future research would be useful are also highlighted.
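The premium-to-savings ratio reported above is simple arithmetic once the premium, system size, and annual savings are fixed. A sketch with an invented system size and savings figure (only the ~$4.5/W premium echoes the study's range):

```python
def premium_to_savings_ratio(premium_per_watt, system_watts, annual_savings):
    # Sales-price premium attributable to PV divided by estimated annual
    # energy cost savings (the abstract reports average ratios of 14:1 to 19:1)
    return premium_per_watt * system_watts / annual_savings

# Hypothetical 3 kW system at a $4.5/W premium and $800/yr in savings
ratio = premium_to_savings_ratio(4.5, 3000, 800.0)
```

With these assumed inputs the ratio lands inside the study's reported 14:1 to 19:1 band.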
Bi, Qifang; Ferreras, Eva; Pezzoli, Lorenzo; Legros, Dominique; Ivers, Louise C; Date, Kashmira; Qadri, Firdausi; Digilio, Laura; Sack, David A; Ali, Mohammad; Lessler, Justin; Luquero, Francisco J; Azman, Andrew S
2017-10-01
Killed whole-cell oral cholera vaccines (kOCVs) are becoming a standard cholera control and prevention tool. However, vaccine efficacy and direct effectiveness estimates have varied, with differences in study design, location, follow-up duration, and vaccine composition posing challenges for public health decision making. We did a systematic review and meta-analysis to generate average estimates of kOCV efficacy and direct effectiveness from the available literature. For this systematic review and meta-analysis, we searched PubMed, Embase, Scopus, and the Cochrane Review Library on July 9, 2016, and ISI Web of Science on July 11, 2016, for randomised controlled trials and observational studies that reported estimates of direct protection against medically attended confirmed cholera conferred by kOCVs. We included studies published on any date in English, Spanish, French, or Chinese. We extracted from the published reports the primary efficacy and effectiveness estimates from each study and also estimates according to number of vaccine doses, duration, and age group. The main study outcome was average efficacy and direct effectiveness of two kOCV doses, which we estimated with random-effect models. This study is registered with PROSPERO, number CRD42016048232. Seven trials (with 695 patients with cholera) and six observational studies (217 patients with cholera) met the inclusion criteria, with an average two-dose efficacy of 58% (95% CI 42-69, I²=58%) and effectiveness of 76% (62-85, I²=0). Average two-dose efficacy in children younger than 5 years (30% [95% CI 15-42], I²=0%) was lower than in those 5 years or older (64% [58-70], I²=0%; p<0.0001). Two-dose efficacy estimates of kOCV were similar during the first 2 years after vaccination, with estimates of 56% (95% CI 42-66, I²=45%) in the first year and 59% (49-67, I²=0) in the second year. 
Efficacy declined to 39% (13 to 57, I²=48%) in the third year, and to 26% (-46 to 63, I²=74%) in the fourth year. Two kOCV doses provide protection against cholera for at least 3 years. One kOCV dose provides at least short-term protection, which has important implications for outbreak management. kOCVs are effective tools for cholera control. The Bill & Melinda Gates Foundation. Copyright This is an Open Access article published under the CC BY 3.0 IGO license which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In any use of this article, there should be no suggestion that WHO endorses any specific organisation, products, or services. The use of the WHO logo is not permitted. This notice should be preserved along with the article's original URL.
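An "average efficacy" of the kind pooled above is commonly obtained with a DerSimonian-Laird random-effects model on log relative risks, with efficacy = 1 − RR. A generic sketch of that pooling step using invented trial inputs, not the review's actual data:

```python
import math

def dersimonian_laird(log_rr, var_log_rr):
    # Fixed-effect (inverse-variance) pooled estimate
    w = [1.0 / v for v in var_log_rr]
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird between-study variance
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate the between-study variance
    w_star = [1.0 / (v + tau2) for v in var_log_rr]
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
    return pooled, tau2

# Three hypothetical trials with relative risks near 0.4 (~60% efficacy)
log_rr = [math.log(0.35), math.log(0.45), math.log(0.40)]
var_log_rr = [0.04, 0.06, 0.05]
pooled, tau2 = dersimonian_laird(log_rr, var_log_rr)
efficacy_pct = 100.0 * (1.0 - math.exp(pooled))
```

With these made-up inputs the heterogeneity statistic falls below its degrees of freedom, so tau² is truncated to zero and the random-effects estimate coincides with the fixed-effect one.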
42 CFR 447.255 - Related information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... assurances described in § 447.253(a), the following information: (a) The amount of the estimated average... which that estimated average rate increased or decreased relative to the average payment rate in effect... and, to the extent feasible, long-term effect the change in the estimated average rate will have on...
Americans misperceive racial economic equality
Kraus, Michael W.; Rucker, Julian M.; Richeson, Jennifer A.
2017-01-01
The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black–White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites’ estimates of Black–White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality—a misperception that is likely to have any number of important policy implications. PMID:28923915
Americans misperceive racial economic equality.
Kraus, Michael W; Rucker, Julian M; Richeson, Jennifer A
2017-09-26
The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies ( n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black-White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites' estimates of Black-White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality-a misperception that is likely to have any number of important policy implications.
NASA Technical Reports Server (NTRS)
Vanlunteren, A.
1977-01-01
A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross couplings. From the Fourier coefficients of the relevant signals the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that for the elements in the main loops the crossover model applies. A weak linear cross coupling existed with the same dynamics as the elements in the main loops but with a negative sign.
Genetic analyses of stillbirth in relation to litter size using random regression models.
Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M
2010-12-01
Estimates of genetic parameters for number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5) with a total of 8,217 and 5,066 animals in the pedigree were analyzed separately. Number of stillborns was studied as a trait at the sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets with permanent environmental effects included for P2-5. Random effects modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree 0 B-splines (RRM-BS) with regressions on LS were used. For P1, the order of polynomial, the number of knots, and the number of intervals used for respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, the same estimates, averaged over RRM-L and RRM-S, were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. Parameters obtained by 3 RRM revealed the nonlinear relationship between additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS might possibly indicate different genetic bases for the incidence of stillbirth.
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions.
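A generic minimum-variance (Capon) spectral estimate for a short slow-time packet can be sketched as below. The packet size, model order, and diagonal loading are illustrative choices, not the paper's settings, and the single-frequency test signal is synthetic.

```python
import numpy as np

def capon_psd(x, order, freqs, fs):
    n, m = len(x), order
    # Sample covariance from overlapping length-m subvectors
    segs = np.array([x[i:i + m] for i in range(n - m + 1)])
    r = segs.T @ segs.conj() / segs.shape[0]
    r += 1e-3 * np.real(np.trace(r)) / m * np.eye(m)   # diagonal loading
    r_inv = np.linalg.inv(r)
    psd = []
    for f in freqs:
        a = np.exp(2j * np.pi * f / fs * np.arange(m))  # steering vector
        # Capon power: 1 / (a^H R^-1 a)
        psd.append(1.0 / np.real(a.conj() @ r_inv @ a))
    return np.array(psd)

# Packet of 12 complex slow-time samples with a single 500 Hz Doppler shift
fs = 4000.0
t = np.arange(12) / fs
x = np.exp(2j * np.pi * 500.0 * t)
freqs = np.linspace(0.0, 1500.0, 151)
p = capon_psd(x, order=6, freqs=freqs, fs=fs)
# The spectrum peaks at the true Doppler frequency
```

The appeal for packet-based CFI data is exactly what the abstract exploits: usable spectral resolution from very few slow-time samples, at the cost of a covariance estimate and a model-order choice.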
Heo, Seo Weon; Kim, Hyungsuk
2010-05-01
An estimation of ultrasound attenuation in soft tissues is critical in the quantitative ultrasound analysis since it is not only related to the estimations of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information of the scanned tissue. However, estimation performances of ultrasound attenuation are intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on the phase-compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and the weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values while the conventional methods estimate those within 2.96%. The proposed method is especially effective when we deal with the signal reflected from the deeper depth where the SNR level is lower or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
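The phase-compensation idea can be sketched as follows: align each RF segment to a reference via its cross-correlation peak, then coherently average the (optionally SNR-weighted) spectra so delay variation does not degrade the average. This toy version assumes a simple integer-sample delay and equal weights; it is an illustration of the idea, not the authors' exact algorithm.

```python
import numpy as np

def phase_compensated_psd(segments, weights=None):
    ref = np.asarray(segments[0], dtype=float)
    n = len(ref)
    if weights is None:
        weights = np.ones(len(segments))   # SNR-derived weights in practice
    acc = np.zeros(n, dtype=complex)
    for seg, w in zip(segments, weights):
        seg = np.asarray(seg, dtype=float)
        # Estimated delay of this segment relative to the reference
        lag = int(np.argmax(np.correlate(seg, ref, mode="full"))) - (n - 1)
        acc += w * np.fft.fft(np.roll(seg, -lag))  # compensate, then accumulate
    return np.abs(acc / np.sum(weights)) ** 2      # averaged block power spectrum

# Two copies of a synthetic RF echo, the second delayed by 3 samples
t = np.arange(64)
ref = np.exp(-0.5 * ((t - 32) / 3.0) ** 2) * np.cos(2 * np.pi * 0.2 * t)
psd = phase_compensated_psd([ref, np.roll(ref, 3)])
# With perfect alignment the result equals the reference's own power spectrum
```

Without the alignment step, averaging the complex spectra of delayed segments causes partial cancellation; compensating first keeps the average coherent.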
Pega, Frank; Gilsanz, Paola; Kawachi, Ichiro; Wilson, Nick; Blakely, Tony
2017-04-01
The effect of anti-poverty tax credit interventions on tobacco consumption is unclear. Previous studies have estimated short-term effects, did not isolate the effects of cumulative dose of tax credits, produced conflicting results, and used methods with limited control for some time-varying confounders (e.g., those affected by prior treatment) and for treatment regimen (i.e., study participants' tax credit receipt pattern over time). We estimated the longer-term, cumulative effect of New Zealand's Family Tax Credit (FTC) on tobacco consumption, using a natural experiment (administrative errors leading to exogenous variation in FTC receipt) and methods specifically designed to control for confounding, reverse causation, and treatment regimen. We extracted seven waves (2002-2009) of the nationally representative Survey of Family, Income and Employment, including 4404 working-age (18-65 years) parents in families. The exposure was the total number of years of receiving FTC. The outcomes were regular smoking and the average daily number of cigarettes usually smoked at wave 7. We estimated average treatment effects using inverse probability of treatment weighting and marginal structural modelling. Each additional year of receiving FTC affected neither the odds of regular tobacco smoking among all parents (odds ratio 1.02, 95% confidence interval 0.94-1.11), nor the number of cigarettes smoked among parents who smoked regularly (rate ratio 1.01, 95% confidence interval 0.99-1.03). We found no evidence for an association between the cumulative number of years of receiving an anti-poverty tax credit and tobacco smoking or consumption among parents. The assumptions of marginal structural modelling are quite demanding, and we therefore cannot rule out residual confounding. Nonetheless, our results suggest that tax credit programme participation will not increase tobacco consumption among poor parents, at least in this high-income country. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prevention of Blast-Related Injuries
2013-07-01
2008-10-08
2008-07-10
2008-11-19
Nitrogen Rings and Cages Stabilized by Metallic Atoms
2009-12-18
UNIVERSIDAD DE GUANAJUATO, DEPARTAMENTO DE QUIMICA, DIVISION DE CIENCIAS NATURALES Y EXACTAS, Dr. Gabriel Merino, NORIA ALTA s/n...
How Many Days Are Enough? A Study of 365 Days of Pedometer Monitoring
ERIC Educational Resources Information Center
Kang, Minsoo; Bassett, David R.; Barreira, Tiago V.; Tudor-Locke, Catrine; Ainsworth, Barbara; Reis, Jared P.; Strath, Scott; Swartz, Ann
2009-01-01
This study was designed to determine the number of days of pedometer monitoring necessary to achieve reliable and valid estimates of a 1-year average of step counts in adults based on either consecutive days (CD) or random days (RD) of data collection. Twenty-three participants (16 women; M age = 38 years, SD = 9.9) wore a Yamax SW 200 pedometer…
Test Operations Procedure (TOP) 2-2-650 Engine Cold-Starting and Warm-Up Tests
2008-02-12
75 FR 45636 - Animal Generic Drug User Fee Rates and Payment Procedures for Fiscal Year 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... . (The Pay.gov payment option is available to you after you submit a cover sheet. Click the ``Pay Now... that the number of applications that will pay fees in FY 2011 will equal 30 percent less than the... average receipts of 14.4 per year over the latest 5 years, including our FY 2010 estimate. Applying a 30...
Trauma Informed Guilt Reduction (TrIGR) Intervention
2017-10-01
AWARD NUMBER: W81XWH-15-1-0330 TITLE: Trauma-Informed Guilt Reduction (TrIGR) Intervention PRINCIPAL INVESTIGATOR: Sonya Norman, PhD CONTRACTING...
Home range use by swamp rabbits (Sylvilagus aquaticus) in a frequently inundated bottomland forest
Patrick A. Zollner; Winston P. Smith; Leonard A. Brennan
2000-01-01
Home range size of six swamp rabbits in south-central Arkansas was estimated by radio-telemetry from February 1991 through March 1992. The average home range size was significantly larger than previously reported estimates. This difference is partly attributable to the large number of observations per rabbit in our study, but may also be explained by our inclusion of...
Limiting Regret: Building the Army We Will Need
2015-08-01
...Army’s ability to help execute the national defense strategy against key threats. The documentation for this analysis was funded by philanthropic
2009-10-01
A Community Terrain-Following Ocean Modeling System (ROMS/TOMS)
2011-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. A Community Terrain-Following Ocean Modeling System (ROMS...732) 932-6555 x266 Fax: (732) 932-6520 email: arango@marine.rutgers.edu Award Number: N00014-10-1-0322 http://ocean-modeling.org http...
Sustainable Transportation: Strategy for Security, Prosperity, and Peace
2014-11-01
...licensed Professional Engineer in the state of Virginia.
Herculano-Houzel, Suzana
2011-01-01
It is usually considered that larger brains have larger neurons, which consume more energy individually, and are therefore accompanied by a larger number of glial cells per neuron. These notions, however, have never been tested. Based on glucose and oxygen metabolic rates in awake animals and their recently determined numbers of neurons, here I show that, contrary to the expected, the estimated glucose use per neuron is remarkably constant, varying only by 40% across the six species of rodents and primates (including humans). The estimated average glucose use per neuron does not correlate with neuronal density in any structure. This suggests that the energy budget of the whole brain per neuron is fixed across species and brain sizes, such that total glucose use by the brain as a whole, by the cerebral cortex and also by the cerebellum alone are linear functions of the number of neurons in the structures across the species (although the average glucose consumption per neuron is at least 10× higher in the cerebral cortex than in the cerebellum). These results indicate that the apparently remarkable use in humans of 20% of the whole body energy budget by a brain that represents only 2% of body mass is explained simply by its large number of neurons. Because synaptic activity is considered the major determinant of metabolic cost, a conserved energy budget per neuron has several profound implications for synaptic homeostasis and the regulation of firing rates, synaptic plasticity, brain imaging, pathologies, and for brain scaling in evolution. PMID:21390261
Herculano-Houzel, Suzana
2011-03-01
It is usually considered that larger brains have larger neurons, which consume more energy individually, and are therefore accompanied by a larger number of glial cells per neuron. These notions, however, have never been tested. Based on glucose and oxygen metabolic rates in awake animals and their recently determined numbers of neurons, here I show that, contrary to the expected, the estimated glucose use per neuron is remarkably constant, varying only by 40% across the six species of rodents and primates (including humans). The estimated average glucose use per neuron does not correlate with neuronal density in any structure. This suggests that the energy budget of the whole brain per neuron is fixed across species and brain sizes, such that total glucose use by the brain as a whole, by the cerebral cortex and also by the cerebellum alone are linear functions of the number of neurons in the structures across the species (although the average glucose consumption per neuron is at least 10× higher in the cerebral cortex than in the cerebellum). These results indicate that the apparently remarkable use in humans of 20% of the whole body energy budget by a brain that represents only 2% of body mass is explained simply by its large number of neurons. Because synaptic activity is considered the major determinant of metabolic cost, a conserved energy budget per neuron has several profound implications for synaptic homeostasis and the regulation of firing rates, synaptic plasticity, brain imaging, pathologies, and for brain scaling in evolution.
NASA Astrophysics Data System (ADS)
Mamounata, K.
2015-12-01
In response to the increasing demand for food linked to substantial population growth in Burkina Faso, irrigation has been widely used by the farming community to support agricultural production. A promising option for water resources development in this context is to increase the number of small dams, but a large number of small dams may affect the hydrological dynamics of sub-basins. This study assesses seasonal and intra-seasonal change in river basin hydrology for the Faga River sub-basin in Burkina Faso, West Africa, using the Water Simulation Model (WaSiM). The sub-basin contains a notably large number of small dams (more than 60), and their impact on watershed runoff was estimated together with changes in climate patterns. The coefficient of variation of rainfall in the sub-basin from 1982 to 2010 is 0.097, and streamflow shows a seasonal average of 25.58 km3 per month over the same period. The intra-seasonal climate variation for the same period is estimated at 0.087 in the scenario in which no dams are considered. Simulations including the five most important dams in the sub-basin show that the overall effect of small dams is, on average, a 20.76% reduction in runoff. Projections using the Representative Concentration Pathway (RCP) 4.5 and 8.5 climate scenarios, with a 25% increase in the number of dams, show a probable decrease in average runoff of about 29.54% and 35.25%, respectively, over the next fifty years. The findings show that small dams significantly reduce runoff from their watershed, and uncertainties related to the sustainability of the resource appear to increase over the same period. Therefore, despite the very large number of water storage infrastructures, reservoir operating strategies must be developed for water sustainability within the Faga sub-basin.
The average solar wind in the inner heliosphere: Structures and slow variations
NASA Technical Reports Server (NTRS)
Schwenn, R.
1983-01-01
Measurements from the HELIOS solar probes indicated that, apart from solar-activity-related disturbances, there exist two states of the solar wind which might result from basic differences in the acceleration process: the fast solar wind (v ≳ 600 km s⁻¹) emanating from magnetically open regions in the solar corona, and the slow solar wind (v ≲ 400 km s⁻¹) correlated with the more active regions and their mainly closed magnetic structures. In a comprehensive study using all HELIOS data taken between 1974 and 1982, the average behavior of the basic plasma parameters was analyzed as a function of the solar wind speed. The long-term variations of the solar wind parameters along the solar cycle were also determined and numerical estimates given. These modulations appear to be distinct though only minor. In agreement with earlier studies, it was concluded that the major modulations are in the number and size of high-speed streams and in the number of interplanetary shock waves caused by coronal transients. The latter usually cause huge deviations from the averages of all parameters.
The effect of normative context variability on recognition memory.
Steyvers, Mark; Malmberg, Kenneth J
2003-09-01
According to some theories of recognition memory (e.g., S. Dennis & M. S. Humphreys, 2001), the number of different contexts in which words appear determines how memorable individual occurrences of words will be: A word that occurs in a small number of different contexts should be better recognized than a word that appears in a larger number of different contexts. To empirically test this prediction, a normative measure is developed, referred to here as context variability, that estimates the number of different contexts in which words appear in everyday life. These findings confirm the prediction that words low in context variability are better recognized (on average) than words that are high in context variability. (c) 2003 APA, all rights reserved
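The context-variability measure, counting the number of distinct contexts in which a word occurs, can be sketched as follows. Treating each document in a corpus as one context is an assumption about how the measure is operationalized, not the paper's exact normative procedure.

```python
from collections import defaultdict

def context_variability(documents):
    """For each word, count the number of distinct documents (contexts)
    in which it appears -- a simple proxy for normative context variability."""
    contexts = defaultdict(set)
    for doc_id, text in enumerate(documents):
        # A word counts once per document, however often it repeats there
        for word in set(text.lower().split()):
            contexts[word].add(doc_id)
    return {w: len(ids) for w, ids in contexts.items()}
```

Under the theory described above, words with low counts from such a measure should, on average, be better recognized than words with high counts.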
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
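The 5-year moving average named above as the default estimator reduces to a simple windowed mean over consecutive annual inventory estimates:

```python
def moving_average(annual_estimates, window=5):
    """Moving-average estimator over the most recent `window` annual
    inventory estimates; returns one smoothed value per complete window."""
    return [
        sum(annual_estimates[i - window + 1:i + 1]) / window
        for i in range(window - 1, len(annual_estimates))
    ]

print(moving_average([1, 2, 3, 4, 5, 6]))  # [3.0, 4.0]
```

Each new panel of annual data shifts the window forward by one year, which is what gives the estimator its lag relative to current conditions.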
Use of geographic information systems in rabies vaccination campaigns.
Grisi-Filho, José Henrique de Hildebrand e; Amaku, Marcos; Dias, Ricardo Augusto; Montenegro Netto, Hildebrando; Paranhos, Noemia Tucunduva; Mendes, Maria Cristina Novo Campos; Ferreira Neto, José Soares; Ferreira, Fernando
2008-12-01
To develop a method to assist in the design and assessment of animal rabies control campaigns. A methodology was developed based on geographic information systems to estimate the animal (canine and feline) population and density per census tract and per subregion (known as "Subprefeituras") in the city of São Paulo (Southeastern Brazil) in 2002. The number of vaccination units in a given region was estimated so as to achieve a target proportion of vaccination coverage. A census database was used for the human population, together with estimated dog:inhabitant and cat:inhabitant ratios. Estimated figures were 1,490,500 dogs and 226,954 cats in the city, i.e. an animal population density of 1138.14 owned animals per km(2). In the 2002 campaign, 926,462 animals were vaccinated, resulting in a vaccination coverage of 54%. The estimated number of vaccination units needed to reach 70% vaccination coverage, vaccinating 700 animals per unit on average, was 1,729. These estimates are presented as maps of animal density according to census tracts and "Subprefeituras". The methodology used in the study may be applied in a systematic way to the design and evaluation of rabies vaccination campaigns, enabling the identification of areas of critical vaccination coverage.
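The coverage arithmetic can be reproduced directly from the abstract's figures. Note the unit count obtained this way, 1,718, differs slightly from the reported 1,729, so the authors' exact inputs or rounding presumably differed a little.

```python
import math

# Figures reported in the abstract for the 2002 Sao Paulo campaign
dogs, cats = 1_490_500, 226_954
vaccinated = 926_462

animals = dogs + cats
coverage = vaccinated / animals           # fraction of owned animals vaccinated
units = math.ceil(0.70 * animals / 700)   # units at ~700 animals vaccinated each

print(f"{coverage:.0%}")  # 54%
# `units` comes out at 1718 here versus the paper's 1,729.
```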
Rainfall extremes from TRMM data and the Metastatistical Extreme Value Distribution
NASA Astrophysics Data System (ADS)
Zorzetto, Enrico; Marani, Marco
2017-04-01
A reliable quantification of the probability of weather extremes occurrence is essential for designing resilient water infrastructures and hazard mitigation measures. However, it is increasingly clear that the presence of inter-annual climatic fluctuations determines a substantial long-term variability in the frequency of occurrence of extreme events. This circumstance questions the foundation of the traditional extreme value theory, hinged on stationary Poisson processes or on asymptotic assumptions to derive the Generalized Extreme Value (GEV) distribution. We illustrate here, with application to daily rainfall, a new approach to extreme value analysis, the Metastatistical Extreme Value Distribution (MEVD). The MEVD relaxes the above assumptions and is based on the whole distribution of daily rainfall events, thus allowing optimal use of all available observations. Using a global dataset of rain gauge observations, we show that the MEVD significantly outperforms the Generalized Extreme Value distribution, particularly for long average recurrence intervals and when small samples are available. The latter property suggests MEVD to be particularly suited for applications to satellite rainfall estimates, which only cover two decades, thus making extreme value estimation extremely challenging. Here we apply MEVD to the TRMM TMPA 3B42 product, an 18-year dataset of remotely-sensed daily rainfall providing a quasi-global coverage. Our analyses yield a global scale mapping of daily rainfall extremes and of their distributional tail properties, bridging the existing large gaps in ground-based networks. Finally, we illustrate how our global-scale analysis can provide insight into how properties of local rainfall regimes affect tail estimation uncertainty when using the GEV or MEVD approach. We find a dependence of the estimation uncertainty, for both the GEV- and MEV-based approaches, on the average annual number and on the inter-annual variability of rainy days. 
In particular, estimation uncertainty decreases 1) as the mean annual number of wet days increases, and 2) as the variability in the number of rainy days, expressed by its coefficient of variation, decreases. We tentatively explain this behavior in terms of the assumptions underlying the two approaches.
Pannullo, Francesca; Lee, Duncan; Neal, Lucy; Dalvi, Mohit; Agnew, Paul; O'Connor, Fiona M; Mukhopadhyay, Sabyasachi; Sahu, Sujit; Sarran, Christophe
2017-03-27
Estimating the long-term health impact of air pollution in a spatio-temporal ecological study requires representative concentrations of air pollutants to be constructed for each geographical unit and time period. Averaging concentrations in space and time is commonly carried out, but little is known about how robust the estimated health effects are to different aggregation functions. A second under-researched question is what impact air pollution is likely to have in the future. We conducted a study for England between 2007 and 2011, investigating the relationship between respiratory hospital admissions and different pollutants: nitrogen dioxide (NO2); ozone (O3); particulate matter, including particles with an aerodynamic diameter less than 2.5 micrometers (PM2.5) and less than 10 micrometers (PM10); and sulphur dioxide (SO2). Bayesian Poisson regression models accounting for localised spatio-temporal autocorrelation were used to estimate the relative risks (RRs) of pollution on disease risk, and for each pollutant four representative concentrations were constructed using combinations of spatial and temporal averages and maximums. The estimated RRs were then used to project the numbers of respiratory hospital admissions in the 2050s attributable to air pollution, based on emission projections from a number of Representative Concentration Pathways (RCPs). NO2 exhibited the largest association with respiratory hospital admissions of the pollutants considered, with estimated increased risks of between 0.9 and 1.6% for a one standard deviation increase in concentrations. The projected numbers of respiratory hospital admissions attributable to NO2 in the 2050s are lower than present-day rates under three RCPs (2.6, 6.0, and 8.5), owing to projected reductions in future NO2 emissions and concentrations.
NO2 concentrations exhibit consistent, substantial present-day health effects regardless of how a representative concentration is constructed in space and time. As concentrations are predicted to remain above limits set by European Union legislation until the 2030s in parts of urban England, NO2 will remain a substantial health risk for some time.
High cumulants of conserved charges and their statistical uncertainties
NASA Astrophysics Data System (ADS)
Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu
2017-10-01
We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)
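One generic way to see how a measured high cumulant comes with a statistical uncertainty is a bootstrap sketch. This is my construction for illustration, not the estimation procedure used in the paper.

```python
import numpy as np

def c4(x):
    """Fourth cumulant: C4 = <d**4> - 3*<d**2>**2, with d = x - mean(x)."""
    d = x - x.mean()
    return np.mean(d ** 4) - 3 * np.mean(d ** 2) ** 2

def bootstrap_error(x, nboot=200, seed=1):
    """Bootstrap estimate of the statistical uncertainty of C4."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = [c4(x[rng.integers(0, n, n)]) for _ in range(nboot)]
    return float(np.std(reps))

rng = np.random.default_rng(0)
events = rng.normal(size=100_000)  # Gaussian "event sample": true C4 = 0
```

With a finite number of events, c4(events) scatters around zero with a spread comparable to bootstrap_error(events), mimicking how a measured cumulant and its estimated uncertainty co-fluctuate with statistics.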
Log law of the wall revisited in Taylor-Couette flows at intermediate Reynolds numbers.
Singh, Harminder; Suazo, Claudio Alberto Torres; Liné, Alain
2016-11-01
We provide Reynolds-averaged azimuthal velocity profiles, measured with the particle image velocimetry technique in a Taylor-Couette system in turbulent flow at intermediate Reynolds numbers (7800 < Re < 18000). We find that in the wall regions, close to the inner and outer cylinders, the azimuthal velocity profile deviates significantly from the classical logarithmic law. In order to propose a new law of the wall, the profile of the turbulent mixing length was estimated from the data; it was shown to vary nonlinearly with the radial wall distance. Based on this turbulent mixing length expression, a law of the wall was proposed for the Reynolds-averaged azimuthal velocity, derived from a momentum balance and validated by comparison with different data. In addition, the profile of the viscous dissipation rate was investigated and compared with the global power needed to maintain the inner cylinder in rotation.
Cyclical absenteeism among private sector, public sector and self-employed workers.
Pfeifer, Christian
2013-03-01
This research note analyzes differences in the number of absent working days and doctor visits and in their cyclicality between private sector, public sector and self-employed workers. For this purpose, I used large-scale German survey data for the years 1995 to 2007 to estimate random effects negative binomial (count data) models. The main findings are as follows. (i) Public sector workers have on average more absent working days than private sector and self-employed workers. Self-employed workers have fewer absent working days and doctor visits than dependent employed workers. (ii) The regional unemployment rate is on average negatively correlated with the number of absent working days among private and public sector workers as well as among self-employed men. The correlations between regional unemployment rate and doctor visits are only significantly negative among private sector workers. Copyright © 2012 John Wiley & Sons, Ltd.
Amanze, Ogbonna O.; La Hera-Fuentes, Gina; Silverman-Retana, Omar; Contreras-Loya, David; Ashefor, Gregory A.; Ogungbemi, Kayode M.
2018-01-01
Objective: We estimated the average annual cost per patient of ART per facility (unit cost) in Nigeria, described the variation in costs across facilities, and identified factors associated with this variation. Methods: We used facility-level data from 80 facilities in Nigeria, collected between December 2014 and May 2015. We estimated unit costs at each facility as the ratio of total costs (the sum of costs of staff, recurrent inputs and services, capital, training, laboratory tests, and antiretroviral and TB treatment drugs) to the annual number of patients. We applied linear regressions to estimate factors associated with ART cost per patient. Results: The unit ART cost in Nigeria was $157 USD nationally, and the facility-level mean was $231 USD. The study found wide variability in unit costs across facilities. Variations in costs were explained by number of patients, level of care, task shifting (shifting tasks from doctors to less specialized staff, mainly nurses, to provide ART) and provider's competence. The study illuminated the potentially important role that management practices can play in improving the efficiency of ART services. Conclusions: Our study identifies characteristics of services associated with the most efficient implementation of ART services in Nigeria. These results will help design efficient program scale-up to deliver comprehensive HIV services in Nigeria by distinguishing features linked to lower unit costs. PMID:29718906
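The unit-cost definition, total facility cost divided by the annual number of patients, is simple enough to sketch. The figures below are hypothetical, for illustration only, not data from the study.

```python
def unit_cost(staff, recurrent, capital, training, lab, drugs, patients):
    """Facility-level annual ART cost per patient: total cost / patient count."""
    return (staff + recurrent + capital + training + lab + drugs) / patients

# Hypothetical facility -- illustrative figures, not values from the study
print(unit_cost(50_000, 20_000, 10_000, 5_000, 15_000, 60_000, 1_000))  # 160.0
```

Because the denominator is patient volume, facilities with more patients tend toward lower unit costs, which is consistent with the economies of scale the study reports.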
An Estimate of the North Atlantic Basin Tropical Cyclone Activity for the 2010 Hurricane Season
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2010-01-01
Estimates are presented for the tropical cyclone activity expected for the 2010 North Atlantic basin hurricane season. It is anticipated that the 2010 season will be more active than the 2009 season, reflecting increased frequencies more akin to those of the current more active phase that has been in vogue since 1995. Averages (+/-1 sd) during the current more active phase are 14.5+/-4.7, 7.8+/-3.2, 3.7+/-1.8, and 2+/-2, respectively, for the number of tropical cyclones (NTC), the number of hurricanes (NH), the number of major hurricanes (NMH), and the number of United States (U.S.) land-falling hurricanes (NUSLFH). Based on the "usual" behavior of the 10-yma parametric first differences, one expects NTC = 19+/-2, NH = 14+/-2, NMH = 7+/-2, and NUSLFH = 4+/-2 for the 2010 hurricane season; however, based on the "best guess" 10-yma values of surface-air temperature at the Armagh Observatory (Northern Ireland) and the Oceanic Nino Index, one expects NTC ≥ 16, NH ≥ 14, NMH ≥ 7, and NUSLFH ≥ 6.
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.
1994-05-01
A Picture Archiving and Communications System (PACS) must be able to support the image rate of the medical treatment facility. In addition, the PACS must have adequate working-storage and archive-storage capacity. The calculation of the number of images per minute and of the capacity of working storage and archive storage is discussed. The calculation takes into account the distribution of images over the different sizes of radiological images, the distribution between inpatients and outpatients, and the distribution over plain-film CR images and other modality images. The indirect clinical image load is difficult to estimate and is considered in some detail. The result of the exercise for a particular hospital is an estimate of the average size of the images and exams on the system, the number of gigabytes of working storage, the number of images moved per minute, the size of the archive in gigabytes, and the number of images to be moved by the archive per minute. The types of storage required to support these image rates and capacities are discussed.
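The sizing calculation described above can be sketched as back-of-the-envelope arithmetic. Every figure below is an illustrative assumption, not a value from the paper.

```python
# Back-of-the-envelope PACS sizing; all inputs are illustrative assumptions
exams_per_day = 400
images_per_exam = 10       # averaged over CR and other modalities
mb_per_image = 8           # average image size in megabytes
reading_hours = 8          # hours per working day the system is loaded
working_days = 250         # working days archived per year

daily_gb = exams_per_day * images_per_exam * mb_per_image / 1024
images_per_minute = exams_per_day * images_per_exam / (reading_hours * 60)
annual_archive_gb = daily_gb * working_days

print(round(daily_gb, 2), round(images_per_minute, 2), round(annual_archive_gb, 1))
# 31.25 8.33 7812.5
```

A real sizing exercise would further split these rates by modality and by inpatient versus outpatient mix, as the paper describes, but the structure of the calculation is the same.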
Agarwal, Sunil K.; Wruck, Lisa; Quibrera, Miguel; Matsushita, Kunihiro; Loehr, Laura R.; Chang, Patricia P.; Rosamond, Wayne D.; Wright, Jacqueline; Heiss, Gerardo; Coresh, Josef
2016-01-01
Estimates of the numbers and rates of acute decompensated heart failure (ADHF) hospitalization are central to understanding health-care utilization and efforts to improve patient care. We comprehensively estimated the frequency, rate, and trends of ADHF hospitalization in the United States. Based on Atherosclerosis Risk in Communities (ARIC) Study surveillance adjudicating 12,450 eligible hospitalizations during 2005–2010, we developed prediction models for ADHF separately for 3 International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code 428 discharge diagnosis groups: 428 primary, 428 nonprimary, or 428 absent. We applied the models to data from the National Inpatient Sample (11.5 million hospitalizations of persons aged ≥55 years with eligible ICD-9-CM codes), an all-payer, 20% probability sample of US community hospitals. The average estimated number of ADHF hospitalizations per year was 1.76 million (428 primary, 0.80 million; 428 nonprimary, 0.83 million; 428 absent, 0.13 million). During 1998–2004, the rate of ADHF hospitalization increased by 2.0%/year (95% confidence interval (CI): 1.8, 2.5) versus a 1.4%/year (95% CI: 0.8, 2.1) increase in code 428 primary hospitalizations (P < 0.001). In contrast, during 2005–2011, numbers of ADHF hospitalizations were stable (−0.5%/year; 95% CI: −1.4, 0.3), while the numbers of 428-primary hospitalizations decreased by −1.5%/year (95% CI: −2.2, −0.8) (P for contrast = 0.03). In conclusion, the estimated number of hospitalizations with ADHF is approximately 2 times higher than the number of hospitalizations with ICD-9-CM code 428 in the primary position. The trend increased more steeply prior to 2005 and was relatively flat after 2005. PMID:26895710
Genetic architecture of autosome-mediated hybrid male sterility in Drosophila.
Marín, I
1996-04-01
Several estimators have been developed for assessing the number of sterility factors in a chromosome based on the sizes of fertile and sterile introgressed fragments. Assuming that two factors are required for producing sterility, simulations show that one of these, twice the inverse of the relative size of the largest fertile fragment, provides good average approximations when as few as five fertile fragments are analyzed. The estimators have been used for deducing the number of factors from previous data on several pairs of species. A particular result contrasts with the authors' interpretations: instead of the high number of sterility factors suggested, only a few per autosome are estimated in both reciprocal crosses involving Drosophila buzzatii and D. koepferae. It has been possible to map these factors, between three and six per chromosome, in the autosomes 3 and 4 of these species. Out of 203 introgressions of different fragments or combinations of fragments, the outcome of at least 192 is explained by the mapped zones. These results suggest that autosome-mediated sterility in the male hybrids of these species is mediated by a few epistatic factors, similarly to X-mediated sterility in the hybrids of other Drosophila species.
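The estimator discussed above, twice the inverse of the relative size of the largest fertile fragment, can be exercised in a toy simulation. The fragment model below is our own construction under the stated assumption that co-introgression of at least two factors produces sterility; it is not the authors' code.

```python
import random

def estimate_factors(largest_fertile_fragment):
    """The abstract's estimator: twice the inverse of the relative size
    of the largest fertile introgressed fragment."""
    return 2.0 / largest_fertile_fragment

def simulate_largest_fertile(n_factors, n_fragments, rng):
    """Toy fragment model (our own construction): drop n_factors points on
    a unit-length chromosome; a random introgressed fragment is fertile if
    it covers fewer than two factors; return the largest fertile fragment."""
    factors = [rng.random() for _ in range(n_factors)]
    fertile = []
    while not fertile:
        for _ in range(n_fragments):
            a, b = sorted((rng.random(), rng.random()))
            if sum(a <= f <= b for f in factors) < 2:
                fertile.append(b - a)
    return max(fertile)

rng = random.Random(42)
x = simulate_largest_fertile(n_factors=4, n_fragments=20, rng=rng)
print(round(estimate_factors(x), 1))  # rough estimate of the 4 planted factors
```

The intuition behind the estimator: with n factors spread along the chromosome, the largest fragment that can cover at most one factor spans roughly 2/n of its length, so n is approximated by 2 divided by that fragment's relative size.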
van Dijk, Jan
2013-01-01
Eradicating disease from livestock populations involves the balancing act of removing sufficient numbers of diseased animals without removing too many healthy individuals in the process. As ever more tests for bovine tuberculosis (BTB) are carried out on the UK cattle herd, and each positive herd test triggers more testing, the question arises whether ‘false positive’ results contribute significantly to the measured BTB prevalence. Here, this question is explored using simple probabilistic models of test behaviour. When the screening test is applied to the average UK herd, the estimated proportion of test-associated false positive new outbreaks is highly sensitive to small fluctuations in screening test specificity. Estimations of this parameter should be updated as a priority. Once outbreaks have been confirmed in screening-test positive herds, the following rounds of intensive testing with more sensitive, albeit less specific, tests are highly likely to remove large numbers of false positive animals from herds. Despite this, it is unlikely that significantly more truly infected animals are removed. BTB test protocols should become based on quantified risk in order to prevent the needless slaughter of large numbers of healthy animals. PMID:23717517
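The sensitivity of false-positive outbreaks to screening-test specificity can be illustrated with a simple herd-level calculation; all parameter values below are assumptions for illustration, not estimates from the paper.

```python
def herd_false_positive_prob(n_animals, specificity):
    """Probability that a fully disease-free herd returns at least one
    reactor when every animal is screened once (independent tests assumed)."""
    return 1.0 - specificity ** n_animals

def false_positive_outbreak_share(prev, n_animals, herd_sens, specificity):
    """Among herds declared positive, the share that are truly disease-free
    (a simple Bayes calculation at the herd level)."""
    p_fp = (1.0 - prev) * herd_false_positive_prob(n_animals, specificity)
    p_tp = prev * herd_sens
    return p_fp / (p_fp + p_tp)

# A specificity drop from 99.95% to 99.9% per animal noticeably inflates
# the false-positive share of new "outbreaks" (all values illustrative):
for sp in (0.9995, 0.999):
    share = false_positive_outbreak_share(prev=0.05, n_animals=100,
                                          herd_sens=0.8, specificity=sp)
    print(sp, round(share, 2))  # 0.54 at sp=0.9995, 0.69 at sp=0.999
```

Because a herd test multiplies the per-animal false-positive chance across every animal screened, even tiny changes in specificity shift the herd-level result substantially, which is the paper's point about updating specificity estimates.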
Vladutiu, Catherine J; Casteel, Carri; Nocera, Maryalice; Harrison, Robert; Peek-Asa, Corinne
2016-01-01
In the rapidly growing home health and hospice industry, little is known about workplace violence prevention (WVP) training and violent events. We examined the characteristics of WVP training and estimated violent event rates among 191 home health and hospice care providers from six agencies in California. Training characteristics were identified from the Occupational Safety and Health Administration guidelines. Rates were estimated as the number of violent events divided by the total number of home visit hours. Between 2008 and 2009, 66.5% (n = 127) of providers reported receiving WVP training when newly hired or as recurrent training. On average, providers rated the quality of their training as 5.7 (1 = poor to 10 = excellent). Among all providers, there was an overall rate of 17.1 violent events per 1,000 visit-hours. Efforts to increase the number of home health care workers who receive WVP training and to improve training quality are needed. © 2015 Wiley Periodicals, Inc.
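The rate definition used in the study is straightforward to compute; the event and visit-hour counts in the example are assumed for illustration.

```python
def events_per_1000_visit_hours(n_events, total_visit_hours):
    """Rate definition from the study: violent events divided by total
    home-visit hours, scaled to 1,000 visit-hours."""
    return 1000.0 * n_events / total_visit_hours

# Illustrative check against the reported overall rate of 17.1,
# e.g. 171 events over 10,000 visit-hours (numbers assumed):
print(events_per_1000_visit_hours(171, 10_000))  # → 17.1
```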
Patient dose estimation from CT scans at the Mexican National Neurology and Neurosurgery Institute
NASA Astrophysics Data System (ADS)
Alva-Sánchez, Héctor; Reynoso-Mejía, Alberto; Casares-Cruz, Katiuzka; Taboada-Barajas, Jesús
2014-11-01
In the radiology department of the Mexican National Institute of Neurology and Neurosurgery, a dedicated institute in Mexico City, on average 19.3 computed tomography (CT) examinations are performed daily on hospitalized patients for neurological disease diagnosis, control scans and follow-up imaging. The purpose of this work was to estimate the effective dose received by hospitalized patients who underwent a diagnostic CT scan using typical effective dose values for all CT types and to obtain the estimated effective dose distributions received by surgical and non-surgical patients. Effective patient doses were estimated from values per study type reported in the applications guide provided by the scanner manufacturer. This retrospective study included all hospitalized patients who underwent a diagnostic CT scan between 1 January 2011 and 31 December 2012. A total of 8777 CT scans were performed in this two-year period. Simple brain scan was the CT type performed the most (74.3%) followed by contrasted brain scan (6.1%) and head angiotomography (5.7%). The average number of CT scans per patient was 2.83; the average effective dose per patient was 7.9 mSv; the mean estimated radiation dose was significantly higher for surgical (9.1 mSv) than non-surgical patients (6.0 mSv). Three percent of the patients had 10 or more brain CT scans and exceeded the organ radiation dose threshold set by the International Commission on Radiological Protection for deterministic effects of the eye-lens. Although radiation patient doses from CT scans were in general relatively low, 187 patients received a high effective dose (>20 mSv) and 3% might develop cataract from cumulative doses to the eye lens.
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses, applying various wage estimates to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6% were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where average wage overestimated wages by 15%, and for 40-year-olds, where it underestimated wages by 14%. Large differences in projecting productivity losses exist when the average wage is applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8 and 15% for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14%. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
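The direction of the biases reported above can be reproduced with a toy human-capital calculation; the age-wage profile, working ages, and discount rate below are assumptions for illustration, not the ONS data used in the paper.

```python
WORK_START, WORK_END = 18, 68  # assumed working ages

def age_wage(age):
    """Hypothetical annual age-wage profile (toy numbers, not ONS data):
    wages rise to a mid-career peak and decline toward retirement."""
    if not WORK_START <= age < WORK_END:
        return 0.0
    t = age - WORK_START
    return 20_000 + 1_000 * t - 20 * t * t

average_wage = (sum(age_wage(a) for a in range(WORK_START, WORK_END))
                / (WORK_END - WORK_START))

def flat_wage(age):
    return average_wage if WORK_START <= age < WORK_END else 0.0

def lifetime_loss(onset_age, wage_fn, discount=0.03):
    """Human capital approach: present value, at disease onset, of wages
    lost from onset (or workforce entry) until retirement."""
    return sum(wage_fn(a) / (1 + discount) ** (a - onset_age)
               for a in range(max(onset_age, WORK_START), WORK_END))

# Childhood onset: the flat average overstates the loss, because the
# below-average early-career wages are the least heavily discounted.
child_overstated = lifetime_loss(5, flat_wage) > lifetime_loss(5, age_wage)
# Mid-career onset: the flat average understates the loss.
midlife_understated = lifetime_loss(40, flat_wage) < lifetime_loss(40, age_wage)
print(child_overstated, midlife_understated)  # → True True
```

With discounting, whichever ages lie closest to onset dominate the present value, so a flat average wage is biased whenever the true wage near onset differs from the lifetime mean, matching the over/underestimation pattern the study reports.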
Mountain goat abundance and population trends in the Olympic Mountains, Washington, 2011
Jenkins, Kurt; Happe, Patricia; Griffin, Paul C.; Beirne, Katherine; Hoffman, Roger; Baccus, William
2011-01-01
We conducted an aerial helicopter survey between July 18 and July 25, 2011, to estimate abundance and trends of introduced mountain goats (Oreamnos americanus) in the Olympic Mountains. The survey was the first since we developed a sightability correction model in 2008, which provided the means to estimate the number of mountain goats present in the surveyed areas and not seen during the aerial surveys, and to adjust for undercounting biases. Additionally, the count was the first since recent telemetry studies revealed that the previously defined survey zone, which was delineated at lower elevations by the 1,520-meter elevation contour, did not encompass all lands used by mountain goats during summer. We redefined the lower elevation boundary of survey units before conducting the 2011 surveys in an effort to more accurately estimate the entire mountain goat population. We surveyed 39 survey units, comprising 39 percent of the 59,615-hectare survey area. We estimated a mountain goat population of 344±44 (standard error, SE) in the expanded survey area. Based on this level of estimation uncertainty, the 95-percent confidence interval ranged from 258 to 430 mountain goats at the time of the survey. To permit comparisons of mountain goat populations between the 2004 and 2011 surveys, we recomputed population estimates derived from the 2004 survey using the newly developed bias correction methods, and we computed the 2004 and 2011 surveys based on comparable survey zone definitions (for example, using the boundaries of the 2004 survey). The recomputed estimates of mountain goat populations were 217±19 (SE) in 2004 and 303±41 (SE) in 2011. The difference between the current 2011 population estimate (344±44 [SE]) and the recomputed 2011 estimate (303±41 [SE]) reflects the number of mountain goats counted in the expanded lower elevation portions of the survey zone added in 2011.
We conclude that the population of mountain goats has increased in the Olympic Mountains at an average rate of 4.9±2.2(SE) percent annually since 2004. We caution that the estimated rate of population growth may be conservative if severe spring weather deterred some mountain goats from reaching the high-elevation survey areas during the 2011 surveys. If the estimated average rate of population growth were to remain constant in the future, then the population would double in approximately 14-15 years.
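The reported growth rate and doubling time are mutually consistent, as a quick check on the survey's own numbers shows:

```python
import math

# Growth rate implied by the recomputed, zone-comparable estimates:
n_2004, n_2011, years = 217, 303, 7
annual_growth = (n_2011 / n_2004) ** (1 / years) - 1  # ≈ 0.049, i.e. 4.9%/yr

# At a constant 4.9%/yr the population doubles in ln(2)/ln(1.049) years:
doubling_time = math.log(2) / math.log(1.049)
print(round(100 * annual_growth, 1), round(doubling_time, 1))  # → 4.9 14.5
```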
Estimating the parasitaemia of Plasmodium falciparum: experience from a national EQA scheme
2013-01-01
Background To examine performance of the identification and estimation of percentage parasitaemia of Plasmodium falciparum in stained blood films distributed in the UK National External Quality Assessment Scheme (UKNEQAS) Blood Parasitology Scheme. Methods Analysis of performance for the diagnosis and estimation of the percentage parasitaemia of P. falciparum in Giemsa-stained thin blood films was made over a 15-year period to look for trends in performance. Results An average of 25% of participants failed to estimate the percentage parasitaemia, 17% overestimated and 8% underestimated, whilst 5% misidentified the malaria species present. Conclusions Although the results achieved by participants for other blood parasites have shown an overall improvement, the level of performance for estimation of the parasitaemia of P. falciparum remains unchanged over 15 years. Possible reasons include incorrect calculation, not examining the correct part of the film and not examining an adequate number of microscope fields. PMID:24261625
Dexter, Franklin; Epstein, Richard H; Dutton, Richard P; Kordylewski, Hubert; Ledolter, Johannes; Rosenberg, Henry; Hindman, Bradley J
2016-12-01
Anesthesiologists providing care during off hours (ie, weekends or holidays, or cases started during the evening or late afternoon) are more likely to care for patients at greater risk of sustaining major adverse events than when they work during regular hours (eg, Monday through Friday, from 7:00 AM to 2:59 PM). We consider the logical inconsistency of using subspecialty teams during regular hours but not during weekends or evenings. We analyzed data from the Anesthesia Quality Institute's National Anesthesia Clinical Outcomes Registry (NACOR). Among the hospitals in the United States, we estimated the average number of common types of anesthesia procedures (ie, diversity measured as inverse of Herfindahl index), and the average difference in the number of common procedures between 2 off-hours periods (regular hours versus weekends, and regular hours versus evenings). We also used NACOR data to estimate the average similarity in the distributions of procedures between regular hours and weekends and between regular hours and evenings in US facilities. Results are reported as mean ± standard error of the mean among 399 facilities nationwide with weekend cases. The distributions of common procedures were moderately similar (ie, not large, <.8) between regular hours and evenings (similarity index .59 ± .01) and between regular hours and weekends (similarity index, .55 ± .02). For most facilities, the number of common procedures differed by <5 procedures between regular hours and evenings (74.4% of facilities, P < .0001) and between regular hours and weekends (64.7% of facilities, P < .0001). The average number of common procedures was 13.59 ± .12 for regular hours, 13.12 ± .13 for evenings, and 9.43 ± .13 for weekends. The pairwise differences by facility were .13 ± .07 procedures (P = .090) between regular hours and evenings and 3.37 ± .12 procedures (P < .0001) between regular hours and weekends. 
In contrast, the differences were -5.18 ± .12 and 7.59 ± .13, respectively, when calculated using nationally pooled data. This was because the numbers of common procedures were 32.23 ± .05, 37.41 ± .11, and 24.64 ± .12 for regular hours, evenings, and weekends, respectively (ie, >2x the number of common procedures calculated by facility). The numbers of procedures commonly performed at most facilities are fewer in number than those that are commonly performed nationally. Thus, decisions on anesthesia specialization should be based on quantitative analysis of local data rather than national recommendations using pooled data. By facility, the number of different procedures that take place during regular hours and off hours (diversity) is essentially the same, but there is only moderate similarity in the procedures performed. Thus, at many facilities, anesthesiologists who work principally within a single specialty during regular work hours will likely not have substantial contemporary experience with many procedures performed during off hours.
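The diversity measure used above, the inverse of the Herfindahl index, gives the "effective number" of common procedure types; the case counts below are made up for illustration.

```python
def procedure_diversity(case_counts):
    """Effective number of common procedure types: the inverse of the
    Herfindahl index of the case-mix proportions, as used in the study."""
    total = sum(case_counts)
    herfindahl = sum((c / total) ** 2 for c in case_counts)
    return 1.0 / herfindahl

# Toy case mixes (counts assumed for illustration): four equally common
# procedures give a diversity of exactly 4 ...
print(procedure_diversity([25, 25, 25, 25]))  # → 4.0
# ... while concentration in one procedure lowers the effective number.
print(round(procedure_diversity([70, 10, 10, 10]), 2))  # → 1.92
```

This also explains why nationally pooled data show more "common procedures" than any single facility: pooling flattens the case-mix proportions, shrinking the Herfindahl index and inflating its inverse.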
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
Thanh, Nguyen Xuan; Jonsson, Egon
2014-01-01
To estimate the annual health services utilization (HSU) cost per person with fetal alcohol spectrum disorder (FASD) by sex and age, the lifetime HSU cost per person with FASD by sex, and the annual HSU cost of FASD for Alberta by sex. The HSU costs of FASD, including physician, outpatient, and inpatient services, were described by sex and age. The costs per person-year were estimated by multiplying the average number of hospitalizations, outpatient visits, and physician visits per person-year by the average cost of each service. The annual HSU cost of FASD for Alberta was estimated by multiplying the annual HSU cost per person with FASD by the number of people living with FASD in Alberta in 2012. The lifetime HSU cost per person with FASD was estimated by sex for several lifespans ranging from 10 to 70 years. The annual cost of HSU for people with FASD in Alberta was $259 million, of which fetal alcohol syndrome (FAS) accounted for 26%. The annual HSU costs per person with FAS and FASD were $6,200 and $5,600, respectively. Compared with the general population, the incremental annual HSU cost per person is $4,100 for FAS and $3,400 for FASD. The lifetime (70 years) HSU cost per person with FAS was $506,000 and with FASD was $245,000. Males had higher HSU costs than females. HSU costs of FAS and FASD varied greatly by age group. The findings suggest that FASD is a public health issue in Alberta; they can inform economic evaluations of FASD intervention and/or prevention in the province.
Thomas, M E; Klinkenberg, D; Ejeta, G; Van Knapen, F; Bergwerff, A A; Stegeman, J A; Bouma, A
2009-10-01
An important source of human salmonellosis is the consumption of table eggs contaminated with Salmonella enterica serovar Enteritidis. Optimization of the various surveillance programs currently implemented to reduce human exposure requires knowledge of the dynamics of S. Enteritidis infection within flocks. The aim of this study was to provide parameter estimates for a transmission model of S. Enteritidis in laying-type chicken flocks. An experiment was carried out with 60 pairs of laying hens. Per pair, one hen was inoculated with S. Enteritidis and the other was contact exposed. After inoculation, cloacal swab samples from all hens were collected over 18 days and tested for the presence of S. Enteritidis. On the basis of this test, it was determined if and when each contact-exposed hen became colonized. A transmission model including a latency period of 1 day and a slowly declining infectivity level was fitted. The mean initial transmission rate was estimated to be 0.47 (95% confidence interval [CI], 0.30 to 0.72) per day. The reproduction number R(0), the average number of hens infected by one colonized hen in a susceptible population, was estimated to be 2.8 (95% CI, 1.9 to 4.2). The generation time, the average time between colonization of a "primary" hen and colonization of contact-exposed hens, was estimated to be 7.0 days (95% CI, 5.0 to 11.6 days). Simulations using these parameters showed that a flock of 20,000 hens would reach a maximum colonization level of 92% within 80 days after colonization of the first hen. These results can be used, for example, to evaluate the effectiveness of control and surveillance programs and to optimize these programs in a cost-benefit analysis.
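As a rough cross-check of the abstract's numbers, the standard epidemic final-size relation applied to R0 = 2.8 predicts a maximum colonization level close to the reported 92%. This is our own check, not the authors' simulation, which additionally modeled a 1-day latency and slowly declining infectivity.

```python
import math

def final_size(r0, tol=1e-10):
    """Expected final fraction colonized from the standard final-size
    relation z = 1 - exp(-r0 * z), solved by fixed-point iteration."""
    z = 0.5
    while True:
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

# R0 = 2.8 from the pair experiment is consistent with the reported ~92%
# maximum colonization level for a large flock:
print(round(final_size(2.8), 2))  # → 0.92
```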
Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey
2014-01-01
Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required. PMID:24808945
Particle number concentrations near the Rome-Ciampino city airport
NASA Astrophysics Data System (ADS)
Stafoggia, M.; Cattani, G.; Forastiere, F.; Di Menno di Bucchianico, A.; Gaeta, A.; Ancona, C.
2016-12-01
Human exposure to ultrafine particles (UFP) has been postulated to be associated with adverse health effects, and there is interest regarding possible measures to reduce primary emissions. One important source of UFP is airport activity, with aircraft take-offs being the most relevant contributor. We implemented two measurement campaigns of total particle number concentrations (PNC), a proxy for UFP, near a medium-size airport in central Italy. One-minute PNC averages were collected in June 2011 and January 2012 concurrently with 30-min average meteorological data on temperature and wind speed/direction. Data on minute-specific take-offs and landings were obtained from the airport authorities. We applied statistical regression models to relate PNC data to the presence of aircraft activities while adjusting for time trends and meteorology, and estimated the increases in PNC ±15 min before and after take-offs and landings. We repeated the analyses considering prevalent wind direction and by size of the aircraft. We estimated PNC increases of 5400 particles/cm3/minute during the 15 min before and after take-offs, with a peak of 19,000 particles/cm3/minute within 5 min after take-offs. Corresponding figures for landings were 1300 and 1000 particles, respectively. The highest PNC estimates were obtained when the prevailing wind came from the runway direction, and led to estimated PNC increases of 60,000 particles/cm3/minute within 5 min after take-offs. No major differences were noted among the exhausts of different aircraft types. The area surrounding Ciampino airport is densely inhabited, raising concerns about the potential adverse effects of long-term and short-term exposure to airport-borne UFP. Close monitoring of airport activities and emissions is mandatory to reduce the public health impact of the airport on the nearby population.
Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.
Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk
2014-01-01
This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required.
Cañas-Álvarez, J J; Gónzalez-Rodríguez, A; Martín-Collado, D; Avilés, C; Altarriba, J; Baro, J A; De la Fuente, L F; Díaz, C; Molina, A; Varona, L; Piedrafita, J
2014-10-01
Demographic and pedigree analyses describe the structure and dynamics of livestock populations. We studied information recorded in the herdbooks of Asturiana de los Valles (AV; N = 458,806), Avileña-Negra Ibérica (ANI; N = 204,623), Bruna dels Pirineus (BP; N = 62,138), Morucha (Mo; N = 65,350), Pirenaica (Pi; N = 217,428), Retinta (Re; N = 135,300), and Rubia Gallega (RG; N = 235,511) beef breeds from their creation until 2009. All breeds have increased in the number of registered cows in recent years. In all breeds, herds do not behave as isolated entities and a high rate of exchange of breeding males between herds exists. A percentage of herds (12-52%) make some type of selection and sell bulls to other herds. There were large differences in average number of progeny per bull, ranging from 15.6 (AV) to 373.7 animals (RG, with a high incidence of AI). Generation interval estimates ranged from 4.7 (AV) to 7.6 (RG) yr in the sire pathway and from 5.95 (AV) to 7.8 (Mo) yr in the dam pathway. Density of pedigrees varied among breeds, with Pi, ANI, and Re having the densest pedigrees, with average completeness indexes of more than 96% in the first generation and 80% when 6 generations were considered. A general increase in average inbreeding was observed in all breeds in the years analyzed. For animals born in 2009, average inbreeding coefficients ranged from 0.6 (BP) to 7.2% (Re) when all animals were considered and from 3.6 (Pi) to 17.6% (BP) when only inbred animals were considered. Due to the lack of completeness of pedigrees in most populations, inbreeding coefficients may be considered as a lower bound of the true parameters. The proportion of inbred animals tended to increase in the periods analyzed in all breeds. Differences between inbreeding and coancestry rates (except in RG) suggest the presence of population structure.
Effective population size (Ne) based on the inbreeding rate estimated by regression ranged from 43 (Re) to 378 (BP), whereas Ne estimates based on coancestry were greater, ranging from 100 (RG) to 9,985 (BP). These facts suggest that an adequate mating policy can help to monitor inbreeding so as not to lose genetic variability. The effective number of ancestors in 2009 ranged from 42 (RG) to 220 (AV) for six of the breeds, with BP having a much greater value, and was lower than the effective number of founders in all breeds, suggesting the existence of bottlenecks.
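The Ne figures above follow from the classical relation between the per-generation inbreeding rate and effective population size; the rates used below were chosen to illustrate the reported range of 43-378 and are not the study's estimates.

```python
def effective_population_size(delta_f):
    """Ne from the per-generation inbreeding rate via the classical
    relation Ne = 1 / (2 * delta_f)."""
    return 1.0 / (2.0 * delta_f)

# Per-generation inbreeding rates of ~1.16% and ~0.13% (illustrative
# values) bracket the Ne range of 43-378 reported for these breeds:
print(round(effective_population_size(0.0116), 1))   # → 43.1
print(round(effective_population_size(0.00132), 1))  # → 378.8
```

The same formula applied to the coancestry rate instead of the inbreeding rate yields the larger coancestry-based Ne values, which is why the two sets of estimates diverge when the population is structured.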
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels.
However, when this step is preceded by spatial interpolation to fill in missing values in non-sampled areas, accuracy improves markedly. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
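The area-weighted aggregation step described above can be sketched in a few lines; the parish names, areas, and cattle densities below are invented for illustration, not survey data.

```python
# Hypothetical sketch: area-weighted aggregation of parish-level cattle
# density estimates to a single district value. All numbers are invented.

def aggregate_area_weighted(parishes):
    """District density as the mean of parish densities weighted by area (km^2)."""
    total_area = sum(p["area"] for p in parishes)
    return sum(p["density"] * p["area"] for p in parishes) / total_area

parishes = [
    {"name": "A", "area": 120.0, "density": 35.0},  # cattle per km^2
    {"name": "B", "area": 80.0,  "density": 55.0},
    {"name": "C", "area": 200.0, "density": 20.0},
]

district_density = aggregate_area_weighted(parishes)
district_total = district_density * sum(p["area"] for p in parishes)
```

Because parish-level under- and over-estimates offset each other in the weighted mean, the aggregated figure is typically more accurate than any single parish estimate, which is the effect the abstract reports.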
Aarnisalo, Kaarina; Vihavainen, Elina; Rantala, Leila; Maijala, Riitta; Suihko, Maija-Liisa; Hielm, Sebastian; Tuominen, Pirkko; Ranta, Jukka; Raaska, Laura
2008-02-10
Microbial risk assessment provides a means of estimating consumer risks associated with food products. The methods can also be applied at the plant level. In this study, results of microbiological analyses were used to develop a robust single-plant-level risk assessment. Furthermore, the prevalence and numbers of Listeria monocytogenes in marinated broiler legs in Finland were estimated. These estimates were based on information on the prevalence, numbers and genotypes of L. monocytogenes in 186 marinated broiler legs from 41 retail stores. The products were from the three main Finnish producers, which produce 90% of all marinated broiler legs sold in Finland. The prevalence and numbers of L. monocytogenes were estimated by Monte Carlo simulation using WinBUGS, but the model is applicable to any software featuring standard probability distributions. The estimated mean annual number of L. monocytogenes-positive broiler legs sold in Finland was 7.2 × 10(6), with a 95% credible interval (CI) of 6.7 × 10(6)-7.7 × 10(6). That would be 34% ± 1% of the marinated broiler legs sold in Finland. The mean number of L. monocytogenes in marinated broiler legs estimated at the sell-by date was 2 CFU/g, with a 95% CI of 0-14 CFU/g. Producer-specific L. monocytogenes strains were recovered from the products throughout the year, which emphasizes the importance of characterizing the isolates and identifying strains that may cause problems as part of risk assessment studies. As the levels of L. monocytogenes were low, the risk of acquiring listeriosis from these products proved to be insignificant. Consequently, there was no need for a thorough national-level risk assessment. However, an approach using worst-case and average point estimates was applied to produce an example of a single-producer-level risk assessment based on limited data. This assessment also indicated that the risk from these products was low. 
The risk-based approach presented in this work can provide estimation of public health risk on which control measures at the plant level can be based.
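A heavily simplified stand-in for the Monte Carlo prevalence estimate described above can be written with standard probability distributions alone, as the abstract notes. This sketch is not the authors' WinBUGS model: the number of positive samples (62) and the annual sales figure are assumed for illustration, and a flat prior is used.

```python
import random

random.seed(1)

# Simplified stand-in for the described model: draw the posterior
# prevalence of L. monocytogenes-positive legs from Beta(s+1, n-s+1)
# (uniform prior), then scale by a hypothetical annual sales figure.
n_sampled, n_positive = 186, 62       # 62 positives is an assumed value
annual_legs_sold = 21_000_000         # hypothetical annual sales

draws = [random.betavariate(n_positive + 1, n_sampled - n_positive + 1)
         for _ in range(20_000)]
draws.sort()

mean_prev = sum(draws) / len(draws)
ci_low = draws[int(0.025 * len(draws))]   # 95% credible interval bounds
ci_high = draws[int(0.975 * len(draws))]

est_positive_legs = mean_prev * annual_legs_sold
```

With these assumed inputs the posterior mean lands near one third of legs positive, the same order as the 34% figure in the abstract; the real model would additionally account for producer and numbers data.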
Orthodontic manpower requirements of Trinidad and Tobago.
Bourne, C O
2012-09-01
A study was done to estimate the orthodontic manpower requirements of Trinidad and Tobago. A questionnaire was administered via e-mail to 9 of 11 orthodontists. Information from a population census, a report on the orthodontic treatment needs of children in Trinidad and Tobago, and this questionnaire was used to calculate the number of orthodontists and chairside orthodontic assistants needed in Trinidad and Tobago. On average, 50 per cent of the 289 patients treated by each orthodontist in Trinidad and Tobago annually are children. Approximately 13 360 patients can be expected to demand orthodontic treatment every year in this country. The number of orthodontists and chairside assistants required to treat these patients was estimated to be 44 and 154, respectively. Currently, Trinidad and Tobago has only a quarter of the number of orthodontists and orthodontic chairside assistants required to treat the number of patients in need. As the demand is relatively high in Trinidad and Tobago and the number of orthodontists has increased slowly and inadequately over the past decade, orthodontists are likely to remain adequately employed and satisfied with their work, unlike dentists who have been in private practice for less than a year.
Number of perceptually distinct surface colors in natural scenes.
Marín-Franch, Iván; Foster, David H
2010-09-30
The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10(3), much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
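The deterministic, cell-counting approach that the abstract contrasts with its information-theoretic estimate can be sketched as follows. The color coordinates and the one-unit cell width are invented stand-ins for points in a perceptually uniform space; real gamut data would replace them.

```python
import random

# Toy version of the counting method: quantize each color to a cubic
# cell one discrimination-threshold unit wide and count occupied cells.
# The random colors here are stand-ins, not scene data.
random.seed(5)
colors = [(random.uniform(0, 100),    # lightness-like axis
           random.uniform(-50, 50),   # two chromatic axes
           random.uniform(-50, 50))
          for _ in range(5_000)]

def cell(color, width=1.0):
    """Index of the threshold-unit cell containing `color`."""
    return tuple(int(v // width) for v in color)

occupied = {cell(c) for c in colors}
n_distinct = len(occupied)   # deterministic count of 'discriminable' colors
```

As the abstract argues, this count ignores how often each color occurs and how uncertain observer judgments are, which is why the information-theoretic estimate of 7.3 × 10(3) comes out much smaller than counts obtained this way.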
Zhou, Y; Jenkins, M E; Naish, M D; Trejos, A L
2016-08-01
The design of a tremor estimator is an important part of designing mechanical tremor suppression orthoses. A number of tremor estimators have been developed and applied under the assumption that tremor is a mono-frequency signal. However, recent experimental studies have shown that Parkinsonian tremor consists of multiple frequencies, and that the second and third harmonics make a large contribution to the tremor. Thus, current estimators may have limited performance in estimating the tremor harmonics. In this paper, a high-order tremor estimation algorithm is proposed and compared with its lower-order counterpart and a widely used estimator, the Weighted-frequency Fourier Linear Combiner (WFLC), using 18 Parkinsonian tremor data sets. The results show that the proposed estimator performs better than its lower-order counterpart and the WFLC. The percentage estimation accuracy of the proposed estimator is 85 ± 2.9%, an average improvement of 13% over the lower-order counterpart. The proposed algorithm holds promise for use in wearable tremor suppression devices.
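The idea of tracking harmonics with a Fourier Linear Combiner can be illustrated with a basic LMS-adaptive version. This is not the paper's algorithm: the fundamental frequency is assumed known (the WFLC additionally adapts it), and the tremor signal, sampling rate, and step size are invented.

```python
import math

# Illustrative harmonic Fourier Linear Combiner (not the authors' exact
# estimator): LMS-adapted sine/cosine weights at the fundamental plus
# its harmonics. Signal parameters below are invented for demonstration.

def flc_estimate(signal, f0, fs, n_harmonics=3, mu=0.02):
    """Track `signal` with `n_harmonics` harmonics of f0 (Hz) at rate fs (Hz)."""
    w = [0.0] * (2 * n_harmonics)            # adaptive Fourier weights
    estimate = []
    for k, s in enumerate(signal):
        t = k / fs
        x = []
        for r in range(1, n_harmonics + 1):  # reference sinusoids
            x.append(math.sin(2 * math.pi * r * f0 * t))
            x.append(math.cos(2 * math.pi * r * f0 * t))
        y = sum(wi * xi for wi, xi in zip(w, x))
        err = s - y                          # LMS prediction error
        w = [wi + 2 * mu * err * xi for wi, xi in zip(w, x)]
        estimate.append(y)
    return estimate

fs, f0 = 100.0, 4.0                          # 4 Hz synthetic "tremor"
tremor = [math.sin(2 * math.pi * 4.0 * k / fs)
          + 0.4 * math.sin(2 * math.pi * 8.0 * k / fs)  # strong 2nd harmonic
          for k in range(1000)]
est = flc_estimate(tremor, f0, fs)
```

A mono-frequency estimator (`n_harmonics=1`) would leave the 8 Hz component entirely unmodeled, which is the limitation the paper's high-order estimator addresses.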
MX Siting Investigation. Geotechnical Evaluation. Aggregate Resources Study, Whirlwind Valley, Utah.
1980-06-06
Boulders and Cobbles: The estimated percentage of boulders and/or cobbles is based on an appraisal of the entire deposit. Cobbles have an average dia...
Limits to the Stability of Pulsar Time
NASA Technical Reports Server (NTRS)
Petit, Gerard
1996-01-01
The regularity of the rotation rate of millisecond pulsars is the underlying hypothesis for using these neutron stars as 'celestial clocks'. Given their remote location in our galaxy and our lack of precise knowledge of the galactic environment, a number of phenomena affect the apparent rotation rate observed on Earth. This paper reviews these phenomena and estimates the order of magnitude of their effects. It concludes that an ensemble pulsar time based on a number of selected millisecond pulsars should have a fractional frequency stability close to 2 x 10(sup -15) for an averaging time of a few years.
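Fractional frequency stability of the kind quoted above is conventionally characterized with the Allan deviation sigma_y(tau). The toy below computes a simple non-overlapping Allan deviation for simulated white frequency noise at the 1e-15 level; it is a generic illustration, not pulsar timing data.

```python
import random

# Toy Allan deviation: for white frequency noise, sigma_y(tau) falls as
# tau^(-1/2), which is why averaging over a few years improves stability.

def allan_deviation(freq, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data,
    block-averaged over `tau_samples` consecutive points."""
    n_blocks = len(freq) // tau_samples
    means = [sum(freq[i * tau_samples:(i + 1) * tau_samples]) / tau_samples
             for i in range(n_blocks)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(len(means) - 1)]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

random.seed(0)
freq = [random.gauss(0.0, 1e-15) for _ in range(10_000)]  # simulated y(t)

ad1 = allan_deviation(freq, 1)     # short averaging time
ad16 = allan_deviation(freq, 16)   # longer averaging time: noise averages down
```

An ensemble pulsar timescale exploits the same averaging effect across several pulsars rather than across time alone.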
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has been shown to provide as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up Table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the noise variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method, the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. 
This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
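A minimal sketch of the Pilot-Guided estimate is shown below, assuming BPSK pilot symbols (the ASM) in AWGN. The frame length, amplitude, and noise level are invented, and the final scaling follows the amplitude-to-noise-variance definition given above; any additional modulation-dependent factor in the LLR scaling is omitted.

```python
import random

# Hedged sketch of the Pilot-Guided method: the amplitude estimate is
# the mean inner product of the received samples with the known ASM
# symbols; the noise variance is the mean squared sample minus the
# squared amplitude. BPSK pilots and channel parameters are invented.
random.seed(42)
A_true, sigma_true = 1.0, 0.5
asm = [random.choice((-1.0, 1.0)) for _ in range(4096)]   # known pilot symbols
rx = [A_true * s + random.gauss(0.0, sigma_true) for s in asm]

A_hat = sum(r * s for r, s in zip(rx, asm)) / len(rx)     # ML amplitude estimate
var_hat = sum(r * r for r in rx) / len(rx) - A_hat ** 2   # noise variance estimate
combining_ratio = A_hat / var_hat   # amplitude-to-noise-variance ratio
```

The Blind method would instead solve its tanh fixed-point equation on the normalized samples by binary search, trading this method's pilot latency for per-frame computation.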
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
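The threshold-crossing detection that underlies the TOA measurements can be illustrated with a toy detector. This is a simplification, not the paper's GLRT scheme: the pulse shape, noise level, and threshold are invented, and the statistical post-processing of repeated measurements is omitted.

```python
import math
import random

# Toy threshold-crossing TOA detector: report the first sample index at
# which the noisy received pulse exceeds a fixed threshold. All signal
# parameters are invented for illustration.
random.seed(9)
true_toa = 250                 # sample index of the first-path pulse peak
pulse = [math.exp(-((k - true_toa) / 10.0) ** 2) for k in range(1000)]
rx = [p + random.gauss(0.0, 0.05) for p in pulse]

threshold = 0.3
toa_hat = next(k for k, v in enumerate(rx) if v > threshold)
```

A single crossing like this is biased early (the rising edge crosses before the peak) and jitters with noise; the GLRT approach in the paper models exactly these statistics over many measurements to recover an accurate TOA.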
Modeling level of urban taxi services using neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Wong, S.C.; Tong, C.O.
1999-05-01
This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables, including the number of licensed taxis, the incremental charge of taxi fare, average occupied taxi journey time, average disposable income, population, and consumer price index; the output consists of a set of endogenous variables, including daily taxi passenger demand, passenger waiting time, vacant taxi headway, average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of the estimation accuracy are made between the neural network model and the simultaneous equations model. The results show that the neural network-based macro taxi model can obtain much more accurate information about the taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by the regulator to assist with decisions concerning restrictions on the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.
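The exogenous-to-endogenous mapping described above can be sketched with a tiny one-hidden-layer network trained by stochastic gradient descent. This is an illustrative stand-in, not the paper's model: the network size, training data (a made-up demand function of three scaled inputs), and learning rate are all invented.

```python
import math
import random

# Toy macro model: one-hidden-layer tanh network mapping three scaled
# exogenous inputs to one endogenous output, trained with plain SGD on
# an invented target function. Everything here is illustrative.
random.seed(7)

def forward(w1, b1, w2, b2, x):
    h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
         for wi, bi in zip(w1, b1)]
    y = sum(w2j * hj for w2j, hj in zip(w2, h)) + b2
    return h, y

n_in, n_hid = 3, 5
w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
b2 = 0.0

# invented training set: "demand" rises with inputs 0 and 2, falls with 1
data = []
for _ in range(200):
    x = [random.random() for _ in range(n_in)]
    y = 0.6 * x[0] - 0.3 * x[1] + 0.5 * x[2]
    data.append((x, y))

lr = 0.05
losses = []
for epoch in range(200):
    total = 0.0
    for x, y in data:
        h, y_hat = forward(w1, b1, w2, b2, x)
        err = y_hat - y
        total += err * err
        for j in range(n_hid):               # backprop for squared error
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            for i in range(n_in):
                w1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err
    losses.append(total / len(data))
```

The paper's model would have six inputs and six outputs and be fitted to observed Hong Kong service data rather than a synthetic target.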
Economic burden of seasonal influenza in the United States.
Putri, Wayan C W S; Muscatello, David J; Stockwell, Melissa S; Newall, Anthony T
2018-05-22
Seasonal influenza is responsible for a large disease and economic burden. Despite the expanding recommendation of influenza vaccination, influenza has continued to be a major public health concern in the United States (U.S.). To evaluate influenza prevention strategies, it is important that policy makers have current estimates of the economic burden of influenza. This study provides an updated estimate of the average annual economic burden of seasonal influenza in the U.S. population in the presence of vaccination efforts. We evaluated estimates of age-specific influenza-attributable outcomes (ill but not medically attended, office-based outpatient visits, emergency department visits, hospitalizations and deaths) and associated productivity loss. Health outcome rates were applied to the 2015 U.S. population and multiplied by the relevant estimated unit costs for each outcome. We evaluated both direct healthcare costs and indirect costs (absenteeism from paid employment), reporting results from both a healthcare system and a societal perspective. Results were presented in five age groups (<5 years, 5-17 years, 18-49 years, 50-64 years and ≥65 years of age). The estimated average annual total economic burden of influenza to the healthcare system and society was $11.2 billion ($6.3-$25.3 billion). Direct medical costs were estimated to be $3.2 billion ($1.5-$11.7 billion) and indirect costs $8.0 billion ($4.8-$13.6 billion). These total costs were based on the estimated average numbers of (1) ill but not medically attended patients (21.6 million), (2) office-based outpatient visits (3.7 million), (3) emergency department visits (0.65 million), (4) hospitalizations (247.0 thousand), (5) deaths (36.3 thousand) and (6) days of productivity lost (20.1 million). This study provides an updated estimate of the total economic burden of influenza in the U.S. 
Although we found a lower total cost than previously estimated, our results confirm that influenza is responsible for a substantial economic burden in the U.S.
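The way the headline figures combine is simple arithmetic: the societal total is the sum of direct medical costs and indirect productivity losses. The figures below are the abstract's own point estimates, collected into one place.

```python
# Recap of the cost arithmetic from the abstract (point estimates only;
# the credible ranges quoted there are omitted here).
direct_costs = 3.2e9      # direct medical costs, USD
indirect_costs = 8.0e9    # absenteeism / productivity losses, USD

total_burden = direct_costs + indirect_costs   # the $11.2 billion total

outcome_counts = {        # average annual outcome counts from the abstract
    "ill, not medically attended": 21.6e6,
    "office-based outpatient visits": 3.7e6,
    "emergency department visits": 0.65e6,
    "hospitalizations": 247.0e3,
    "deaths": 36.3e3,
}
total_events = sum(outcome_counts.values())
```

Note that indirect costs dominate: productivity losses account for roughly 70% of the societal total, which is why the societal perspective gives a much larger figure than the healthcare-system perspective alone.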
NASA Astrophysics Data System (ADS)
Gillespie, Jonathan; Masey, Nicola; Heal, Mathew R.; Hamilton, Scott; Beverland, Iain J.
2017-02-01
Determination of intra-urban spatial variations in air pollutant concentrations for exposure assessment requires substantial time and monitoring equipment. The objective of this study was to establish if short-duration measurements of air pollutants can be used to estimate longer-term pollutant concentrations. We compared 5-min measurements of black carbon (BC) and particle number (PN) concentrations made once per week on 5 occasions, with 4 consecutive 1-week average nitrogen dioxide (NO2) concentrations at 18 locations at a range of distances from busy roads in Glasgow, UK. 5-min BC and PN measurements (averaged over the two 5-min periods at the start and end of a week) explained 40-80%, and 7-64% respectively, of spatial variation in the intervening 1-week NO2 concentrations for individual weeks. Adjustment for variations in background concentrations increased the percentage of explained variation in the bivariate relationship between the full set of NO2 and BC measurements over the 4-week period from 28% to 50% prior to averaging of repeat measurements. The averages of five 5-min BC and PN measurements made over 5 weeks explained 75% and 33% respectively of the variation in average 1-week NO2 concentrations over the same period. The relatively high explained variation observed between BC and NO2 measured on different time scales suggests that, with appropriate steps to correct or average out temporal variations, repeated short-term measurements can be used to provide useful information on longer-term spatial patterns for these traffic-related pollutants.
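The core of the study design, short snapshots regressed against longer-term averages, and the benefit of averaging repeated snapshots, can be reproduced on simulated data. Everything below is invented (site count, noise levels, slope); it is a sketch of the design, not the study's measurements.

```python
import random

# Simulated sketch: at each site, one noisy 5-min "BC" snapshot and the
# average of five repeated snapshots are each regressed against the
# 1-week mean "NO2". Averaging suppresses temporal noise, raising R^2.

def r_squared(x, y):
    """Coefficient of determination for a simple bivariate regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

random.seed(3)
sites = 100
local = [random.uniform(0.0, 5.0) for _ in range(sites)]        # traffic signal

no2 = [10.0 + 4.0 * s + random.gauss(0.0, 1.0) for s in local]  # weekly NO2
bc_single = [s + random.gauss(0.0, 3.0) for s in local]         # one snapshot
bc_avg = [s + sum(random.gauss(0.0, 3.0) for _ in range(5)) / 5
          for s in local]                                       # mean of five

r2_single = r_squared(bc_single, no2)
r2_avg = r_squared(bc_avg, no2)
```

This mirrors the study's finding that five averaged 5-min BC measurements explained substantially more of the weekly NO2 variation than individual snapshots, because short-term temporal fluctuations average out while the spatial (traffic-related) signal remains.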
The Use of Satellite Imagery for Domestic Law Enforcement
2013-12-01
Schillinger, Raymond M. (Naval Postgraduate School, Center for Homeland Defense and Security)
2012-04-01
INSIGHTS, Spring 2012: a publication of the Department of Defense High Performance Computing...
Bridging the Gap (BRIEFING CHARTS)
2007-03-05
Defense Advanced Research Projects Agency, "Bridging the Gap", Dr. Robert F. Leheny, Deputy Director.
Unmanned Systems Roadmap 2007-2032
2007-01-01
...unmanned systems technology to ensure an effective return on the Department's investment. 1. Reconnaissance and Surveillance. Some form of reconnaissance...
Minimization of Thruster Plume Effects on Spacecraft Surfaces
2007-05-01
Approved for public release; distribution is unlimited. The surfaces from user-defined CAD models of the spacecraft are loaded into COLISEUM...
Cholesteric Liquid Crystal Glass Platinum Acetylides
2014-06-01
Krein, M. (AFRL/RXAP); Ziolo, Ronald F.; Arias, Eduardo; Moggio, Ivana (Centro de Investigacion en Quimica Aplicada (CIQA)); Fratini, Albert
Synthesis of Chromophores for Nonlinear Optics Applications
2010-03-12
Centro de Investigacion de Quimica Aplicada, Blvd. Enrique Reyna No. 140, Saltillo, Coahuila, Mexico 25253. AFOSR FA9550-09-1-0017, 12 March 2010.
2007-03-01
Additionally, research shows that many over the past decade have proposed interoperability measures, notable of which have been: 1) the DoD...
Total Ownership Cost a Decade Into the 21st Century
2012-04-30
Approved for public release; distribution is unlimited. Prepared for the Naval Postgraduate School, Graduate School of Business and Public Policy, Monterey, CA 93943.
Improving Balance in TBI Using a Low-Cost Customized Virtual Reality Rehabilitation Tool
2017-10-01
Award number: W81XWH-14-2-0150. Prepared for: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. Distribution statement: Approved for public release.
Development of a Ballistic Impact Detection System
2004-09-01
...body surface remains the largest variable to overcome. The snug fit of the body armour stabilizes the sensors and their response.
NASA Technical Reports Server (NTRS)
Rubincam, David Parry
2012-01-01
Less-than-catastrophic meteoroid impacts over 10(exp 5) years may change the shape of small rubble-pile satellites in binary NEAs, lengthening the average BYORP (binary Yarkovsky-Radzievskii-Paddack) rate of orbital evolution. An estimate of shape-shifting meteoroid fluxes gives numbers close enough to those needed to cause random walks in the semimajor axes of binary systems to warrant further investigation.
2016-09-12
AFRL-RX-WP-JA-2017-0209: Two Beam Energy Exchange in Hybrid Liquid Crystal Cells with Photorefractive Field Controlled Boundary Conditions (Postprint). Contract number FA8650-16-D-5402-0001.